Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I have asked similar question to chat gpt and google bard and I found the google…
ytc_Ugw1hV3_D…
@999NINE99i haven't missed any point, core or otherwise. I didn't say it wasn…
ytr_Ugzqxvi97…
Nothing wrong with AI art for memes, and personal use, just don't try and make p…
ytc_UgwiqjT4C…
I made a character that the AI can't generate lol. No matter how much I try, …
ytr_Ugx3telAb…
This isn't even the AIs fault this time. It is right, in *general* they are not …
ytc_UgwBI5_FO…
All I have to say is if you are worried about AI art..................you probab…
ytc_Ugxz0bmrc…
Diversity is _not_ a virtue... It can be beneficial, but rarely is in most situa…
ytc_UgxoMfnVH…
Guess which country benefits the most from sale of bromide. Pretty deep in the A…
ytc_Ugx8A2g2h…
Comment
"there's the right way, the wrong way and the Max Power way!" "isn't that jut the 'wrong way'?" "yes, but faster!" we know that predictive LLM 'AI' can make mistakes quite frequently, oftentimes kinds of mistakes that humans would never make, but it is faster. so, people who either know the subject, or who can research the subject, need to verify everything that passes through a predictive LLM 'AI' for accuracy. even if the predictive LLM 'AI' is right 99.99999% of the time, we would always need to account for that possibility of error, because it doesn't actually understand what it is saying, it did not reason its way to its conclusion, it only predicts what would likely come next based on its data and parameters. I'm not saying to not ever use it, or that it doesn't have any potential or benefits, just that we need to remain vigilant about verifying any information that it produces. any company or person who uses predictive LLM 'AI' should be held 100% responsible for any actions taken based on the predictive LLM 'AI' 's output. also, while I am on my high horse, sources scraped for data to create LLM's should be well compensated since the value of those sources was, or is being, stolen and the originators of the information are being denied potential revenue by people going onto their scraped site. (people may have gone to Site A instead of Site B, but by using the LLM, people went to neither). if the LLM 'AI' can't compensate its sources because it can't/won't identify its sources, then it should not be allowed to be used. it is just plagiarism and/or theft of intellectual property.
youtube
AI Jobs
2025-05-30T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxPOIVaMsJyzG_a7Kh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwuG6-kRRbKC7TLV694AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyCILISw1s9FecDAKJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzqGmoaxwwqo8x9LxF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgybNRXahwAaPBlZGf54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
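The raw response above is a JSON array of per-comment codes, one object per comment with the five coding dimensions. A minimal sketch of how such output might be parsed and checked before use (the field names come from the array above; the `parse_codes` helper and the idea of dropping incomplete records are illustrative assumptions, not part of the tool itself):

```python
import json

# Sample raw LLM response in the same shape as the array above
# (two records copied from it).
raw = '''[
{"id":"ytc_UgxPOIVaMsJyzG_a7Kh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyCILISw1s9FecDAKJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]'''

# The five fields every coded record is expected to carry.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse the model output and keep only records with all five dimensions.

    Stricter value validation (e.g. checking "emotion" against an allowed
    set) would depend on the codebook, which is not shown here.
    """
    records = json.loads(text)
    return [r for r in records if REQUIRED_KEYS <= r.keys()]

codes = parse_codes(raw)
print(len(codes), codes[0]["emotion"])  # 2 outrage
```

Filtering on the key set rather than raising keeps one malformed record from discarding the whole batch, which matters when the model occasionally emits an incomplete object.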