Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- Hi, I am here from 8 years into the future - the first to beta test Google's new… (ytc_UgwjQ91C6…)
- AI doesn't think for itself. It just scrapes up information it's fed and spits i… (ytc_UgwJYTBqM…)
- My guess is Youtube is getting the audience accustomed to that slight AI fake lo… (ytc_UgzHP5mc2…)
- I would honestly argue this test isnt as foreboding as it seems. AI bots are bas… (ytc_Ugw8UEs9l…)
- idk it felt ai assisted. I was at 2x playback and was hearing that static AI ton… (ytc_Ugw2Hr1SZ…)
- Since when is "opinion" legally valid as support evidence ??? Opinion is subject… (ytc_UgxqxKQBM…)
- And art is free too? Yes, it takes time, but that is with any hobby. If you don'… (ytr_UgwdhtKIx…)
- Even scientists don't put 100% faith in AI because the tech is too young to be a… (ytc_Ugws3HA-7…)
Comment
Full cycle analysis and ai will use less energy than mantaining human environmental controls in offices and warehouses and factories, the jobs ai will replace first. Surveillance and predictive enforcement, and proto "mind reading" control systems are the only real issue with ai. As well as what its going to do for human thought, just the next step of google effect on cognition, and information flows in general, not really part of the dialogue. The sustainability argument is weak though. Ai will become more efficient over time as well.
youtube · Cross-Cultural · 2026-01-23T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzJmYJWF-YivliGs-h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_y3iUsfqNHNjwpcd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy3Qb3K93mJf5FoPOJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy1b89AhYlIF928wJV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzAGvGPnn_1ZZO7sLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzjEKWdGF_dnVhWyEF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUh9jInH5FIuQWcaN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw5E2LeEoReVr2ypZ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwb8oBOLu4oXzEslAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzgF6t4XArjNWcRCGJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
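The "look up by comment ID" view above can be sketched as a small parse-and-filter over the batch response: the model returns one JSON array per batch, and each comment's coding is the array entry whose `id` matches. This is a minimal sketch, assuming the raw response is valid JSON exactly as shown; the `lookup_coding` helper name is hypothetical, not part of the dashboard's actual code.

```python
import json

# Raw model output: a JSON array with one coding object per comment.
# IDs and field names are taken verbatim from the response shown above
# (truncated to two entries here for brevity).
raw_response = """
[
 {"id":"ytc_UgzJmYJWF-YivliGs-h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwb8oBOLu4oXzEslAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the batch response and return the coding dict for one comment ID,
    or None if the ID is absent from this batch."""
    entries = json.loads(raw)
    return next((e for e in entries if e["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugwb8oBOLu4oXzEslAV4AaABAg")
print(coding["emotion"])  # indifference
```

The four keyed fields (`responsibility`, `reasoning`, `policy`, `emotion`) map directly onto the rows of the Coding Result table.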