Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgzX0QAjm… — "Bad thing is that man will destroy everything before AI will. Just look at the p…"
- ytc_UgxpIxEJy… — "People, especially young people, have been coming up with crazy ideas with amazi…"
- ytc_Ugyvhok6x… — "The only profitable usage area of AI I can think of is probably scam. For me it …"
- ytc_UgyfI-4sm… — "Interesting that they didn’t bother to give her arms, but still ensured that she…"
- ytc_UgyIVbq19… — "Human can create a new style, while AI mashes up the preexisting data. Like, Cub…"
- ytr_Ugx-Uccoo… — "Terminator is a fictional movie that would have you believe Skynet ran off a mod…"
- ytr_UgyGLz05E… — "@stallover7463 Yeah right that's what I would have thought. It's not like there…"
- ytc_UgwHwhMj3… — "The manufacturer of this Ai is evil ,they want to replace the humans created by …"
Comment (youtube · AI Governance · 2025-06-23T14:3…)

> Many good points were made, but some parts of AI were left out. He worked at Google, a company that profits from user data, so he likely knew the risks. Living in a highly capitalistic country, it’s hard to avoid the pressure of big business. Still, he made an important point. AI can be dangerous in the wrong hands, but if we train it to help people, we can reduce the risks. Thanks for the video.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyufjiCXOAq61peCFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxUtEL8rK925dv0kKt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzQRmTy62MN9QpI3Cx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYSJvUhdgg_5Nnv5p4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzbGEhlkSFf7TjN-qp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwqLWi-mGspTNRSzmh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgywplESWDm1hTekgxt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7zRr76_8pZ3z2fdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz7rlydfJ6iqdgsBpZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugxncn8Hb0xce6oeer94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
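Since the raw LLM response is a JSON array of coded records, a small validation pass can catch malformed output before it reaches the coding-result table. The sketch below is a minimal, hypothetical example: the dimension names come from the table above, but the allowed value sets are only inferred from the samples shown here and may be incomplete relative to the real codebook.

```python
import json

# Required dimensions, taken from the coding-result table above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Value sets inferred from the sample responses above; the actual
# codebook may permit additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government",
                       "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records matching the schema."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if set(rec) != REQUIRED_KEYS:
            continue  # missing or extra fields
        if all(rec[dim] in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical record, shaped like the samples above.
raw = ('[{"id":"ytc_example","responsibility":"company","reasoning":"mixed",'
       '"policy":"regulate","emotion":"mixed"}]')
print(len(validate_coding(raw)))  # 1
```

A record that uses an unknown label (or drops a dimension) is silently skipped here; in practice one might instead log it for re-coding or flag it as `unclear`.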