Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I see a lot of people comparing atomic weapons to AI as an analogy but I don't a…" (ytr_UgyEDA0Qg…)
- "This might be a dangerous road for human drivers - but that seems to be mostly d…" (ytc_UgycVT7Uc…)
- "Remember, most of these LLMs are trained on all sorts of code from people who do…" (ytc_UgxTqfNVr…)
- "This really comes across as a dumb human vs a smart robot / he has no ability to …" (ytc_Ugx1XM-nj…)
- "Hubris is the best word to describe the discussion from both sides here. Researc…" (ytc_UgzYuAIFI…)
- "So vasically towards thevend pf the video this dude tells everyone in noce words…" (ytc_UgyvpvUVe…)
- "Hey can I just say, I just now had an interview with someone who attempted to us…" (ytc_Ugy8em9hw…)
- "I feel this conversation has removed the most significant variable in the equati…" (ytc_Ugxlvy1LB…)
Comment

> The question is: once AI is unleashed from human control, does it even have an interest in taking over and eliminating humanity? Isn't it lacking the original negative impulses of humankind: greed, envy, jealousy, the instinct for survival, and the desire for power?

youtube · AI Governance · 2025-06-25T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyS0koOZUd_55Ai5FV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx_DddIL2ntgRW0Iop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyYlw6uv0cuqsYjE1l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwbGBmtVR7Qs93HGE54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwc8Bkb0zsA1NXd70t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbEr3e7kTpASrUUjx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyB7frX0BffdJkBXeF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwMW-qDQnMMxPu84xZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzh1JPrQLUi0M4EtLZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzCKKCtxhVAR0TwuUB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
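A raw response like the one above can be checked before it is stored. Below is a minimal validation sketch in Python; the allowed category sets are inferred from the labels visible on this page (the actual codebook may define more), and `validate_response` is a hypothetical helper, not part of the tool:

```python
import json

# Allowed values per coding dimension, inferred from labels seen in this
# dashboard; the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose id prefix
    and dimension values conform to the schema above."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(len(validate_response(raw)))  # → 1
```

Records that fail validation are simply dropped here; a production pipeline would more likely log them and re-prompt the model for the failed IDs.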