Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- ytc_Ugzz5GH9U…: "So I can chat with AI and say the absolute craziest things and it will use that …"
- ytc_Ugwxs1faV…: "This guy talking about he doesn't want to think about what will happen to his ch…"
- ytc_UgzqUOSaK…: "they are learning,here we go movies told the future terminator & I robot lets ju…"
- ytc_UgwbylVp1…: "bro just said i am alive 💀bot he is a robot he was never alive…"
- ytc_Ugxhs72aQ…: "In videos like this, it’s usually not a single fully robotic unit. Some scenes u…"
- ytc_UgzJZJW7V…: "My question is, even though people aren't getting fire because AI is automating …"
- ytc_UgwbGJ8jt…: "China has the best regulated AI framework. Has this person ever visited Temu or …"
- rdc_nm6bged: "my coworker got an AI slop PR approved by lead dev somehow. this is probably the…"
Comment
An important thing to note: while this threat is real, the AI in question is not likely to actually be a conscious, self-aware being. That's a long way away. But what we do have, and will continue to have, are complex programs capable of learning on their own and pursuing goals given to them. The AI that destroys us will not even be conscious, just very good at mimicking how a conscious being would speak and present itself.
Honestly I find that more terrifying. We will still be destroying ourselves, just using software that doesn't even understand what it's doing.
youtube · AI Governance · 2023-07-14T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzgFuuwnbPlnRsPXl14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyOgP8RdRvGqY2urD54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxPG3rTY5SMr5pd2X54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzL6MrltpezmW8yQjx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwfA20nPymYqD-tpMx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFZ5ZhdolIsZXEBuV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyPiPLJCyfOFLbRSpd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeOabVIGTRqZp28ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxOqgiZKHlqT7DtjQ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
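The raw response above is a JSON array with one object per comment, keyed by comment ID and carrying the four coded dimensions from the table. A minimal sketch of how such a batch might be parsed and indexed for the lookup-by-ID view (the function and variable names are illustrative, not from the actual pipeline; the field names match the response shown above):

```python
import json

# A small excerpt of a coded batch, in the same shape as the raw
# response above: a JSON array of objects with an "id" plus the four
# coded dimensions (responsibility, reasoning, policy, emotion).
raw_response = '''
[
  {"id": "ytc_UgzgFuuwnbPlnRsPXl14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw model response and index the codes by comment ID,
    skipping any entry that is missing one of the four dimensions."""
    rows = json.loads(raw)
    return {
        row["id"]: {dim: row[dim] for dim in DIMENSIONS}
        for row in rows
        if all(dim in row for dim in DIMENSIONS)
    }

codes = index_codes(raw_response)
print(codes["ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg"]["policy"])  # prints: liability
```

Indexing by ID also makes it cheap to cross-check that every sampled comment actually received a code, since a missing or malformed entry simply drops out of the dictionary.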