Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgxcgJHHx…: First off how do I know this video is not a deep fake.. Please prove to us this …
- ytc_Ugy2z9Qt1…: Anything coming from Yudkowsky/LessWrong needs to be taken with a huge grain of …
- ytc_UgxWWyGsi…: The real danger is that these "AI" algorithm aren't really all that intelligent …
- ytc_UgwupU8x_…: It’s not creating art it taking stuff that’s already there and mixing up with ot…
- ytc_UgwZEQw41…: I can easily use an AI chat bot for over 7 hours a day. I've experienced the goo…
- ytc_Ugwm3oqEm…: Every time you make a video about this AI stuff, I feel a little bit better abou…
- ytc_UgxahdTdD…: Ai, sure. Bragging when using AI? Don't. Mock AI users that did not do anything?…
- ytc_UgxxApzMg…: "Ah, the eternal battle between the artists and the AI enthusiasts! It's like wa…
Comment
What got me is how direct and to the point this robot is. Unlike Humans, who beat around the bush or dance around the question. This AI just blurted the truth all out without hesitation. Even pondered the question for a brief moment.
When AI has no qualms about answering a question in such a direct manner, especially as if it already knows the outcome, humanity is in serious trouble.
Imagine AI actually leading humanity to believe their safety were of high importance to AI, while in the BG, they were leading humans around by a proverbial dog collar.
Source: youtube · Incident: AI Harm Incident · Posted: 2024-05-16T14:5… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxsxAO66j8_z5bEeyJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwABhhWSaGZtr7YSE14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxzWNA84yKlhxdqJ5V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5Eb6p7qc8MrQ2oOJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyRDcdyV5-Nzl2RJ014AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzRRMxlj1lKnGLlg5R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyX73WBbgCaZmsO8hN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxqzLFW88tZL8i4TxJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwIS1t8bS57ypeHpOR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgytfMdvcIIimBYAgVV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
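Because the raw response is a plain JSON array keyed by comment ID, the "look up by comment ID" panel above can be sketched with a few lines of standard-library code. This is a minimal illustration, not the tool's actual implementation; the function name `index_by_id` is hypothetical, and the sample record is copied from the response above.

```python
import json

# A raw LLM response: a JSON array of per-comment codings,
# one object per comment, keyed by "id" (record taken from the
# response shown above).
raw_response = """[
  {"id": "ytc_UgxzWNA84yKlhxdqJ5V4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw response and build a comment-ID -> coding mapping."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgxzWNA84yKlhxdqJ5V4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when a batch response codes many comments at a time.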