Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- Nah this is fake news man. Current AI is really heavily based on predictive algo… (ytc_Ugz4hm0IZ…)
- Have you opened a news website lately? What hope is there? The rich have taken… (rdc_mxz8v2u)
- So in 12 years or so we will see how smart this kid is…. Problem is the Asians w… (ytc_Ugx9tnfVt…)
- Matrix, but without happy ending. That’s what can come. And no one would or coul… (ytc_Ugwbt1Iec…)
- Well... I've been saying for decades that "To err is human. To really F$%% up yo… (ytc_UgwpPkkyw…)
- idk why the hell i thought the ai pictures would worry me when you showed them. … (ytc_UgxIrVBlO…)
- So autonomous trucks are going to 'fight' climate change. Really. This climate c… (ytc_UgykkwMjp…)
- Everything starts with all these lunatics who use this AI crap. Average humans w… (ytc_Ugz0buSNB…)
Comment
Why the almost cute 'piecemeal' approach to the edge cases of harmful AI scenarios like suicide enablement? Seems like a total distraction in the face of the overwhelming cataclysmic consequences of AI that our society is about to face. These harms include loss of employment, loss of truth, loss of reality, loss of meaning, loss of privacy, loss of humanity, loss of control, and loss of life. Humanity is totally unprepared for the tidal wave of dystopian change that AI will bring in the coming decade and these guys are withering on about the specific suicide edge case. How about a more general discussion of the AI enabled, and then AI led, end of our species? Is it just their lack of intelligence (ironically) and imagination that prevents them from addressing the broader scope of the impending disaster?
Source: youtube · Posted: 2025-10-29T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzjMuhHpeBxgFaMox14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzS7rs4qpPFajDL3xZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxJltvddOyvDoWLQR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwR_h976aiAY0MUe-54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzeGsA0PUnjwPkSKpB4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxWksEdbSqNeFPlhlB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyPgwpD3bErFDZqKW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugya4IiESzcDdKtfi-94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyjCmpMQHrAstjjVHd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-_tgLL9sufjfjjzF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
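The raw response is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal validation sketch for such a response is below; the allowed category lists are inferred only from the values visible on this page and may be incomplete, and `validate_codes` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Category vocabularies inferred from values visible on this page (likely incomplete).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError("record missing comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Usage with a hypothetical single-record response:
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = validate_codes(raw)
print(len(codes))  # one valid record parsed
```

Validating against a closed vocabulary like this is what lets a coding pipeline catch malformed or hallucinated category labels before they enter the dataset.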