Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews, with comment IDs)
- "> LLMs cannot do real work autonomously. ..". yeah, our company got everyone to… (ytr_UgxngjT2u…)
- right yeah I asked an ai what 2+2 was and it said 5 so that must be the answer… (ytc_UgzHBukPn…)
- Driverless vehicles are an absolute menace, and apparently many people are going… (ytc_UgyvG7WVn…)
- No industrial/automation revolution has ever resulted into less work. The same i… (ytc_Ugz_z4Kwa…)
- Machine learning not AI. It will never out do a human being unless a human bein… (ytc_UgzJlCu7F…)
- @josehumdinger6872 AI is not "studying" anything. It's not actually intelligent,… (ytr_UgxMEllv5…)
- IM GOING TO SAY THIS ONCE AND ONCE ONLY: AI IS A TOOL TO HELP YOU MAKE ART, IT … (ytc_UgxNKaXRK…)
- Nobody is gonna talk about the first one? How detailed it is Ai is getting smart… (ytc_UgwXkt_kY…)
Comment
Any realistic AGI timeline has to include semi-autonomous intelligence — systems that already act with human-like initiative. These systems will become highly destructive if their goals diverge from human intent. Without robust guardrails, misalignment is virtually guaranteed. And the current “race to win” culture makes it almost inevitable that safety will be sacrificed. Unless we slow down or impose hard alignment standards, the emergence of destructive AI behavior is not a question of if, but when.
youtube · AI Responsibility · 2025-10-14T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxL3Nq4n9Puyw0VgC54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw1gxwdLl9GaDv425t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxeMKP49SjcvwLQ4CF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx6i9zxloece1JIrVR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwXhklaAlEQi8AYcgV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxP5uzmlJrYjhE3oGx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyfs6O8WCLGx8Jye4V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw5KG6o4GJs0JxvrZZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgybZ74F-piXI_gEIXZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzBYmSxnDg6stJ_hNN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
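A minimal sketch of how a raw response like the one above could be parsed and looked up by comment ID. Assumptions are flagged in the comments: the allowed label sets are inferred only from the values visible on this page (the actual codebook may define more), and `parse_coded_batch` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# labels visible in the table and JSON above; the real codebook may
# include additional categories.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "unclear"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM coding response and index codings by comment ID.

    Raises ValueError on malformed rows or out-of-vocabulary labels,
    so bad model output fails loudly instead of polluting the dataset.
    """
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim} value {row.get(dim)!r}"
                )
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage: look up one coding by comment ID (hypothetical example row).
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"fear"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Validating against a fixed vocabulary is what makes the "unclear" fallback categories useful: any label the model invents is rejected rather than silently stored.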