Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
By my calculation, the moment we make AGI we go extinct.
And there is no chance to stop: game theory means every frontier actor will face a choice of gambling between world domination and human extinction, and the world-domination race will be too enticing to abandon even at up to an 80% chance of extinction. Only if the chance of benign AI is lower than 20% would we have a slim chance of coordinating to stop, since that would hit the biological threshold of gambling aversion; otherwise some actor will continue to the final consequence, and extinction.
youtube
AI Moral Status
2025-12-28T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxnPCumyxMKK3717b14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzaDvAX7AdH6u3IcaB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxxWTCQBKonLdaCkit4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXq2SejcE0YW9gOMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYpc5lxwLzREOnoFV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyHRmya9YFRj9hM3WV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwNqOeHWmPWJprdE_B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxJqb9bi9dvALKdQjl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwaO307HSeTYj5B0PV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzOlKg3eC76vmMR3Q54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
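A batch response like the one above can be checked programmatically before the codings are stored. The sketch below is a minimal validator, assuming the allowed values per dimension are the ones visible in this sample output (the real codebook may define additional categories, and `validate_coding` is a hypothetical helper, not part of the tool).

```python
import json

# Allowed values per coding dimension, inferred from the sample response
# above; the actual codebook may include more categories (assumption).
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding entries."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        # Every coded comment should carry a comment ID.
        if not entry.get("id"):
            continue
        # Each dimension must be present and use an allowed value.
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(entry)
    return valid

# Example with one well-formed entry (hypothetical truncated ID):
raw = ('[{"id":"ytc_Ugy...","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(len(validate_coding(raw)))
```

Entries that fail validation can then be routed back for re-coding rather than silently written to the results table.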