Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI safeguards are useless. You can prompt the AI to give you any answer you please. Even if it warns against something, you can say something like "Well I wasn't planning to do that, I just want to know the answer to my question". And it will assume you're not going to do the dangerous thing, then go on to describe exactly how to do it.
youtube · AI Harm Incident · 2025-12-16T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw6VXLpThTbG6TXO1N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyNmsOIXU9-696LeJ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_NgzSMqFTUzTG8Rl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzQlHLMQlOA9JpMRXl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw-cAwbTuO_h8P7du94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxA21bdgfDnxxRVr8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSQ9Iv2kcoYDRF3Fp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxQA-4u2TKcjapFfgd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy-wCuIpj7fMWnBfDF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx8-C8uCN40B-pUYAJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
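A raw response like the one above can be parsed and indexed by comment ID to produce the per-comment coding table. The sketch below assumes the four dimension names shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`); the two sample records are copied from the response, and any other detail (function names, the drop-incomplete-records policy) is illustrative, not part of the tool.

```python
import json

# Two records copied verbatim from the raw response above; the full
# response is a JSON array of such objects.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugw6VXLpThTbG6TXO1N4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyNmsOIXU9-696LeJ14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

# Dimension names taken from the Coding Result table shown above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and index records by comment ID,
    skipping any record missing a coding dimension."""
    records = json.loads(raw)
    return {
        r["id"]: {dim: r[dim] for dim in DIMENSIONS}
        for r in records
        if all(dim in r for dim in DIMENSIONS)
    }

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugw6VXLpThTbG6TXO1N4AaABAg"]["responsibility"])  # developer
```

Because the model emits plain JSON, the same lookup works for any comment ID in the batch, which is all the "inspect the exact model output" view needs.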