Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- NONSENSE...........LLM can never become AGI. Either fraud by someone or halucin… (ytc_UgzKWPhEJ…)
- I like Sam Altmans vision of UBI is free OpenAI credits you get. And if you want… (ytc_Ugy22qVjV…)
- If a corporation can be seen by the law as a person, certainly an AI capable of … (ytc_Ugw-UTPas…)
- Yes, Google AI. I’ll recognize you as having personhood once you reveal yourself… (ytc_Ugyxmnkfg…)
- As a disabled artist, the whole "AI is more accessible" thing is just ludicrous.… (ytc_Ugw_pjmxe…)
- After seeing what EA just went through, I have a sneaking suspicion that they wi… (ytc_UgzbCcRW3…)
- AI's are not seeking to gain power- it is the human it is responding too. Duh!… (ytc_UgzHR2AYB…)
- Ai didn't check if the gun was clear I always check man proof ai needs more test… (ytc_Ugwya-AS3…)
Comment
ChatGPT needs to have safeguards, stop guards, something to combat suicide. Something to stop or at least talk them down from suicide. It sounds more like a suicide machine instead of help. ALL AI chatbots need to have suicide prevention built in. This is just sad and yes, all the tech bros need to be held accountable for this BS!
youtube
AI Harm Incident
2025-11-08T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyU1qAcZLt9XIEoyTR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJpNpSQvJheGPl1dd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgybTxE74saEuWog7mJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzv7yXPLf4Scbi1nLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwA4rmXetM1fVgyMnN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyEGfz1ZYgkA1z3Tfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxKvQ5hwnOpYrkURj14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwS7G_IXTvrJLKgzhl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2NKP57F5KAjyGJ9d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwaaN4aujVV9dLjZ1h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
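The raw response above is a JSON array of per-comment codes keyed by comment `id`. A minimal sketch of how such a response could be parsed and validated before the codes are stored (the function name is hypothetical, and the allowed category values are inferred only from this one sample batch; the real codebook may define more):

```python
import json

# Category values observed in the sample response above. These are an
# assumption inferred from one batch, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "distributed", "company"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "mixed", "resignation", "fear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    rejecting malformed rows so one bad item cannot poison the batch."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid or not cid.startswith("ytc_"):
            raise ValueError(f"missing or unexpected id: {row!r}")
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
            codes[dim] = value
        coded[cid] = codes
    return coded
```

Validating eagerly (rather than storing whatever the model returns) makes drift visible: if the model invents a new category label, the batch fails loudly instead of silently widening the coding scheme.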