Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- I’m not even an artist and I immediately go “oh it’s AI well f*ck you then.”… (`ytc_UgxYbQICa…`)
- I've been using Hosa AI companion to chat and practice social skills. It feels m… (`rdc_ndmolkj`)
- I haven’t heard any AI Developer answer that question yet - I think they are gid… (`ytc_UgxM9V1PZ…`)
- Imagine if AI is gonna be on a blockchain that is based on quantum computing nod… (`ytc_UgzAU0hdr…`)
- AI is a direct connection with disembodied evil spirits. The Nephilim. Demons. A… (`ytc_Ugx9DJk9Y…`)
- We need to let all the AI and robot based companies let AI and robots purchases … (`ytc_UgxFrrZFn…`)
- If there is solid proof that AI art violates copyrights, why don't artists pursu… (`ytc_Ugw_GEo2H…`)
- I guess this is exactly why we need AI lawyers - at least for the proofreading o… (`ytc_Ugz36B8V4…`)
Comment
honestly I used chatgpt 3.5 a lot, and I believe he probably could have gotten it to say 'why yes, you should go right ahead and eat some sodium bromide, yum yum yum'. but nobody was seriously telling people to talk to AI models for health anything at 3.5.
GPT 4 you could *generally* get sane advice from ChatGPT on most subjects as long as you didn't try hard to push it off track... and imo GPT 5 is significantly safer than google (which isn't to say that ChatGPT should replace a doctor's advice). But with 3.5, I specifically remember getting ChatGPT to endorse some terrible, terrible things by talking to it for long enough. I never tested something exactly like this, so I can't make promises, and it's entirely possible that the AI didn't do anything too crazy here, but I will say that 3.5 was unsafe and there's good reason for OpenAI's discontinuation of it.
youtube · AI Harm Incident · 2026-04-21T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyq7F8uKd4-q6H9KVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw-sACa30q38aUCiER4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzQVy8xXvsbGgG35HV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwHXFYLZSlUeXxCJLd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyCux2GKQxk0BvIrGx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzQEUGuAWwaCn8fOFF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhvOum004-Hp6wjCF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3Gknio5-FAbynV4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKcZPI7CfR7CFmqCJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxi85BHGv50ld_SYnV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
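A response like the one above can be parsed and sanity-checked before the coded dimensions are stored. The sketch below is a minimal, hypothetical validator: the allowed category values are inferred from the responses and the result table shown here, not from the project's actual codebook, and the `validate` helper is an assumed name, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may define more (or different) categories.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "unclear"},
}

def validate(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        bad = [dim for dim, ok in ALLOWED.items() if rec.get(dim) not in ok]
        if bad or "id" not in rec:
            # Skip malformed records instead of storing unknown codes.
            print(f"skipping {rec.get('id', '?')}: invalid {bad}")
            continue
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the raw response above, used as a smoke test.
raw = ('[{"id":"ytc_Ugw-sACa30q38aUCiER4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
coded = validate(raw)
print(coded["ytc_Ugw-sACa30q38aUCiER4AaABAg"]["policy"])  # industry_self
```

Indexing by comment ID is what makes the "look up by comment ID" view possible: the table for a single comment is just the dict stored under its ID.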