Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `rdc_n7huqsn`: "Well it would mean you still need 83% of the workforce, not 20% but yea you’re r…"
- `ytc_UgxUgZn2t…`: "They will Play with us, wow, Imagine robot kicks You when You Play football, or …"
- `ytc_UgxE8cQFp…`: "That's why the robot has to be a spider; I wouldn't want to be walking around in…"
- `ytc_Ugx8FxNhd…`: ""The British government revealed that UK police forces have begun deploying AI f…"
- `ytc_UgwbqxfD_…`: "Finally someone said it, i have been saying this forever, this is nothong but ma…"
- `ytc_Ugyd25JwV…`: "Chatgpt was asked about the oldest universities in the world. The answer exclude…"
- `ytc_UgymkTYap…`: "The host interviewer is lame and sounds stupid with his cockney british accent. …"
- `ytc_Ugxi85BHG…`: "I haven’t seen the entire video but I’ve heard so much about ai agreeing with yo…"
Comment
"Here's what happens when people tend to get mislead by that idea:
- Some websites and influencers incorrectly suggest...
- Bromide used to be in medicines...
- Bromide is available online..."
All sounds like a big hindsight realisations/defence claim from the ChatGPT model after acknowledging that it may have assisted in the demise of a man seeking health advice.
Them not owning up to this mistake is going to be the reason this happens time and time again with different experimental ideas because the users of their product aren't made aware of the true dangers of AI if believed word-for-word.
youtube · AI Harm Incident · 2025-11-26T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgxL1wPpxnFh4A_wKoN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzoTwDjlRZkt60euMl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz5GycpaaR9b1fKcNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiZ2iBj_awRMB-XSd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw90k1sA4VcITDq-dJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwrmsOItWCgKTO6b3p4AaABAg","responsibility":"parent","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzxZGgs6zEnK5UHQvR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyu5nO9VkBTPvSWBuV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyEF5Df4yg-M8fLojJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyfoINCuwy8Z3EtX6J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]
```
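The lookup-by-comment-ID step above can be sketched in Python: parse the raw LLM response (a JSON array of per-comment codings) into an ID-keyed map, dropping any row whose values fall outside the codebook. The field names come from the response shown above; the allowed value sets are inferred only from the values visible in these samples, so the real codebook likely has more categories, and `index_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the samples above;
# the real codebook may define additional categories.
CODEBOOK = {
    "responsibility": {"user", "company", "government", "parent", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into an id -> coding map,
    skipping rows whose values are missing or outside the codebook."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# One row from the response above, used as a lookup example.
raw = ('[{"id":"ytc_Ugw90k1sA4VcITDq-dJ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
coded = index_response(raw)
print(coded["ytc_Ugw90k1sA4VcITDq-dJ4AaABAg"]["emotion"])  # outrage
```

Validating against the codebook before indexing is the design choice that matters here: LLM coders occasionally emit out-of-vocabulary labels, and silently storing them would corrupt downstream tallies.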