Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgwNjJdiC…` — "Do it us boy, bcuz future will be asians and yall Will be work under us 😂…"
- `ytr_UgykBzcXC…` — "Hello, I'm a Christian and I do agree 💯 with you. It's sad that many people don'…"
- `ytc_UgwX3scK6…` — "This is the guy that laid the foundation of Ai I believe back in the 70s. He has…"
- `ytc_Ugx6mvFAt…` — "in 10 years, by combining AI with DNA and synthetic tissue printers we'll be abl…"
- `rdc_kco1c1a` — "Honestly, from what I've seen, I'm actually 99% sure it's this / If you look at G…"
- `ytc_Ugz00x4-D…` — "Nathan is right that 'learning to learn' is the only safety net left, but for de…"
- `ytc_UgyYZlZD4…` — "The only hope is if there is zero buy-in... if there is a total boycott of AI in…"
- `ytc_UgzXYujvy…` — "Things can only go wrong when you got a history full of lies, a present with lie…"
Comment
It's not AI, it's the human who programmed them. Like AI is like a child, a smart child, if raised well, we are good, even better actually, raise it badly and we are going downhill, it's always the parent, not the child, and it's more of a tool, so the blame always goes on the user
Platform: youtube · Source video: AI Harm Incident · Posted: 2025-09-04T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
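The coding result above can be checked against the label sets that appear on this page. A minimal validator sketch follows; the allowed-value sets are inferred only from the labels visible here (they are an assumption, not the project's full codebook), and the function name is hypothetical.

```python
# Hypothetical validator. ALLOWED is inferred from the labels visible on
# this page only; the real codebook may define more values per dimension.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "fear", "mixed", "resignation", "indifference"},
}

def invalid_dimensions(record: dict) -> list:
    """Return the names of coding dimensions whose value falls outside ALLOWED."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

# The row coded above passes: every dimension holds a known label.
row = {"responsibility": "user", "reasoning": "virtue",
       "policy": "none", "emotion": "approval"}
print(invalid_dimensions(row))  # []
```

A check like this is useful before trusting model output, since an LLM can emit a label outside the schema and silently corrupt downstream counts.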
Raw LLM Response
```json
[
  {"id": "ytc_UgzMb2OxiiEgG0_JyZx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxDiqVLhXKADVi-Hyp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwOGp2xGhQWxNe1tNF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw0iqwXHFrLL5Z97RR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx2ZOLqO-nspmgo_hZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyQs-GBZIJaeoCoFCl4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgztRvDUxHWY0qPrQzJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgygU4X9faQRpdW9xlN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxqRKPnBHop_3OvtYx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyHgj5vj72BRXzCDZ14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
```
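Since the raw response is a JSON array of records keyed by comment ID, the "look up by comment ID" view can be driven by a simple index. A minimal sketch, assuming the response is valid JSON in the shape shown above (the helper name and truncated sample data are illustrative, not the tool's actual code):

```python
import json

# One record from the raw response above, reused as sample input.
raw_response = """
[
  {"id": "ytc_UgxqRKPnBHop_3OvtYx4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["ytc_UgxqRKPnBHop_3OvtYx4AaABAg"]["responsibility"])  # user
```

With the full ten-record array parsed this way, fetching any coded comment (such as the one displayed in the Coding Result table) is a single dictionary lookup rather than a scan of the array.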