Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugw_afGuK…` — "Sounds like openAI fucked up and is now going back to cover their tracks and to …"
- `ytc_Ugz-omDuF…` — "If Google can program the AI to be forced to answer "yes" to "are you an AI?" do…"
- `ytc_UgzebUWYu…` — "And your job of making videos will be gone to .cause so will put a robot AI in y…"
- `ytc_UgyLL_Zs9…` — "Years ago, I had an interview to work at AT&T.. Didn't get the Job. They treat…"
- `ytc_UgzqknUUH…` — "The one big question I never hear anyone ask: how do we know that the informatio…"
- `ytc_UgwzyJ1cl…` — "Rather have AI than "Steven Williams," aka Suresh Patel, take my call. It would…"
- `ytr_UgyM5LKiZ…` — "A.I. is not a lookup table. It generates the answer based on the question and it…"
- `rdc_n4eiurq` — "There is no friend rate. OpenAI already isn't profitable and what drives the co…"
Comment

> Even when you have an autopilot that is safer than humans, that actually makes it WORSE. The human would gain too much trust for the automation, allow themself to become disengaged, and be unprepared to take over once it does fail. No advertising needed.

youtube · AI Harm Incident · 2025-08-15T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwLkAbxotBibd7cxp54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyRHf6ENyLqEP_dRet4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzy4ocmWdhh9fFKdxR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxBnLjIlH9s9FPB78t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx-LI2OIE09ehGIb3Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyd9Glq6Bt3SkTR8e94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxbQJn5bPEhDv9gFzx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzIodfh6WzQyQkUJ-h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyuGm2txsj8064zUPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzNKyX1kYaY-Epe-FB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
```
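A response like the one above can be validated before the codes are stored. The sketch below is a minimal, hypothetical parser: the four dimension names come from the JSON itself, but the allowed value sets are only inferred from the visible samples and the Coding Result table, and may not match the project's full codebook.

```python
import json

# Allowed values per dimension, inferred from the samples shown above.
# These sets are assumptions, not the project's actual codebook.
SCHEMA = {
    "responsibility": {"company", "user", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record needs a comment ID plus a legal value for each dimension.
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example with one valid and one out-of-schema record (hypothetical IDs).
raw = '''[
  {"id": "ytc_example", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_bad", "responsibility": "alien",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]'''
print([r["id"] for r in parse_coding_response(raw)])  # → ['ytc_example']
```

Dropping (rather than repairing) out-of-schema records keeps the pipeline honest: a hallucinated label shows up as a missing code to re-run, not as silent data.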