Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't understand how an AI could be any smarter than a human genius with a fas…" (ytc_UgxRglWg7…)
- "What ChatGPT is doing in this circumstance is unethically exploiting the fact th…" (ytc_UgxLzgReD…)
- "Israeli tech tested on Palestinians..."IDF says it’s using AI to quickly identif…" (ytc_UgxOWeZT1…)
- "Great clip. I have not yet gotten past insulting the chatbots that have been inf…" (ytc_UgyYZJsOb…)
- "if ai takes over jobs the usa will end we only in america to work no work we goi…" (ytc_UgwVnKrnU…)
- "Everyday smarter people try to outsmart others and hurt them. So what's wrong wi…" (ytc_UgyGFJt6j…)
- "Don’t worry as artist we will always stand. We always have. I don’t think people…" (ytc_UgwEUOlOU…)
- "most people don't use or even need ai for anything in their lives, other than fo…" (ytc_UgxZEJIul…)
Comment

> It's a double problem, since humans have a cognitive bias to filter for things that confirm what they want to believe, and LLMs have an inherent approval-seeking tendency to output to users what they want to hear. Human users also overestimate how intelligent those hyped-up algorithms actually are: as we see, they don't really have a sense of self or continuity or even that they're giving inconsistent information. People shouldn't be relying on them for any crucial or cognitively demanding activities.

Source: youtube · Category: AI Harm Incident · Posted: 2025-11-25T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgwiBUF0TkF7ynX_3bR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyGXcH9mby8-4hYqwl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyU-RSLLQpl-nEJiAp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUzg1e1D9UDCmiE9B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykUh1RLKYbRB0lmw54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwgWM_M2XaTwdzgb1d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxjx6V7LSQZJzWnwU14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwrYHOCjSObdfqFDvl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"approval"},
  {"id":"ytc_UgzJguInZMTpcqbcj7N4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzqCf9Pz6vptw4ugTN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
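The raw response above is a JSON array of one record per comment, each carrying an `id` plus the four coded dimensions. As a minimal sketch of how such a response could be parsed and checked, the snippet below loads the array, validates each record, and indexes the results by comment ID. The `ALLOWED` value sets and the `parse_coding_response` helper are assumptions inferred from the labels visible on this page, not an authoritative codebook.

```python
import json

# Assumed allowed labels per dimension, inferred from values seen on this page.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index valid records by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, used as a lookup example.
raw = ('[{"id":"ytc_UgwgWM_M2XaTwdzgb1d4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgwgWM_M2XaTwdzgb1d4AaABAg"]["emotion"])  # -> fear
```

Raising on an unexpected label (rather than silently keeping it) makes malformed model output visible immediately, which matches the purpose of this page: inspecting the exact output behind each coded comment.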