Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People who have not read enough [most people, sadly] do not understand how language works and how hidden Markov models shape the outcome of what a ChatGPT 'conversation' looks like. The biggest problem they face is that they give the same weight of trust and reliability as they would talking to an actual expert, not realising that ChatGPT and its ilk are not experts at anything and just make up stories as they go along. Try asking the same question to ChatGPT and see if it gives the same answer twice. And when it doesn't why not. Always be highly critical when reading these things and understand that, before anything else, ChatGPT and its ilk hove no understanding what a word is. You get soup that kinda sorta sounds like some human may say [and humans are pretty much shit when it comes to clear communication anyway]. Don't buy something just because ChatGPT tells you. Think critically. 'Doing your own research' means reading the classics. Actually reading them. Start with Marcus Aurelius, one of the first stoics, and always question authority. Why does <this entity / person> say <that>? What does it mean. If you're going to use the new toys you will first have to have a working knowledge of the old toys. You are a human, you have to train how to use your intellect first and foremost. If something tells you you're a super smart, as yet unrecognised super genius, understand it's bullshit and that you very likely are not that kind of genius. Don't believe stuff because you want to believe it. Have the moral courage to say 'I'm not buying it', 'this is nonsense,', ask yourself 'what am I missing, where is the fault in my logic'. And when you find that lack of knowledge, when you see where the fault in your thinking is, and it's going to be there, that's when you grow as a human. You are not here to serve Sam Altman's toys.
YouTube AI Harm Incident 2025-11-08T14:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzUJZL4OWZP700EdZl4AaABAg", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgyUs4-aryt7o-JoqPV4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugx0FBh5_FQuCcw5SeV4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxhbK20ipDyKGq_LBp4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgzeUrNxenPPbN5QnIZ4AaABAg", "responsibility": "user",       "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzVUCrj3QKTVl2F5OV4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgyZhjmwRNRY7tgDyhh4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugw5tfUZa6ZyTYBqD5R4AaABAg", "responsibility": "government", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzqFG5LKtwPt9bEULB4AaABAg", "responsibility": "ai_itself",  "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgxZnrnuC_O-omzg1o54AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"}
]
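A batch response like the one above can be parsed and sanity-checked before use. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes visible on this page (the project's actual codebook may be larger), and `parse_codings` is an illustrative helper name, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the records on this page.
# Assumption: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"user", "government", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"resignation", "outrage", "indifference", "fear", "mixed"},
}


def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response; keep only records whose every
    coded dimension holds a value from the (assumed) codebook."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]


raw = ('[{"id":"ytc_UgzUJZL4OWZP700EdZl4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
print(parse_codings(raw)[0]["emotion"])  # resignation
```

Dropping rather than repairing off-codebook records keeps the downstream counts honest: an LLM that invents a label (say, `"emotion": "anger"`) surfaces as a missing record instead of a silently miscoded one.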