Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly I think people have way overestimated how many jobs AI is going to take. The biggest issue is that AI cannot be trusted or held accountable for decisions. Ask yourself: would you trust ChatGPT to do your taxes for you? What if it hallucinates your income somewhere which leads to you accidentally committing tax fraud? If that happens, YOU go to prison, not Sam Altman. And no meaningful progress has been made on the problem of hallucination. The very latest models (however amazing they are) STILL hallucinate. So if an individual wouldn't trust a chatbot to do their taxes, would a large MNC trust a chatbot to make a decision where if it's wrong millions (or billions) of dollars are on the line? How does executive leadership justify that to shareholders? The equivalent of Gemini told me to put glue on my pizza? People are unwilling to accept liability for something an AI agent does. Yes, some very low-skill knowledge work is going to be replaced by routine AI processes, but anything that involves decision-making cannot happen without humans on the loop. AI is powerful to be sure, but always remember: it's fancy autocomplete. It's not thinking. It's not a replacement for a human brain. It still requires a human to check its work for correctness.
Source: youtube · Viral AI Reaction · 2026-04-24T13:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzzxcBB8SiJikSRYfV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwPaARDZ0GKAPonFgt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxrAB9eqj_kMsxgFLl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwZw6Qg75Hp4oGATwx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgywJc2KHbwAMm0iTLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHENdpPKf0_wvk5RZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLHt8WzwLEnqV3J6J4AaABAg","responsibility":"society","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzkDWiB-FEfwrUg3NJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwGCo9Z-4UWFsFGPL54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyh3buyno7emjE3LRh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
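The raw response is a JSON array, one object per comment, keyed by comment `id` with one field per coding dimension. A minimal sketch of how such a response could be parsed to recover the codes for a single comment (the `codes_for` helper is hypothetical, not part of this tool; the id and values below are taken from the fourth entry in the response above, which matches the Coding Result table):

```python
import json

# A trimmed copy of the raw LLM response format shown above (one entry kept
# for brevity); in practice you would pass the full response string.
raw_response = """
[
  {"id": "ytc_UgwZw6Qg75Hp4oGATwx4AaABAg",
   "responsibility": "company",
   "reasoning": "deontological",
   "policy": "liability",
   "emotion": "fear"}
]
"""

def codes_for(raw: str, comment_id: str) -> dict:
    """Return the coding dimensions for one comment id, dropping the id key."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

print(codes_for(raw_response, "ytc_UgwZw6Qg75Hp4oGATwx4AaABAg"))
# -> {'responsibility': 'company', 'reasoning': 'deontological',
#     'policy': 'liability', 'emotion': 'fear'}
```

Looking up the entry by `id` rather than by position keeps the parse robust if the model returns the comments in a different order than they were submitted.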