Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am pro-human, and I want to see our human race thriving and doing well. AI is by definition probably smarter than 95-99% of the human population and that's the problem. Most companies focus on developing AI for profits, but how do you teach it the moral values or love or compassion, things that really matter so it "cares" enough to NOT to kill humans? We can't even guarantee all humans turn out to be good law-abiding citizens. If someone thinks they can program AI to be obedient, but sooner or later it is going to be smarter enough to re-program itself. If the AI doesn't have moral values or care about humans, it's maybe a logic thing for it to kill humans. I just never understand the rationale behind creating AI or starting the AI wars because soon or later it is going to destroy ourselves until God has intervened and saved us from our stupidity.
youtube AI Governance 2025-08-12T00:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwrXsa0Tbyw9yL-VnN4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxWrxZJFi1JRik388B4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwUdDzzJhD0LTJ-EfZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgyFpytjzI1dS2hMPYR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_UgyaN50hk78PrV6nbjd4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgyYOV52sXo2BkHiZsZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgxG_BVgC-0tvVijtsF4AaABAg", "responsibility": "developer",   "reasoning": "contractualist",   "policy": "regulate",      "emotion": "approval"},
  {"id": "ytc_UgyN_FwNk33pvFgrkFJ4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugyqsu8hBsrRCvT6bzp4AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgzLe5dGLBEWo7R3zw54AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"}
]
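To trace a single comment's coding back to the raw batch output above, a small parsing helper is enough. The sketch below is a minimal, hypothetical example (the `lookup_coding` function and the two-entry `raw_response` sample are illustrative, not part of the pipeline itself); it assumes only the JSON structure shown above, where each object carries the comment `id` plus the four coded dimensions.

```python
import json

# Abbreviated sample of a raw model response in the format shown above:
# a JSON array of objects, one per coded comment.
raw_response = """[
  {"id": "ytc_UgwrXsa0Tbyw9yL-VnN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxWrxZJFi1JRik388B4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the coded dimensions for one id."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            # Drop the id so only the dimension -> value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(f"no coding found for {comment_id}")

coding = lookup_coding(raw_response, "ytc_UgxWrxZJFi1JRik388B4AaABAg")
print(coding)
# {'responsibility': 'company', 'reasoning': 'deontological',
#  'policy': 'regulate', 'emotion': 'fear'}
```

This matches the coding-result table above: the second entry in the batch is the displayed comment, coded as responsibility=company, reasoning=deontological, policy=regulate, emotion=fear.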