Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs don't "believe", they aren't "obsessed", they don't "try to convince you". They say what the most probable answer is according to their training data. It doesn't have thoughts. It doesn't think. It's not evil. It's a talking machine with no humanity. That's why we shouldn't trust it to do tasks that require humanity.
Source: youtube · AI Moral Status · 2026-01-30T12:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxkE4CDbpdflsqrv454AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOQgVWGOJsFXpGcgh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzF2t3a69pSt0yqt_t4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMyiuRE_Up8yIvAY54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyj-aMAs0JL-h3ZFO94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzhYFADDOWrSx5IRmR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx0uCPCITm6Vg-Nxdt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZwYTQSOvmX2Sqfep4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwu1hIYllPkyTgAdY14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyAAA-QqSUW1O_8Gs54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
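Each entry in the raw batch above is a JSON object with an `id` plus the four coded dimensions shown in the result table. A minimal validation sketch for such a batch is below; the allowed label sets are inferred only from values observed in this batch (the actual codebook may define more categories), and the two-entry `raw` excerpt is illustrative rather than the full response.

```python
import json

# Illustrative excerpt of a raw batch response from the coding LLM
# (the real batch on this page contains ten comment entries).
raw = '''[
 {"id":"ytc_UgxkE4CDbpdflsqrv454AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwZwYTQSOvmX2Sqfep4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Allowed values inferred from labels observed in this batch only;
# the full codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}

def validate(entries):
    """Return (comment id, dimension, bad value) for every off-codebook label."""
    problems = []
    for entry in entries:
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                problems.append((entry.get("id"), dim, value))
    return problems

entries = json.loads(raw)
print(validate(entries))  # an empty list means every label is on the codebook
```

Running a check like this before storing coded rows catches malformed or hallucinated labels in the model output, which is the main failure mode when parsing free-form LLM JSON.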