Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is knowledge and then there is wisdom. Will AI have the wisdom to question why something should/shouldn’t be done by 2030? Probably not. Which is why it is dangerous. But is it any more dangerous than a human who also doesn’t have wisdom?
youtube 2025-09-06T14:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyPRIeyyhJXPba9uFJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_0X2dn8hxaRw8GlR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCHI2j3g9fSi9K8hJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxSMjq0RmfvTF-PA994AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugx2fO2THi2AYLofE3J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0QXmRBa8p3zhViih4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx2i9LG54wrz-bRKid4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxChQwYgsnY0-cJ07Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxRqlledxWnKESBzXp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwXU_LCRLMUy6vVlUd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
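To check that the coding result shown above matches the raw model output, the JSON array can be parsed and the record for this comment's id looked up directly. This is a minimal sketch, assuming the raw response is valid JSON; `lookup` is a hypothetical helper, not part of any coding pipeline described here, and only the single matching record from the response is inlined.

```python
import json

# The entry from the raw LLM response above that matches this comment's id.
raw = ('[{"id":"ytc_UgzCHI2j3g9fSi9K8hJ4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"fear"}]')

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse the model output and return the coded record for one comment."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

record = lookup(raw, "ytc_UgzCHI2j3g9fSi9K8hJ4AaABAg")
print(record["responsibility"], record["reasoning"], record["emotion"])
# → ai_itself deontological fear
```

The same lookup applied to the full ten-record response would confirm that each dimension in the table (Responsibility, Reasoning, Policy, Emotion) was taken verbatim from the model's JSON for this id.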