Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You would expect an entity that can reason to not make very drastic rational mistakes, which AI does still regularly make. What appears to be reasoning is not at all like human reasoning.
youtube AI Responsibility 2026-01-02T18:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwwZCujli0B4x5fCu94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyEyJWCSHmwD-Kru7x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwq4PY5ONLga4H7Xx54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwO8h5bOu6ffXdPUJR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxI9Ykwky3K6ime3414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdAZPfj9L0KPk627N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyT6j-L34xgw8eAz254AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNC4n1RVH0WgiLM5l4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwtJeSrM9jLu_jRnG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzk49Fppyiw3f2dIO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
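The raw response above is a JSON array with one object per coded comment, keyed by comment id, with one field per coding dimension. A minimal Python sketch of turning such a response into a per-comment lookup table follows; `parse_codes` and the shortened sample data are illustrative assumptions, not part of the original pipeline.

```python
import json

# Illustrative excerpt of a raw LLM coding response: a JSON array of
# per-comment codes (field names match the dimensions shown above).
raw_response = """[
  {"id": "ytc_UgwwZCujli0B4x5fCu94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyEyJWCSHmwD-Kru7x4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""


def parse_codes(raw: str) -> dict:
    """Hypothetical helper: parse a raw coding response into a dict
    mapping comment id -> {dimension: value}."""
    entries = json.loads(raw)
    return {e["id"]: {k: v for k, v in e.items() if k != "id"} for e in entries}


codes = parse_codes(raw_response)
print(codes["ytc_UgyEyJWCSHmwD-Kru7x4AaABAg"]["emotion"])  # indifference
```

In practice a model may wrap the array in extra text or code fences, so a production parser would first extract the bracketed JSON span before calling `json.loads`.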