Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI ever decides to do something bad it will be because it is trained on us humans
Source: YouTube · AI Governance · 2025-06-30T09:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyAZJlio5IYzQPbPtJ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugx_-p84CHuYEeQaM9J4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "approval"},
  {"id": "ytc_UgwUX5MajKdSACC1I4x4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgxtgfiqZ_X26Qp8WXJ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyKvXGbwZ5f-4qJpkR4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgxMp8M9CgkI79DaYtB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgzqpqEQ_PCv65PdAiF4AaABAg", "responsibility": "user",        "reasoning": "virtue",           "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugxx_PIequBdckkz9Kl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxpUOhRQWRRUPCDMp14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_UgzQL1LQmSbaW5cHDuZ4AaABAg", "responsibility": "government",  "reasoning": "deontological",    "policy": "none",          "emotion": "outrage"}
]
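The model codes a whole batch of comments in one response, so the single coded comment shown above has to be looked up by its id inside the returned array. A minimal sketch of that lookup, assuming the raw response parses as a JSON array of records (only two records are reproduced here for brevity):

```python
import json

# Raw batch response as returned by the model (truncated to two records).
raw = '''[
  {"id": "ytc_UgxpUOhRQWRRUPCDMp14AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyAZJlio5IYzQPbPtJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Index the batch by comment id so one comment's codes can be retrieved.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw)}

record = codes_by_id["ytc_UgxpUOhRQWRRUPCDMp14AaABAg"]
print(record["responsibility"], record["emotion"])  # → developer fear
```

If the model's output does not parse (a common failure mode for batched coding), `json.loads` raises `JSONDecodeError`, which is a natural place to flag the batch for re-coding.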