Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ok, so the the guys that make AI think it could end humanity... Yet the continue to make AI.. Seems like they don't care if they end humanity, so why should AI?
youtube AI Governance 2024-01-01T20:5…
Coding Result
Responsibility: developer
Reasoning: virtue
Policy: none
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzZURB72pN-H0rj1E14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0rQ6F8W3yOfM5CGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9ucjPjRSa6i5psNN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTtmnNWGcKuZlMNCN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6TZlT67za1HQxP4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwy8vtt-SmOS66rlep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwumRVU0VKnHdpT_uN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxbKfkDOzcD6cdwE0t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwveza6vDs8F_e5-sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy3DzDJADYaMeQwrg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
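Because the model codes comments in batches, the per-comment result shown above has to be recovered from the raw JSON array by matching on the comment id. A minimal sketch of that lookup (the record is copied from the response above; the parsing approach is an assumption, not the tool's actual implementation):

```python
import json

# Raw batch response from the model: a JSON array with one object per
# coded comment (single record copied from the response shown above).
raw_response = """[
  {"id": "ytc_UgwumRVU0VKnHdpT_uN4AaABAg",
   "responsibility": "developer",
   "reasoning": "virtue",
   "policy": "none",
   "emotion": "indifference"}
]"""

# Parse the array and index the records by comment id.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# The coding result for one comment is the record matching its id.
coded = by_id["ytc_UgwumRVU0VKnHdpT_uN4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # developer indifference
```

Matching on the `id` field rather than array position guards against the model returning records in a different order than the comments were submitted.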