Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why do we assume that something more intelligent than us will be malevolent? Surely as we get more intelligent we would stop producing conflicts? Would this not be the same for AI?
Source: youtube · AI Governance · 2025-09-11T03:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugx3kDsrrEXpn6Au6VZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZhEjxKXjRl8j_5Y14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy1A0r4B7cmDzEP1Wd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx784Z7R4ADNBI4ihx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxXUlnKQuv0W_LJBWB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwZGWJc_EbMcQp4Dr54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxEUShtyKk6dxBXlnV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwYUj0zkLdmO9H40ex4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_Ugy3sIF8rC1bvPdZ5j54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxK9UPrgduzwyfhrgN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
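The coding result for a single comment is recovered from this batch response by parsing the JSON array and indexing on the comment id. A minimal sketch (the parsing code itself is an illustration, not the tool's implementation; the ids and dimension values are taken verbatim from the response above):

```python
import json

# Excerpt of the raw batch response shown above: a JSON array of
# per-comment objects, one per coded comment.
raw = """[
  {"id":"ytc_Ugx3kDsrrEXpn6Au6VZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx784Z7R4ADNBI4ihx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]"""

# Index the batch by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# The comment displayed on this page resolves to its four coded dimensions.
entry = codings["ytc_Ugx784Z7R4ADNBI4ihx4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
# → ai_itself consequentialist none approval
```

This is how the "Coding Result" table above corresponds to one element of the raw array: the fourth object carries the same `ai_itself` / `consequentialist` / `none` / `approval` values the table displays.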