Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hey I’m watching your podcast on AI and you’re discussing nuclear energy compared to the dangers of AI/AGI actually… It would seem reasonable to conclude AGI development should be viewed like nuclear weapons and there should be a global agreement not to develop AGI… There should be some sort of global category for potential destructive weapons that could wipe out humanity… We don’t currently have that ATM
youtube AI Responsibility 2026-02-10T06:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugx0t1nM83zwjYqZ0rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx5AaUWvcJeMGOz7nF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwv5Dj9WJYrELFo2Ch4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwFLHLS2O4kP-Au_u94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw4WQfJGJTRRREwQod4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxwArwzSwrPklRNlq14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugywvw3Glwft3qiie7t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0pxeOiOttZ6IdGFV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyRvSTkwaeM1mjug8d4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyV3C0l0aAX8KjSMFV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
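A minimal sketch of how the raw model output above can be inspected programmatically: parse the JSON array and look up the coding for one comment id. The `lookup` function name is illustrative, not part of the tool; the `raw_response` string is a two-entry excerpt of the response shown above, assumed to be valid JSON.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records).
raw_response = '''[
  {"id":"ytc_Ugx0t1nM83zwjYqZ0rp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy0pxeOiOttZ6IdGFV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def lookup(raw: str, comment_id: str) -> dict:
    """Parse a raw model response and return the coded dimensions for one comment id."""
    records = json.loads(raw)
    by_id = {record["id"]: record for record in records}
    return by_id[comment_id]

# The record matching the comment coded above.
coding = lookup(raw_response, "ytc_Ugy0pxeOiOttZ6IdGFV4AaABAg")
print(coding["responsibility"], coding["policy"])  # → government regulate
```

Building the `by_id` index once makes repeated lookups cheap when checking many coded comments against the same response.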