Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If AI so smart and built to be loyal to us, then wouldn’t they be smarter at figuring out at sustaining humanity rather than having us trusting other fellow humans in power to sustain us in the current status quo? Why assume they are smart yet so dumb as not to understand existential risk to humans?
youtube AI Governance 2023-07-12T19:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugwnw3SYzESwHw7Z8554AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzlXkPIN3oROn36zXx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz-ZQO-Blc8svSRkt94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw8lL4YbPdBN_CVmPF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyafhT2kXn14bl6Uup4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwHGK7BEBnXJnf-dit4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyXJPnVb7uy5-4xFG14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx-StT6n7J5xA2FQZV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwQQ2YTaUZu7EcQbnF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwUm2-SGyL-jywMKvp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
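A minimal sketch of how a batch response like the one above can be parsed and indexed by comment id, so that a single comment's coding (the per-dimension values shown in the result table) can be looked up. The `index_codings` helper and the two-row sample payload are illustrative, not part of the actual pipeline:

```python
import json

# Abbreviated sample of the raw LLM response: a JSON array of per-comment
# codings, each with the four coded dimensions.
raw_response = """
[
  {"id": "ytc_Ugz-ZQO-Blc8svSRkt94AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyafhT2kXn14bl6Uup4AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding row by its comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coded = codings["ytc_Ugz-ZQO-Blc8svSRkt94AaABAg"]
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag the batch for re-coding.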