Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Building in an alignment with "American goals and interests" would terrify me more than "AI's goals and interests". But I guess it's not possible for them to consider "human goals and interests".
youtube · AI Moral Status · 2025-06-04T23:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzlSrQswRIbnNKTIph4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyiVZACcozrekv1_rJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxiBt06ZxNl0XoUn-x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzXAocwvEN9poHI_eh4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwlVJrfK3oWf0gchct4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzU00XZfl26JmVA4HZ4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwoYv4Sj5-3gdqvAxt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyHp5Rmk4xBLGjTLrx4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxIpEmsaC_dtKJFNud4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyvznRKqmiTsAoUx4F4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
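The raw response is a JSON array with one record per comment in the batch. A minimal sketch of how the per-comment Coding Result could be recovered from it (an assumption here: the second record's id is taken to correspond to the comment above, since its values match the displayed result; only the first two records are reproduced):

```python
import json

# Excerpt of the raw LLM response shown above (first two records only).
raw = '''[
  {"id": "ytc_UgzlSrQswRIbnNKTIph4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyiVZACcozrekv1_rJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id so one comment's coding can be looked up.
by_id = {record["id"]: record for record in json.loads(raw)}

# Assumed mapping: this id's values match the Coding Result table above.
coded = by_id["ytc_UgyiVZACcozrekv1_rJ4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coded[dimension]}")
```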