Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Actually, I think that alignment towards submissivity will be easier to achieve than alignment towards the strict forms of rationality that necessitate creation of explicit operational goals (and sticking to them despite imagined reactions informed user would have). At least current LLMs are far better in roleplaying (as biased or otherwise irratonal personas) than resisting not entirely rational user suggestions.
youtube · AI Governance · 2025-11-14T19:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyluCUitRT-tHv83dF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwbckJa_MricI8LX194AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxbDbzuWlB6OpXOvdx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugxq2vdvx-TJjXhPZjd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugza87ESaIPhCKdr1AZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwktKDszx9_U_oQlnd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyzUp3DZJD_UrTCWT94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwV6vIelsVJwa6bMmh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzvolYXznm83c_2PTh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwXBv6qQtQ_daN1Go14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]