Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My issue is that even the “everyone dies” scenario is not even really the worst option. Worse and more likely is that ai becomes a stronger weapon for humans to cause more longer, undignified suffering against each other short and long term. I honestly wish “the next evolution” was happening but that is clearly not what is happening.
youtube AI Moral Status 2025-11-25T00:1… ♥ 3
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgypMbjp_0O-0bRAUHx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwB1ZjZ7h99zRNhxLF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzwc7dYMn9tvTKnnzN4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyVAcI0RjxY8VScb2p4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzDyyk2BAyMVWx_GHl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwV61n2GVJpzVHkEa94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzOJKhJ3g6tgydgR_d4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyqO4QfUHv1sJYvPk14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwl82nGWamDuP3jmYt4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugw-Kw1b2S3Tj3KBwEZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
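A response like the one above can be checked programmatically before it is stored. The following is a minimal sketch, not the pipeline's actual code: it assumes the raw response parses as a JSON array of records with the four dimensions shown, and the allowed category values in `SCHEMA` are inferred only from the responses on this page (the full codebook may define more).

```python
import json

# A raw LLM response: a JSON array of coding records, one per comment
# (a single record shown here for brevity).
raw = '''[
  {"id": "ytc_UgypMbjp_0O-0bRAUHx4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

# Allowed values per dimension, inferred from the records above; this is
# an illustrative subset, not the complete codebook.
SCHEMA = {
    "responsibility": {"distributed", "none", "user", "company",
                       "ai_itself", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"unclear", "none", "industry_self", "regulate", "liability"},
    "emotion": {"fear", "indifference", "approval", "outrage", "resignation"},
}

def validate(records):
    """Return (record_id, dimension, value) triples outside the schema."""
    errors = []
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # an empty list when every record conforms
```

A record with an out-of-schema value (e.g. a hallucinated category) would surface as a triple in the returned list, which makes it easy to flag such responses for manual review.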