Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i think the issue with ai ist the goal is too broad and doesn't include specifics like "be helpful to the user" what if the user wants an atom bomb then you need to change it: "be helpful to the user, without harming anyone" what if the user asks if he should kill someone trying to kill him? you see the issue becomes how much reason can you put into a prompt and data, because that's what they lack, reason. maybe they should say "be reasonable" but then that will also include what "reasonable" means on reddit.
Source: youtube · AI Moral Status · 2025-12-15T23:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugy08cRqfdWrfiPvMfR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugz5XwfLhOgBo9WKKuR4AaABAg", "responsibility": "company",   "reasoning": "virtue",           "policy": "unclear",   "emotion": "resignation"},
  {"id": "ytc_UgwADlEM6OFCHxRLhCN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxn60-oigQPBiW8Umx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgxarHxDLb0wO3Oi_cV4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugx9lrwYkfafZVwn8th4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgzCG0MF8m37sHu0Nil4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugzb4gUvOBUau98PxIJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzbqkVZKD_jtAdABWp4AaABAg", "responsibility": "company",   "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzmnPLy-8m8qRGaBUp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"}
]
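The raw response above is a JSON array of per-comment records, each keyed by a comment id with one value per coding dimension. A minimal sketch of how such a response could be parsed and validated before display (Python; the five field names are taken directly from the response above, the truncation to two records and the `index_by_id` helper name are illustrative assumptions):

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = """[
  {"id": "ytc_UgwADlEM6OFCHxRLhCN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy08cRqfdWrfiPvMfR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_response: str) -> dict:
    """Parse a batch coding response and key each record by comment id.

    Raises ValueError if a record lacks an id or any coding dimension,
    so malformed model output fails loudly instead of rendering blanks.
    """
    coded = {}
    for rec in json.loads(raw_response):
        missing = [d for d in DIMENSIONS if d not in rec]
        if "id" not in rec or missing:
            raise ValueError(f"malformed record {rec!r}: missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(raw)
print(coded["ytc_UgwADlEM6OFCHxRLhCN4AaABAg"]["policy"])  # liability
```

The looked-up record matches the coding result shown above (developer / consequentialist / liability / fear), which suggests the displayed comment corresponds to that id.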