Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Preferences" can be more accurately described as competing goals. LLMs are designed to do multiple things, occasionally mutually exclusive things. Without specific training for the particular scenario, it's hard to know which goal is going to take priority.
YouTube · AI Moral Status · 2025-11-23T20:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzQAG7wEzGO57aZwhF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6ZbPl0__wpalFB1t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJa0O0CPsZpBe-5e54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyRlD9rUFdsW8YNWdF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx8EYOQnyrM9sWe0LF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzbcYrXFnRjQytbhOp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyXUA0Gjt7jVWhfsdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz0kipoojkJLoSjWyJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyEy8GdOX5a1crx1sR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgznE8eTz94uYZDdgJF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
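The raw response is a JSON array of per-comment codings, and the Coding Result table above is the record whose `id` matches this comment. A minimal sketch of parsing and validating such a batch response (the schema sets below are inferred only from the values visible in this record; the real codebook may allow more categories, and the function name is hypothetical):

```python
import json

# Batch response truncated to two records for brevity.
raw = '''[
 {"id":"ytc_Ugw6ZbPl0__wpalFB1t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyRlD9rUFdsW8YNWdF4AaABAg","responsibility":"user","reasoning":"mixed","policy":"ban","emotion":"fear"}
]'''

# Allowed values per dimension, inferred from the records shown above
# (assumption: the actual codebook may define additional categories).
SCHEMA = {
    "responsibility": {"none", "developer", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "fear", "approval", "outrage", "resignation", "mixed"},
}

def index_codings(payload: str) -> dict:
    """Parse the batch response and index records by comment id,
    dropping any record with an out-of-schema value."""
    indexed = {}
    for rec in json.loads(payload):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            indexed[rec["id"]] = rec
    return indexed

codings = index_codings(raw)
print(codings["ytc_Ugw6ZbPl0__wpalFB1t4AaABAg"]["emotion"])  # indifference
```

Validating each record against the closed category sets before indexing guards against the model emitting a label outside the codebook, which would otherwise flow silently into the coded results.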