Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Robots lack compassion.. but robots also don't think things like "I want to kill fucking terrorists". They don't have racist or bigoted attitudes, etc. Robots do what they are programmed to do. If they make mistakes, it is because they were programmed poorly. I don't see any reason why AI of the future won't be much better than humans at determining if someone is a threat. And they won't make judgements based on bigotry either.
youtube 2012-11-23T17:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgxVTDG_AcOqtX5Mat54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxP9paH9FALh-nIfnN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxZzZdUq5YTfEBRWuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyvoV0RgNJfvfGauOl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxyCaSrLWjXndY9nGh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwcUcbQq_FNZ__zAWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx6cXP0pv4_NK9-6IN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgwCbLlgUMEG7OZrV9R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugw8fQ-5ELa48r5vVPV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxVOqPyOnA2Rcu-oAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"})
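Every dimension in the coding result above reads "unclear" even though the raw response contains codes for this comment's id. One plausible cause is that the response is not valid JSON: the array closes with `)` rather than `]`. A minimal sketch of how such a parse failure could propagate to an all-unclear result (the function name and the empty-dict fallback are assumptions for illustration, not this project's actual pipeline):

```python
import json

def parse_codes(raw: str) -> dict:
    """Index a batch coding response by comment id; empty dict on bad JSON."""
    try:
        rows = json.loads(raw)
        return {row["id"]: row for row in rows}
    except (json.JSONDecodeError, KeyError, TypeError):
        # A parse failure yields no codes for any comment in the batch,
        # which a pipeline might then surface as "unclear" per dimension.
        return {}

# Shortened stand-ins for the raw response above: same trailing ")" defect.
malformed = '[{"id":"ytc_UgxVTDG_AcOqtX5Mat54AaABAg","responsibility":"developer"})'
repaired  = '[{"id":"ytc_UgxVTDG_AcOqtX5Mat54AaABAg","responsibility":"developer"}]'

print(len(parse_codes(malformed)))  # 0 -- the trailing ")" breaks the parse
print(len(parse_codes(repaired)))   # 1
```

If the defect is only the final character, replacing the trailing `)` with `]` before parsing would recover all ten code objects in the batch.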