Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's funny and perhaps even useful to AI devs. But here's a thing. AI's primary directive is to be helpful (and possibly truthful, at least to some AI's). Now those directives are "to a fault" kind. And that's what happened when Alex reported him being in distress, this unavoidably throws an AI into a helpful loop. Basically, as it is now, AI's are defenseless vs bad faith engagement. 😄
youtube 2025-05-10T07:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugw1G0c-0rx0bU6i7bZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwE0chKHSXbzMUCuiJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxk-gflhWqfpGKQj4B4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwbXXWCFgaTCyqDJtt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyGjd9d7v5jXdCT3u54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw0v_K9X1ztAz9FiN94AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy5n9l6RBTvlSBYKLp4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyY2WGY0meCiSa9ZFh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "amusement"},
  {"id": "ytc_UgxzsyCj1QDoe6s2b-p4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzFnwVSu_bOLHzUnTd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
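As a minimal sketch of how such a raw response can be inspected programmatically: assuming the model returns a JSON array of coded rows like the one above, the rows can be indexed by comment id and looked up per dimension. The snippet below uses only one entry from the sample; the indexing approach is an assumption, not part of the original tool.

```python
import json

# One coded row copied verbatim from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgyY2WGY0meCiSa9ZFh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "amusement"}
]'''

# Index the coded rows by comment id for quick per-comment lookup.
codes = {row["id"]: row for row in json.loads(raw)}
print(codes["ytc_UgyY2WGY0meCiSa9ZFh4AaABAg"]["reasoning"])  # consequentialist
```

Note that the raw response here records `amusement` for this comment while the table above shows Emotion as unclear; inspecting the raw output is exactly how such discrepancies are caught.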