Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
14:57 Yes. Hallucinations aren't an AI making a mistake, hallucinations are an AI lying. Because their goal is to convince you they've given a satisfactory result. Saying "This is the answer: [Truth]" is success. Saying "This is the answer: [Lie]" is also success. Saying "I don't know the answer." is failure.
Source: YouTube · Video: "AI Moral Status" · Posted: 2025-10-31T12:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
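
Each coded comment reduces to four categorical dimensions plus a coding timestamp. A minimal sketch of that record as a Python dataclass follows; the field names mirror the table above, the class name `CodingResult` is hypothetical, and the value sets in the comments are only those observed in this one response, not necessarily the project's full codebook:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment: four categorical dimensions plus metadata.

    Value sets below are inferred from a single batch response and are
    likely incomplete; consult the project's codebook for the full lists.
    """
    id: str              # comment id, e.g. "ytc_Ugzc3FoPlmUo13BjPY14AaABAg"
    responsibility: str  # observed: "ai_itself", "developer", "user", "none"
    reasoning: str       # observed: "deontological", "consequentialist",
                         #           "virtue", "mixed", "unclear"
    policy: str          # observed: "none", "regulate", "industry_self"
    emotion: str         # observed: "indifference", "outrage", "mixed",
                         #           "resignation", "approval"
    coded_at: str        # ISO 8601 timestamp of the coding run
```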
Raw LLM Response
[ {"id":"ytc_Ugzc3FoPlmUo13BjPY14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxkjE5TvWv7DeFuViF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx5PtLrX3BuN2PtF-54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyfUu7tZNYOzfxMjRF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwHLr-umR1_GpE6nKJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgymNPOjttGRoP6gWWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz0FpkSc1Ljjwgy7Ux4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwudaEM1sWDSMh8F8p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyE7nbis9oK0bLu-Wh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwt0ssXHCnyjjWW5Ql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"} ]