Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI models are just probabilistically compressed data, with a contextual probabilistic access method. the stuff people call "hallucinations" are just its own form of artifacting, it's fundamentally the same kind of thing as JPEG artifacts, only with text content.
YouTube · AI Responsibility · 2026-04-15T11:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxamUxoV7xAGSn4c9p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxQvrzmAYOmZNS2kdx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxQjZo509jN7CrJk5h4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxY1ndoiW4xrAD-9IV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxKwEU9n-7MqraomEV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugx-kvdf2U56rJ4551p4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwVXdZ17kzO1vDUm1B4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzZqAMUpucMGMDmUap4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzIH_OsgG9IaCRMIGh4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwDow8uK1cKlEpBgkN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
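A raw response like the one above can be inspected programmatically. The sketch below is a minimal, hypothetical helper (not part of the tool itself): it assumes the response is a JSON array of objects, each with an "id" plus the four coding dimensions, and looks up the record for a given comment id.

```python
import json
from typing import Optional

def find_coding(raw_response: str, comment_id: str) -> Optional[dict]:
    """Return the coding record for comment_id, or None if absent.

    Assumes raw_response is a JSON array of objects, each carrying
    an "id" field alongside the coded dimensions.
    """
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            return record
    return None

# Example against a two-record excerpt of the response above.
raw = '''[
  {"id": "ytc_UgzZqAMUpucMGMDmUap4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwDow8uK1cKlEpBgkN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

coding = find_coding(raw, "ytc_UgzZqAMUpucMGMDmUap4AaABAg")
print(coding["emotion"])  # indifference
```

The id shown in the example corresponds to the record whose values match the Coding Result table above (all dimensions unclear, emotion indifference).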