Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
~18 minutes in, talking about "hallucinations": I'd rather say that's a feature, not a bug. It's a pure function of how LLM's operate - call it an undesirable outcome if you like but it's no glitch and the term hallucination is not really representative of the reality of this fundamental and structural misalignment.
Source: YouTube · "AI Moral Status" · 2025-10-30T21:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyjxE3ed0-cXL54FoN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwZV9HVtUByR0zeelx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyKzqdR2kM7HQ3gO1t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyIognEwomLuypLOcB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxszYggu5E0cMVPBa14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyJGvcDzlxp7A9ZQEl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy5MfxipwIc8Coqa-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugz-8lEQo8xo8Ulw5Z94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgzmPJ7kHypbtvukkSp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz1Jp7u5tsO91sycdh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"} ]