Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not convinced that merely having a conversation with AI would reveal that it doesn't have consciousness. Except for the fact that, say, ChatGPT will tell you it's not conscious, I don't know if I could tell - the responses, including subtleties like self-deprecating humour, are alarmingly human-like.
YouTube · AI Moral Status · 2025-07-30T16:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyEXUQ0hWF__RnewyF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxdIPkm4hwAD_31etl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxSWz98i0kaw2sXvYF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyabGXMCCb6u62p6op4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJxqRbczx4O1-ulpp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgwfCBTE7LLvtIrDBC94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy0tsE5TA1DitEke2N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgySmTO2pIFLFeqpqY14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxtYqq07-liVQNvSWx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxJvDD9gEi2uYG4n0l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
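As a minimal sketch, a raw response in this shape (a JSON array of per-comment codes with `responsibility`, `reasoning`, `policy`, and `emotion` fields) can be parsed and tallied per dimension. The `tally` helper is hypothetical, not part of the coding tool, and the two example rows are copied from the response above:

```python
import json
from collections import Counter

# Two rows copied from the raw LLM response shown above.
raw = '''[
  {"id": "ytc_UgyEXUQ0hWF__RnewyF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJxqRbczx4O1-ulpp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "disapproval"}
]'''

# The four coding dimensions that appear in every row of the dump.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw_json: str) -> dict:
    """Parse a raw coding response and count labels per dimension."""
    rows = json.loads(raw_json)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for row in rows:
        for dim in DIMENSIONS:
            # Rows missing a dimension are counted under "missing".
            counts[dim][row.get(dim, "missing")] += 1
    return counts

counts = tally(raw)
```

For the two rows above, `counts["responsibility"]` would contain one `none` and one `developer`; the same call works unchanged on the full ten-row array.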