Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While answering one of those quizzes, I need to add that AI is not sentient, will never be, and yet at some point, we will believe it is. I explain this by saying that we have ideas which, no matter how deep they may seem, are always incomplete—often for reasons that aren’t immediately obvious. So, the day AI meets all the “criteria” that lead us to classify it as sentient, we will do so, even if it truly isn’t.
youtube · AI Moral Status · 2025-07-10T13:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwfdOmJBs8Rdp13DbZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwex_uX8PjhxMt1Otd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzOAYb5e2GD5ImqSYR4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyea5ZjUoBXhU5ZwBt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzpAqCQ1MRfyFB0WLN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyO_K3ZZTSaj2IHbVp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxAF6PegFg24yMPofl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyWttrGgc7KCQgVgC94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgygquEvcPs0ab75Y2B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxIYBrZ0aJgfK0J-xl4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
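The raw response is a JSON array with one object per coded comment, keyed by comment id and carrying the four coding dimensions. A minimal sketch of how such a batch response could be parsed and looked up by id, using an excerpt of two entries from the array above (the helper names here are illustrative, not part of any tool shown on this page):

```python
import json

# Excerpt of the raw LLM response above: one object per coded comment,
# with the same field names as the coding dimensions in the table.
raw = '''[
  {"id": "ytc_UgxAF6PegFg24yMPofl4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzpAqCQ1MRfyFB0WLN4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the batch by comment id so a single comment's codes can be inspected.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown on this page.
entry = codes["ytc_UgxAF6PegFg24yMPofl4AaABAg"]
print(entry["emotion"])  # resignation
```

Because the model returns all comments in one array, indexing by `id` is what lets a per-comment view like this one recover exactly one object from the batch.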