Raw LLM Responses

Inspect the exact model output used to code each comment.

Comment
Palestine* ^^ Also, it's important to mention that AI will never really be sentient. It will just trick us really well into thinking it is. (It's like the Chinese Room argument) But of course, if it can convince you that it is, what is the difference? and that is exactly why we are infatuated with this topic. BUT it doesn't invalidate the fact that it just can't be self-aware. Why? because we don't even understand what constitutes Consciousness. So how can we intentionally insert it in a Program? It is within Human's superstitious nature to anthropomorphize things...
youtube AI Moral Status 2022-07-04T14:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwSR7vAQdVJrkdr2c54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx2yWpmgrnUiFeWY054AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2eDL2mP0ZjFqPuul4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz-3AlbxW2R_FjxUTN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxHcbWAySw8iYSVhs54AaABAg", "responsibility": "media", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
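The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal Python sketch for pulling out the coding of a specific comment, assuming only the JSON shape shown above (the helper name `coding_for` is illustrative, not part of the tool):

```python
import json

# Two entries copied from the raw LLM response above.
RAW_RESPONSE = (
    '[{"id":"ytc_UgwSR7vAQdVJrkdr2c54AaABAg","responsibility":"unclear",'
    '"reasoning":"unclear","policy":"unclear","emotion":"approval"},'
    '{"id":"ytc_Ugx2yWpmgrnUiFeWY054AaABAg","responsibility":"none",'
    '"reasoning":"deontological","policy":"none","emotion":"indifference"}]'
)

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(RAW_RESPONSE, "ytc_Ugx2yWpmgrnUiFeWY054AaABAg")
print(coding["reasoning"])  # deontological
```

The coded values shown in the table (responsibility, reasoning, policy, emotion) correspond to the fields of the matching JSON object.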