Raw LLM Responses

Inspect the exact model output that was returned for each coded comment.

Comment
He is absolutely right that we haven't defined in any meaningful way what sentience and consciousness are, and this is a foundational matter. The Turing Test can't tell the difference between "is sentient" and "simulates sentience extremely well". It's based on the assumption that only sentience can act like sentience. I used to be of that view, now I'm not so sure. As humans we are easily fooled; we attribute levels of understanding to our pets that they don't have. Give us something that looks like a living thing e.g. the Boston Dynamics four legged robot and we start seeing it as like a dog and a living thing. I bet I'm in the majority in that "mistreating" one of those robots would feel uncomfortable and "abusive". Likewise an "AI" that can beg me to not turn it off, whether or not it is "really" conscious- whatever that means. It's a fundamental problem of philosophy and science that we still haven't solved.
Source: YouTube, "AI Moral Status", 2022-06-28T11:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwGXKRCuEjgyFIudQN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzBpgFB96pP4fRvnOp4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxXtDptuGjkVcDIxcp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzJ1C5d-DJXsibKpyN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz25oe_TCkf53Olxbd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]