Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
The Turing test is the wrong way around. If the AI can figure out whether it's talking to a person or another AI, that's sentience. Fooling humans is something machines do all the time.
youtube AI Moral Status 2022-07-03T08:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwC4Kw3dyiX5-NT3ud4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzwmHPwAu_263ymR614AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx1wY-86-XDputSr1h4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzsTFlMcttrzEllTPJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx3-zZxEWfRRl1xsjl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
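A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels visible in this sample, not from the actual codebook, and the function name `validate_codes` is illustrative.

```python
import json

# Assumption: allowed labels per dimension, inferred from the values seen
# in this sample response. The real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist"},
    "policy": {"unclear", "industry_self", "none"},
    "emotion": {"unclear", "approval", "outrage", "mixed", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} label {value!r}")
    return rows
```

Rejecting unknown labels early keeps coding errors (e.g. a misspelled category hallucinated by the model) out of the downstream dimension tables.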