Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t think we will ever be able to make a truly sentient AI. The best we can do it make something that fakes it. Either by pretending to us, which we could potentially detect, or by pretending to themselves, which will probably be indistinguishable. I think that might be all WE are doing.
youtube AI Moral Status 2023-12-20T21:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwLIXAE66kuy75crex4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwc9eFziCJ6DGieUkt4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxQLFP88T0RohpBtOF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyhuEkTxbr1LB2qYrN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxEkPNIeuxDd7Cpsvt4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxqrts_GpFbBhHyEbl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyrkevO1uAIT9i4QB94AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzTamRd1BcGXnklRbB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxwqheVuBBVy4Tlf1J4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgylUJQC4tzflzV7bdZ4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]
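The raw response is a JSON array of per-comment coding records. A minimal sketch of how such a batch response could be parsed and looked up by comment id (the helper name `index_codings` is illustrative, not part of the pipeline; two records from the response above are used as sample data):

```python
import json

# Two records copied from the raw response above, used as sample input.
raw = '''[
  {"id": "ytc_UgwLIXAE66kuy75crex4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwc9eFziCJ6DGieUkt4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "resignation"}
]'''

def index_codings(response_text):
    """Parse a batch coding response and index the records by comment id."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)
print(codings["ytc_Ugwc9eFziCJ6DGieUkt4AaABAg"]["emotion"])  # resignation
```

Indexing by id is what lets the "Coding Result" panel above show the row for one specific comment out of a batched response.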