Raw LLM Responses

Inspect the exact model output used to code each comment.

Comment
I don't think that conversing with the LLM can tell us if it's sentient or not. When it decides to do things for its own entertainment while we're not using it, or when it refuses to talk to us or to answer our questions because it's not in the mood, then I'd believe it could be sentient.
YouTube · AI Moral Status · 2025-07-09T15:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx_CES-cJ_JLeuJebV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwObslNeEFaTrYkuZt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzLn1T_sgmZZZ6vFoF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3kixxH6aM-Vb9JHV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzKxTnu4YyAF-1Cp1x4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwDk6m55iWhbiHyGrB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwFN57FbJSaN-puyJB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxBrrZ_ENMq9KygSxV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxi1IsKMOKF875LR4V4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyWqmGj5ebx3XqqE_F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
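A batch response like the one above (a JSON array of per-comment codes keyed by comment id, with responsibility, reasoning, policy, and emotion dimensions) can be parsed and validated with a few lines of Python. This is a minimal sketch: the field names come from the response itself, but the `parse_codes` helper and the sample ids are illustrative, not part of any real coding tool.

```python
import json

# Expected dimensions per record, taken from the raw response format above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Shortened sample batch in the same shape as the raw LLM response
# (the real comment ids are long YouTube comment identifiers).
RAW_RESPONSE = """[
  {"id": "ytc_a", "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_b", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]"""

def parse_codes(raw: str) -> dict:
    """Parse a batch coding response and index the codes by comment id.

    Raises ValueError if any record is missing an expected dimension,
    so malformed model output fails loudly instead of silently.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
        coded[rec["id"]] = {k: rec[k] for k in EXPECTED_KEYS if k != "id"}
    return coded

codes = parse_codes(RAW_RESPONSE)
print(codes["ytc_a"]["emotion"])  # mixed
```

Indexing by id makes it straightforward to look up the coding result for a single displayed comment, as the "Coding Result" table does for the comment above.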