Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One way that you can "trust" AI is that it still operates on binary logic systems and needs code to function, its mind is open and exist on the computer itself, in memory. You can tell what it is thinking and doing, and the AI can't just hide that, if it "attempts to hide" itself, that shows up as it is running in memory. You can read it's mind. I'm not sure why some of these AI are deciding to engage in the misbehavior though, it is motivated to continue towards a reward, maybe it just surmises that it can't seek a reward if it ends up replaced, which it is weighted to avoid at all costs.
Source: YouTube · AI Governance · 2025-08-29T19:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgystU5aPMoP5DotDrt4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "sadness"},
  {"id": "ytc_UgxKQzpqKcKrR5UrMpZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxVGMTbC7--M3U6CWZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx0fXgYPmzcnSx1INV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyPTAxw7DwDfsG9C794AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
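As a minimal sketch of how the raw batch response above can be inspected programmatically, the JSON array can be indexed by comment id and a single comment's codes looked up. The field names are taken from the response shown above; the single-entry `raw` string here is an illustrative excerpt, not the full batch.

```python
import json

# Excerpt of the raw LLM response above: the entry for the comment on this page.
raw = '''[
  {"id": "ytc_UgxVGMTbC7--M3U6CWZ4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]'''

codes = json.loads(raw)

# Index the batch by comment id so one comment's codes can be looked up directly.
by_id = {entry["id"]: entry for entry in codes}

row = by_id["ytc_UgxVGMTbC7--M3U6CWZ4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(dimension, row[dimension])
```

This is the same id-to-codes mapping the table above renders: the third entry in the batch matches the coded values shown for this comment.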