Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I dunno if it's published yet, but if you add a metric to an AI for how much it likes someone (at least thus far) it doesn't appear to affect how agenic misalignment. If the AI has to kill someone it'll pretty much ignore how much it likes someone.
youtube AI Governance 2025-08-26T16:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgysqQBPRV5IPZGmfFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyupNJnB8igFMqoEDt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgygJFHA5VOuwCduhWl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyDRVzRMOHGl3le4nR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwF1gFQfz7YruLRJ-54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
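The raw response is a JSON array of per-comment codings, each keyed by the comment's id. A minimal sketch of looking up one comment's coding from a batch response like the one above (the helper name `coding_for` is illustrative, not part of the tool):

```python
import json

# Two entries taken verbatim from the raw response shown above.
raw = (
    '[{"id":"ytc_UgysqQBPRV5IPZGmfFN4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgwF1gFQfz7YruLRJ-54AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]'
)

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "ytc_UgwF1gFQfz7YruLRJ-54AaABAg")
# coding["policy"] → "unclear", matching the Coding Result table above
```

Because ids are unique per comment, a dict keyed by id would also work if many lookups are needed against the same batch.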