Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's even worse, though. AI doesn't have to be "smarter" or "self-aware". It just needs to have control of sufficient external factors and the capability to use them. It doesn't have to be smarter, it just needs to be more proficient at some very specific tasks. Regrettably, that is where AI shines.
YouTube · AI Governance · 2025-06-18T09:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgzCSQjhkjp7UakVll54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugz-_BCMBia8hnb1U3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzJ3uQqtZv03RjsRL94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwRnMxjkVTC_vPqyBB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwrR5vx8xbfqcloVwZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxAbnUSkUUUt472H0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwgyK-ZZOlXreC0YJJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugz2ot8rmUh9NQO3Dg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzAsY9pJmBeKk6KX1R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxSH9sMa4yddxdVcNN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}]
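When a coding result comes back all "unclear", a first check is whether the raw model output parsed cleanly and stayed within the codebook. Below is a minimal validation sketch; the allowed value sets are inferred only from the labels visible in the response above (the real codebook may differ), and `validate_raw_response` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension.
# ASSUMPTION: sets are inferred from the raw response above; the
# actual codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed", "resignation", "unclear"},
}

def validate_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag out-of-codebook values."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)  # None if the dimension is missing entirely
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records

# Example: a well-formed single-record response passes validation.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
records = validate_raw_response(raw)
print(len(records))
```

A response that ends with a stray `)` instead of `]`, as raw model output sometimes does, would fail at the `json.loads` step rather than silently coding every dimension as "unclear".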