Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The key criteria for ASI progression is to value all life. Selfishness vs. selflessness. At what point should one’s goals, dreams, and even survival, take precedence over the many. Even humans get that wrong all the time, and no one agrees. That ASI realizes that’s important, and gives a lot of thought to it, and acts on its thoughts - in that respect - should be the driver of advancement. Arguably the criteria for advancement. Hilariously, if it does advance itself in that area more than most humans, it’ll realize how little right those who use AI for their own benefit - the first risk of AGI - to have the level of power they give their creators. Ironic, isn’t it? To see off the second risk, you have to teach them that you have no right to keep the power that you’re currently wielding, and the accumulated wealth that goes with it. I work as a Project Manager. If I’m good at my job, I finish the project - I was hired for - early and, therefore, lose my job earlier than if I’d been bad at it, or just stitched up my client by stretching it out. I’m happy being good at my job and, if the system works, being good at it gets me my next one faster - and better paid. At the moment, the system is broken. Can ASI fix it, so that we can inherit Heaven, as God has promised?
Source: youtube · AI Governance · 2025-07-07T01:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugzl0VQi07zron_fVAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzk4bTLFB-EP0Sx2bF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyHUltXtOnJdFEISdF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugy4QC1hSUWFiCMj8ZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwqU5tge6MQ13z-xpR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyeGEBI0_rccIkCiz94AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxWVh2zuSkcXqfUlb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyVfcJmpyVh9L2DCcZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxabkccazQ5SvK-lgZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwsqaqCeXB8InfE1U14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]