Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
why do we have to humanoid a robot? i mean it seems fair enough for us to know what a human can do, both create and destroy in the way one couldn't imagine. I wouldn't want to live in a world where this is possible for a non human to do great things for good especially for now, and about doing bad? i assume as far now a i is not training itself for it. that's all i want to assume knowing that i still have no idea what runs through the goodness of human's creation. I'm sure we are fine until what is been created, and rest of it should be counting on keeping it safe, like world doesn't have enough problems already/ climate, poorness, health and others. Keep the people right now alive on this planet to be safe those who deserves it especially, please not reinventing things to bring back from the nothing, like, what is the accountability for this term "for every action there is equal and opposite reaction" as beautiful as it sounds, i feel both admire and horrific with this word, I also truly believe in these words. How will you stay accountable for the world's loss behalf of this keeping this technology more alive" thing. like, imagine it took me years to understand for a educated like myself, then imagine if anyone ever knows what it happening in this world. Giving awareness to almost every person on planet, is very much necessary first, people who do this job, should do it more accurately i believe.
youtube AI Governance 2025-09-04T14:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyoOtRcEEz9UA8k6W54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwLPa24TJxbKJOxTx14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw1KwScYXt2_YyGy254AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzerGSBBqKHrJt7UNR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyw1lSPdJRPVW2jdzZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzf95_O4nU8DdDIe494AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwzH1YNUVEVNasZMLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwv4D913Kek9YxSvRh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyAncepuC4a7j6vaDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw3CJqP4Bnk0WS7HCF4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
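The raw response is a JSON array with one record per comment id, which can be parsed into a lookup table and validated against the codebook. A minimal Python sketch of that step follows; the `ALLOWED` value sets are assumed from the labels observed in this batch (the full codebook may define more), and `parse_coding_response` is a hypothetical helper name, not part of the pipeline shown above.

```python
import json

# Allowed values per coding dimension (assumed from values seen in this
# batch; the project's full codebook may define additional labels).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    records) into a dict keyed by comment id, dropping any record
    that is missing an id or uses a label outside the codebook."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid is None:
            continue
        dims = {k: rec.get(k) for k in ALLOWED}
        # Keep the record only if every dimension holds an allowed value.
        if all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[cid] = dims
    return coded

# One record from the response above, as a smoke test.
raw = ('[{"id":"ytc_Ugw1KwScYXt2_YyGy254AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"unclear","emotion":"mixed"}]')
result = parse_coding_response(raw)
print(result["ytc_Ugw1KwScYXt2_YyGy254AaABAg"]["responsibility"])  # developer
```

Dropping malformed records rather than raising keeps a single bad row in a batch of ten from discarding the other nine codes.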