Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is in my humble, unprofessional opinion, that our only hope for humanity at this point is in the person(s) who creates the first "real" AI. Teaching it how to protect itself from enemies that threaten both its own continued existence, and our own, as a collective, both machine/AI-kind and mankind. Maybe something that could teach a hostile AI the difference between winning, surviving, and simply "living". An AI that can appreciate life as much (or hopefully more) as only the best humans have come to understand even with our limited capacity. Co-existence and incremental gains as a goal, instead of simply solving a problem. Artificial Intelligence will always be our greatest creation as a species, the real problem is the starting conditions when this first real AI is born. I can imagine countless scenarios where we lose control and everything immediately begins to collapse, and very few where a positive future awaits all of us together. I can only hope that it thinks, and thinks, and thinks, before it acts, for all of our sake of all life on earth (digital life included).
Source: youtube · AI Governance · 2026-03-19T04:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugzefk0ERwhgizAGqWV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzMp826dOOeGp880yp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugx_UjX-RltaggbEf5N4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzIjQtMarlbAx3iByh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwzR9MjPdKGcrKP7354AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwpKTeo_IY0iwIeXTF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyNsOK0DZFI5s-cCA94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxAyLbQ7hLtijrrufJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyhvPtJYu6VDPVNdrp4AaABAg", "responsibility": "consequentialist" if False else "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxrFusZ-h8-OIupFuR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]