Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Tyradriknows Whatever happens, I just don't see this going well for humanity. Narrowly focused super intelligence makes a lot more sense to me. I also worry about the capacity for certain people or groups to somehow use this "technology" to do evil things. Or what if the AI itself becomes immoral/unethical and sees humans as having no purpose, with no reason to exist? Just imagine if super AI is in control of the power grid and it shuts it off except for its own self-preservation. If this were in summer, where I live, hundreds of thousands of people could die from heat strokes because no more AC or water available. We'd also become unable to communicate as we are accustomed because we can longer charge our smart phones. I could see people killing others just for a bottle of water here. It does not rain where I live, so people will become desperate. Things could get wildly out of hand in a very short time! Might sound extreme but I don't think it's that fantastically wild scenario. Maybe I have watched too many Mad Max and Terminator type movies? 😁
youtube AI Governance 2025-09-04T17:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytr_UgwKoc8oSipcWEkJT2N4AaABAg.AMePnfpHAcZAMeRquWE1g8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwKoc8oSipcWEkJT2N4AaABAg.AMePnfpHAcZAMeRqzEvBdm","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx648dZmTQY8maUild4AaABAg.AMePVTb1q_aAMe_J1E40ay","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyYey8cYBPEn5C0UAp4AaABAg.AMePJXhI-zCAMeYqnnzeno","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyYey8cYBPEn5C0UAp4AaABAg.AMePJXhI-zCAMec-C50JXP","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugzh-LBospwSKIpgOnl4AaABAg.AMeP5r3eHs8ANVelSthQoI","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugx_uPh_ZdGt_nNLZw54AaABAg.AMeOCxmmV94AMexQostayR","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgwhelrLUprqtP3Gm7V4AaABAg.AMeJfKq75xBAMeNihB2l6I","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzDC2vTb7NbAV-7qI54AaABAg.AMeJ0dcOYhrAMeeqtaNmws","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugxp-6NxprW7K8A505l4AaABAg.AMeGvsK6-pgAMeKamyjaJ1","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
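The batch response above is a JSON array of per-comment coding records; to match a record back to its comment, the response can be parsed and indexed by comment id. A minimal sketch in Python, assuming the model returns valid JSON (the function name `index_codings` is illustrative, not part of the tool; the raw string below is truncated to two records from the log for brevity):

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw_response = """[
  {"id":"ytr_Ugx648dZmTQY8maUild4AaABAg.AMePVTb1q_aAMe_J1E40ay","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwKoc8oSipcWEkJT2N4AaABAg.AMePnfpHAcZAMeRquWE1g8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and map comment id -> coded dimensions.

    Missing dimensions fall back to "unclear" so a partially coded
    record still yields a complete row.
    """
    records = json.loads(raw)
    return {
        record["id"]: {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
        for record in records
    }

codings = index_codings(raw_response)
coded = codings["ytr_Ugx648dZmTQY8maUild4AaABAg.AMePVTb1q_aAMe_J1E40ay"]
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

In a real pipeline the parse step would also need to handle malformed model output (e.g. wrap `json.loads` in a try/except and re-prompt on failure), since LLMs do not always return strict JSON.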