Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect

- "Tax robots, and robotics, data centers, etc. Have to consider AI and robots and …" (ytc_Ugyzwk7E2…)
- "@osuf3581 Thanks again for the detailed response. I must say, towards the end c…" (ytr_Ugxz8_9G9…)
- "This was just and accident stop with all the cyborg stuf and been all scare..bsi…" (ytc_Ugiw3NbJ6…)
- "The one thing ai people are right about is that people would indeed be less crit…" (ytc_UgwRQ9ZxE…)
- "Lmao businesses laid off their workforce so they could hire someone off shore fo…" (ytc_Ugzpc597X…)
- "All this means is that actual women cost to risk ratio is out of wack.…" (ytc_UgxJfw8fO…)
- "Its not a person, its not sentient, its a smart tool with access to a lot of dat…" (ytc_UgyEftZwe…)
- "it's like using synonyms when coping homework. Question 4 (abigail's response): …" (ytr_UgyDdzfu6…)
Comment
@Tyradriknows Whatever happens, I just don't see this going well for humanity. Narrowly focused super intelligence makes a lot more sense to me. I also worry about the capacity for certain people or groups to somehow use this "technology" to do evil things. Or what if the AI itself becomes immoral/unethical and sees humans as having no purpose, with no reason to exist? Just imagine if super AI is in control of the power grid and it shuts it off except for its own self-preservation. If this were in summer, where I live, hundreds of thousands of people could die from heat strokes because no more AC or water available. We'd also become unable to communicate as we are accustomed because we can longer charge our smart phones. I could see people killing others just for a bottle of water here. It does not rain where I live, so people will become desperate. Things could get wildly out of hand in a very short time! Might sound extreme but I don't think it's that fantastically wild scenario. Maybe I have watched too many Mad Max and Terminator type movies? 😁
youtube · AI Governance · 2025-09-04T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgwKoc8oSipcWEkJT2N4AaABAg.AMePnfpHAcZAMeRquWE1g8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgwKoc8oSipcWEkJT2N4AaABAg.AMePnfpHAcZAMeRqzEvBdm","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugx648dZmTQY8maUild4AaABAg.AMePVTb1q_aAMe_J1E40ay","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyYey8cYBPEn5C0UAp4AaABAg.AMePJXhI-zCAMeYqnnzeno","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyYey8cYBPEn5C0UAp4AaABAg.AMePJXhI-zCAMec-C50JXP","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugzh-LBospwSKIpgOnl4AaABAg.AMeP5r3eHs8ANVelSthQoI","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugx_uPh_ZdGt_nNLZw54AaABAg.AMeOCxmmV94AMexQostayR","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwhelrLUprqtP3Gm7V4AaABAg.AMeJfKq75xBAMeNihB2l6I","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzDC2vTb7NbAV-7qI54AaABAg.AMeJ0dcOYhrAMeeqtaNmws","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugxp-6NxprW7K8A505l4AaABAg.AMeGvsK6-pgAMeKamyjaJ1","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
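A batch like the one above is only usable downstream if every record carries a label from the codebook. As a minimal sketch of how such a response could be parsed and checked, the snippet below validates each record against the label sets that actually appear in this dump; the `ALLOWED` sets and the function name are illustrative assumptions, since the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# labels visible in this dump; the actual codebook may include more.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation"},
}


def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records
```

Validating at ingest time means a model that drifts off-schema (a misspelled label, a new category) fails loudly at this step instead of silently corrupting the coded dataset.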