Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I hate the fact we miss these points
1. Cyber secruity was at risk when russian…
ytc_UgxEicj3w…
Okay, call me an asshole, a sexist, mysogynist, whatever, but i this rhat that r…
ytc_UgyAhSs8f…
[translated from Hindi] That smile in the interview was so dangerous; this AI can't reproduce that smile…
ytc_Ugy_MoeMm…
nothing to lose and nothing to gain, might that be why they want a Robot in ever…
ytc_Ugwai-knN…
I’m sure they could already automate their role if they wanted to. I mean the AI…
ytr_UgweebwXk…
The most dangerous thing a monkey and a robot can become is human. We are out ow…
ytc_UgzG4YDRX…
Fwiw, cars have been incredibly destructive and have cost countless lives. But …
ytr_UgzUtcPfI…
If you see a video of Sam Altman saying something responsible it is definitely a…
ytc_Ugwac6Z2C…
Comment
The biggest risk is that people think that AI is thinking. It spews what it's trained to spew, nothing more. There's no one in there (I supposed one could say the same about some people, but that's a different conversation). Don't give AI access to real world controls (nukes etc) but writing a essay? garbage in, garbage out.
youtube
AI Governance
2023-03-30T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz_I1xGv0ugNchsz9t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrBohgsOfgX94_QD14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxquTF_yJ6WlpCmp4R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgykCcLSxQhseqH7EFJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyBQKmOTS2yewrgquJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwLQ0vNkF_M7_sXCWp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDGs3OeniU53gNbRh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyitHYPUQQC-PqwnB54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugym37Rg9ZnFA57CKDN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxQcAiU6hB7fvCvh6x4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"}
]
```
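The raw response above is a JSON array with one object per comment, each coded on four dimensions. A minimal sketch of how such a response could be parsed and validated in Python follows; the allowed category vocabularies are inferred from the samples shown here (the actual codebook may differ), and `validate_coding` and `ALLOWED` are hypothetical names, not part of the tool.

```python
import json

# Hypothetical category vocabularies, inferred from the sample output above.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check every coded dimension."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in the samples start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows

# Validate a one-element response matching the coded example above.
raw = ('[{"id":"ytc_Ugym37Rg9ZnFA57CKDN4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]')
coded = validate_coding(raw)
print(coded[0]["policy"])  # liability
```

Rejecting out-of-vocabulary values before storing a batch is a common safeguard against the model drifting from the prompt's label set.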