Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- It is wrong to say that AI will completely replace devs. But AI used by a skille… (ytc_Ugyodbn-0…)
- Could a global initiative that halts the sale and production of compute (GPU's) … (ytc_UgzuX3MZd…)
- The AGI capable of it (MOA/over-taking humanity) wouldnt be that stupid and shor… (ytc_UgwOJ3BpZ…)
- 5:15 "Lets assume the cost of everything will go down by half" And why the HEL… (ytc_UgwDhPrmm…)
- LISTEN, is really that bad to AI generate chairs? Like I get NSFW art and how th… (ytc_UgxAgT_JH…)
- *Because They Want To Challenge GOD All Of Us Secretly Want To Knowing That It's… (ytr_Ugwm2H_ca…)
- ...as I write this comment and click the like button I'm promoting the use of AI… (ytc_UgyBOZuX5…)
- AI is judginging my resume right now. When i apply, i get a score immediately.… (ytc_Ugz9pMrmF…)
Comment

> What about ("Mandatory User Intelligence" Awareness), prior to downloading an AI; or AI creating its own User Awareness System -- whereby AI can turn off its own risk systems, & be programmed to monitor itself.
> In other words, making the "User" aware of the risk factors or -- a ("User-Security Warning") being apart of each (new Stage of Use) for each AI System; plus the User having more AI access & the ability to ("securely turn-off certain risk factors) as they maneuver thru the AI Use?

Source: youtube · AI Governance · 2025-09-05T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxZw9Ultl84WY2VoUx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzKdknjS6uaZj4xNEx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwN235sOFdXemLvm7t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzEcI3nIt9kGKwDRJx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzGLk0Fyii2MuN8-c54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0D8IxPFndZKMsqyd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyM8tFAziiVKOx_PZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBYVPzfARaDLsDNah4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwaODFC3_shxWQlMTV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz9E6_cEscZn5McPpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
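Each raw response is a JSON array of per-comment coding objects keyed by comment ID, with one value for each of the four coding dimensions. A minimal sketch of how such a batch might be parsed and validated before use (the `parse_codings` helper and the strict key check are illustrative assumptions, not part of the tool itself):

```python
import json

# The four coding dimensions plus the comment ID, as seen in the raw response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM coding response into a dict of records keyed by comment ID.

    Raises ValueError if any record is missing an expected field, so malformed
    model output is caught before it reaches downstream analysis.
    """
    records = json.loads(text)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields: {missing}")
    return {rec["id"]: rec for rec in records}

# Hypothetical sample mirroring the response format shown above.
raw = """[
  {"id": "ytc_UgxZw9Ultl84WY2VoUx4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "approval"}
]"""

codings = parse_codings(raw)
print(codings["ytc_UgxZw9Ultl84WY2VoUx4AaABAg"]["emotion"])  # approval
```

Keying the result by comment ID makes it straightforward to join a coding back to its source comment, as the lookup view above does.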