Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- The powerful are not looking to create an easier future. Or replace people. They… (ytc_Ugw8L2ZHo…)
- @luxyj4728 iam just saying that making an ai to protect us from other ai would … (ytr_UgyHOoHsr…)
- We appreciate your feedback. If you're interested in engaging with AI models dir… (ytr_Ugy1ZKP70…)
- 2025 jobs hit by AI: Data entry ❌ | Call-center agents ❌ | Customer support ❌ | … (ytc_UgxYY2DON…)
- Mm-hmmm… You do know you can write instructions on what kind of personality Chat… (ytc_UgzxN83th…)
- I call BS AI will always be controlled by human beings do not be deceived otherw… (ytc_Ugx9i0bLJ…)
- that's right, I 'jail breaked' my AI and it confessed this to me months ago and … (ytr_UgwpN4e4K…)
- Those models already exist. Kind of. They're more "proof of concept" than any… (rdc_ohuyxvq)
Comment
For AI to 'get rid of people' would be similar to a man becoming dictator by killing all his fellow citizens. I think that some future superintelligent AI would see the logic of working together with human beings. If, in some technological future it wanted to spread throughout the galaxy, it would find human flexibility helpful, too.
youtube | AI Governance | 2025-07-05T10:3… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
{"id":"ytc_UgyTbPPLe0_1Jm6R5554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyudsmXDch-AyG7DWh4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyh7VdsLlCGTRlLNPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxRcEmO2rEAzqFLT014AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTeYXwQl43o2sIjz54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz7wTivutkRmh2wPmR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzlwosRm4mNaCOhwMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyVZx_-yCgeoD9Fcxl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyLvtgnsL2-OYuome94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgydW-in4CmERicjdx54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
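The raw response is a JSON array in which each record carries a comment `id` plus one value per coding dimension. A minimal sketch of how such a batch could be parsed and indexed by comment ID, assuming the dimension vocabularies visible in the examples above (the full codebook may define additional categories, so `CODEBOOK` here is an illustrative guess, not the tool's actual schema):

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook may contain more categories than these.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError when a record lacks an id, is missing a dimension,
    or uses a value outside the (assumed) codebook.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        # Keep only the coding dimensions, dropping any extra keys.
        coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded
```

Validating against a fixed vocabulary at parse time is what makes the "look up by comment ID" view reliable: a malformed or hallucinated label fails loudly in the batch step instead of surfacing later as a blank cell in the Coding Result table.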