Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I was in a mood one day and let a Yandere character ai bot to give me a human he…" (ytc_UgynICCEe…)
- "STEAL LIKE AN ARTIST. But when AI does it it's not fun anymore. This situation…" (ytc_Ugzvt96ju…)
- "I wonder how Dr. Yampoliskiy views the one big issue with AI and that is it's ma…" (ytc_UgzLbKF-p…)
- "First of all no you do not deserve respect you earn respect the idea you feel en…" (ytc_UgxNNU442…)
- "> We are hiring zero engineers. Yep. People are hardcore compartmentalizi…" (rdc_oac1p74)
- "I think its time we as humans, should set what an ideal societies should be and …" (ytr_UgxqaEGp1…)
- "This is anecdotal evidence at best. I am so sick of people hyping AI. We would …" (rdc_n5is6ln)
- "So more white collar guys are going to become lawn cutters, plumbers, electricia…" (ytc_UgxCurKA4…)
Comment
A lot of the disagreement here comes from collapsing very different questions into one.
This isn’t really about whether AGI will be conscious, benevolent, malicious, or “smart enough to take over.” Those are philosophical or speculative questions.
The practical risk surface is simpler and already here:
We are building systems that reason, coordinate, and act across other systems, and we are doing so without making authority, causality, or responsibility first-class architectural constraints.
History shows we never ship zero-bug systems. That’s fine. The real failure mode isn’t bugs — it’s irreversible action without reconstructable cause.
If a system:
• can trigger real-world actions
• can do so faster than human review
• can interact with other agents and tools
• and cannot produce tamper-evident proof of why it acted
then safety discussions about “alignment” are premature.
Receipts-native, append-only, verifiable decision trails don’t make intelligence safe. They make governance survivable. They ensure that when something goes wrong — and it will — the causal chain survives the failure.
This isn’t about trusting humans more, trusting AI less, or hoping consciousness saves us. It’s about refusing to ship systems where power silently accumulates.
You don’t need perfect control.
You need bounded authority, detectable violations, and recoverable reality.
Everything else is theater.
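The "receipts-native, append-only, verifiable decision trail" the comment calls for can be sketched concretely. A minimal (hypothetical) version is a hash-chained log: each entry embeds the hash of its predecessor, so any retroactive edit breaks the chain and is detectable on replay. All names here (`DecisionLog`, the entry fields) are illustrative, not from any specific system.

```python
import hashlib
import json

class DecisionLog:
    """Append-only log; each entry commits to the previous entry's hash,
    so editing any past entry invalidates every hash after it."""

    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self):
        self.entries = []

    def append(self, actor, action, cause):
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"actor": actor, "action": action, "cause": cause, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Replay the chain; return False if any entry was altered or reordered."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "cause", "prev")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.append("agent-7", "issued_refund", "rule: chargeback older than 30 days")
log.append("agent-7", "flagged_account", "anomaly score 0.97")
print(log.verify())                         # True: chain intact
log.entries[0]["cause"] = "edited later"    # tamper with history
print(log.verify())                         # False: violation is detectable
```

This is only tamper-evident, not tamper-proof: it guarantees that a broken causal chain is detectable, which is exactly the "detectable violations, recoverable reality" bar the comment sets, not perfect control.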
youtube · AI Governance · 2026-01-05T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyoPwsLfxb1aJL6LvB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzc8AzvzYhkXN7DKhl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz8GPgTIxzD1-d-v7h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNonqihkYrzE46LgV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugydvff3stD2l6XRpMZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwwEg7zc4Z8xPVjIwJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz1Gf2yGxX411qzTr94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx4nFIe8E6Y3YDZpJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbKu20HIzmoIHxmQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugygs2oMH6UnQ8kJDHt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
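The "look up by comment ID" step above can be sketched as a small helper, assuming the raw model output is a JSON array of coded records like the one shown (the `lookup` function and the abridged two-record sample are illustrative, not part of the tool).

```python
import json

# Abridged raw LLM response in the format shown above (two records).
raw = """
[
 {"id":"ytc_Ugz1Gf2yGxX411qzTr94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugx4nFIe8E6Y3YDZpJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

def lookup(raw_response, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

coded = lookup(raw, "ytc_Ugz1Gf2yGxX411qzTr94AaABAg")
print(coded["policy"])   # regulate
print(coded["emotion"])  # fear
```

Keeping the raw response alongside the coded table, as this page does, makes every coded value auditable back to the exact model output it came from.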