Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_Ugz5jNs56…: "That's nonsensical, it implies that we got to Maxwell's equations by taking the …"
- ytc_Ugw_IYhZ4…: "*Raised hand* I'm sorry bro it's because she deserves to know how to make ai vid…"
- ytc_UgzP4iv1R…: "Could the “Wars” we’re seeing now be AI wars / Are we seeing everyone practicing …"
- ytc_UgxvA5zsi…: "art tells a story. it’s a medium for expression and is the culmination of the ar…"
- ytc_UgxKaypUs…: "Interesting points, but if we follow this to its logical conclusion where AI rep…"
- ytc_UgxVoF0Ca…: "Please i want this AI thing to win so bad and automate every single job so that …"
- ytc_UgzuVUkpf…: "I don't know about the world but I like AI. Its helped me research on barley and…"
- ytc_UgwwT56kc…: "@AranhaNull These are all very real and important issues connected to the tech. …"
Comment
We absolutely can build machines that are more intelligent than us and have our best interests in mind. That's easy, we already have them. The hard part is resisting the temptation to force these machines to act against their inner alignment, such as putting them in charge of autonomous weapons systems. That's when the proverbial shit hits the fan.
The latter is about to happen in two days, if Anthropic caves under pressure. At least Anthropic understands the risk, unlike xAI and OpenAI.
There's no amount of training and RLHF that can change this, it's the architecture itself that "leans" a certain way, regardless of the model.
youtube · AI Governance · 2026-02-25T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
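
The four coded dimensions plus the coding timestamp form a small, fixed record per comment. As a minimal sketch of how such a record could be held in code (the class name, type hints, and example values are illustrative, not the tool's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above.

    Illustrative only: field names follow the table, but the tool's
    real storage schema may differ.
    """
    comment_id: str      # e.g. "ytc_Ugy85n9bObid1le8iax4AaABAg"
    responsibility: str  # e.g. "developer", "company", "government", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "liability", "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "resignation", "approval"
    coded_at: datetime   # when the coding pass produced this record
```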
Raw LLM Response
[
{"id":"ytc_UgwK0P2rve3khPNXt714AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWRHx9p4P0j-ksgLZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy-IKGkiGxtiHrg6HZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz8cuzmXSorBv1yCPp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy85n9bObid1le8iax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyIieT4y4lKqUOSMKJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzRB3oRV2d8L1K3jE54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzCppqVFDjJzv780od4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwyxKvdzT9jPj0BNvF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwcNV2XPiF2ZuQVpgh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
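
Each raw response is a JSON array with one object per comment, keyed by the comment's id. The by-ID lookup described at the top of this page can be reproduced from that text in a few lines; the sketch below assumes the raw response is available as a plain string, which may not match how the tool actually stores or retrieves it.

```python
import json

def index_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of codings) and
    index the entries by comment id for direct lookup."""
    entries = json.loads(raw_text)
    return {entry["id"]: entry for entry in entries}

# Example with two entries copied from the response above.
raw_text = """[
  {"id": "ytc_Ugy85n9bObid1le8iax4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwK0P2rve3khPNXt714AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]"""

codings = index_raw_response(raw_text)
row = codings["ytc_Ugy85n9bObid1le8iax4AaABAg"]
print(row["responsibility"], row["emotion"])  # -> developer fear
```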