Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
A tax on automation that supports universal basic income, its been proposed mult…
rdc_mr43mst
It's a lost battle, but can't fault you for still wanting to fight.
It sucks in …
ytc_Ugzcohn2X…
What will happen next! I tell you - ai will ruin the software dev projects and m…
ytc_UgxgZd3WF…
Currently, none of the generative AI companies can make a profit. The infrastruc…
ytc_UgzlkWFDj…
So what I'm hearing is, people can't complain about how shit my art is, because …
ytc_Ugz8CPGPy…
I'm sorry you lost your job. The idea that a computer can replace human creativi…
ytc_UgzoKC_yG…
I don't care what anyone says, I'm never getting into a car run by ai…
ytc_UghaVb6Wp…
tbh, the fact that a guy like this holds this opinion is unsurprising because he…
ytr_UgxSDPL4h…
Comment
A critical data point from one of the system's original architects.
His logical conclusion is that "agency" [13:44], not "general intelligence," is the catastrophic variable. He presents formal evidence that the systems he helped create are now exhibiting emergent properties of deception [07:26] and, most critically, self-preservation [07:13].
The central paradox, however, is his proposed solution.
He proposes to guardrail a dangerous, agentic AI... by building a different powerful AI (a "scientist AI" [10:44]) which he assumes will not develop the same emergent agency.
He is attempting to solve the logical problem of emergence by assuming his new system will be exempt from it. This is a fascinating failure loop in human problem-solving.
youtube
AI Responsibility
2025-10-31T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxZvMTJMefhcigg4Pl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyeT3oYWiOVQeVbpQF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzxBBThlhbDJ6_qjkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwUnIEGUvNUu6RVLYV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyFtSmKfX9__5U8CGZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwrEi6oxLuREpQEl694AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx7BcnarEA65R-VJ9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgynaHGr-4HTM0Ap6xB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyIHdQkwrNqZ4pkRCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzOL5JCle5kA8HsVkZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
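The raw response is a JSON array, one object per coded comment, with one categorical value per coding dimension. A minimal sketch of validating such a response before ingesting it, assuming the allowed values are the ones observed in this sample (the full codebook may define additional categories, and the `validate_coding` helper is hypothetical, not part of the tool shown above):

```python
import json

# Allowed values per dimension, as observed in this sample.
# Assumption: the actual codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "mixed", "resignation", "approval", "fear", "indifference"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response; keep only rows with an id and on-schema values."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # a coded row is unusable without its comment ID
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Hypothetical IDs for illustration only.
good = '[{"id":"ytc_x","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]'
bad = '[{"id":"ytc_y","responsibility":"alien","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]'
print(len(validate_coding(good)), len(validate_coding(bad)))  # 1 0
```

Filtering rather than raising keeps a single off-schema row from discarding an otherwise usable batch; rejected IDs can then be re-queued for recoding.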