# Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
## Random samples

- “Listen, I might’ve said it as a joke, but deep down I meant every word, if AI ev…” (ytr_Ugw6KKTx6…)
- “I am working in AI development and we spend a lot of time on security. We can di…” (ytr_UgyzUB1K8…)
- “AI Art has democratized art. No longer the sole territory of the few that consid…” (ytc_UgysV1ius…)
- “AI could easily replace the government. If we trained it on real data and not th…” (ytr_UgwEsOIL-…)
- “Pls pin this, AI's rapid growth hurts polar bears primarily through climate chan…” (ytc_Ugy1nQCZm…)
- “AI is designed to assist people by taking over routine or tedious job functions …” (ytc_UgwWBdK2V…)
- “Hinton’s hidden grief is not just that we might lose control of AI. It’s that we…” (ytc_UgzRtJ-z9…)
- “Here is something you might want to think about: Machine learning or at least th…” (ytc_UgzQ7mOcz…)
## Comment

> 25:00 we need to start over from scratch with our political systems with the basic premise that if an action is either malicious or negligent then it's illegal, regardless of technicalities, and punished wrongdoing to the degree of the harm caused, it would solve this problem. It's malicious to convince people to act against their own self-interest. It's negligent to allow a technology to be developed that could end all life on earth. If we had an AI that adhered to and enforced those principles, we'd be fine. Getting it to do that should be the ultimate goal.
>
> I'm convinced Elon Musk has convinced Trump that ai will make Trump the God emperor of humanity.

Source: youtube · Topic: AI Governance · Date: 2026-03-15T03:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgxbcI11Epr5m0_lorR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzSvSbJvYN-z6yxe6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzTieZQoMqARKB4npp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwp7MbTWsccX5iEhX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyeQVr7sEcChcbrcyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxWGwSkCPK_Omuo82F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwnxDKwTBu9AYHQ6YN4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwD4F1xFp9vxwx0_Zt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwyjjxN0Asym93qws14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyMzIo74sVndqBBsMt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
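The raw response is a JSON array of coding records, one per comment, so looking up a result by comment ID amounts to parsing the array and indexing it. The sketch below is a minimal, hypothetical illustration (not the tool's actual implementation) using two records copied from the response above:

```python
import json

# A raw LLM coding response: a JSON array of records, one per comment,
# each carrying the four coded dimensions. Two records are copied from
# the full response shown above.
raw_response = """
[
  {"id": "ytc_UgxbcI11Epr5m0_lorR4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwp7MbTWsccX5iEhX14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
"""

# Parse once, then build an index keyed by comment ID so individual
# lookups are O(1) rather than a scan of the whole array.
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

coded = by_id["ytc_UgxbcI11Epr5m0_lorR4AaABAg"]
print(coded["policy"])   # → regulate
```

In practice a batch of responses could be merged into one such index, so any comment ID shown in the samples list resolves directly to its coded dimensions.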