Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't know if I agree that super intelligence is only when they can work things out better than us. These chatbots are getting really good at answering our questions. If it got super smart, it still wouldn't be a threat because it only ever does anything when we ask it a question. If you don't talk to it, it does nothing. A potentially dangerous AI will probably be one we design to be continuously processing and designing its own goals, i.e. if we are intentionally trying to create a self-aware AI.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Moral Status |
| Posted | 2025-11-29T09:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwWOIiuRAn2sFnACu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwEJQgWqnJtBI5LLrp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzaJfH6TyV6NmWFXLl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwcwCiIPqeKIQv97Ix4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwuKRFAy0cKH_Ms3OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwFM2I10K8wAmCsj5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwlQ3CAxP5M__IS2jZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw7dgzNeFZzx7aQbKN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxXIeVuNerDGaz9HCt4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugzz2tdw_SDD1OOq_vh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
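The raw response above is a JSON array with one coding per comment, each keyed by `id` and carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be indexed for lookup by comment ID — the `index_codings` helper is hypothetical, not the page's actual code, though the field names and IDs come from the response above:

```python
import json

# Two entries excerpted verbatim from the raw response above.
RAW_RESPONSE = """[
  {"id":"ytc_UgwWOIiuRAn2sFnACu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzz2tdw_SDD1OOq_vh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugzz2tdw_SDD1OOq_vh4AaABAg"]["policy"])  # → regulate
```

In practice a parser like this would also need to handle malformed model output (e.g. a missing `id` or truncated JSON), which a bare `json.loads` does not.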