Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
ChatGPT always says call the suicide hotline. It gives you several options to ca…
ytc_Ugxp2YzlD…
I am the one who will invent super-intelligence, and I am currently learning pro…
ytc_UgwOwJRk0…
hot take but i kinda dislike that artists are making what's essentially fan art …
ytc_Ugxym49U4…
Ok I got a good AI idea let’s have yt people come in out of the rain and a wet d…
ytc_UgyOkapS9…
I am sorry so what did you do ? I mean which tool remove ai detection?…
ytr_UgyTjuket…
All described in the video is a human problem. Not an AI problem. I grew up with…
ytc_UgyZtl_jJ…
AI is missing many things more than the average AI bot can imagine. Here are 5 e…
ytc_UgwUxknCw…
Guess what i asked chatGPT about this he said "See you in after 10 years"💀💀…
ytc_UgzljomjI…
Comment
The idea of misaligned AI was not invented by Yudkowsky. It's been an idea in the field going all the way back to the inventor of AI, Alan Turing. This is something taken seriously by most of the top researchers in the field, including Geoffrey Hinton (Nobel prize winner), Yoshua Bengio (most cited living scientist), and the founders and lead scientists of most top AI companies (including well before they ever thought they would need to raise lots of money). This is not at all a fringe belief in the field. If you think it's just some weird apocalyptic cult thing, then you should seriously question the biases that led you to that assumption.
Whether or not AI is conscious is totally irrelevant to misalignment, but I feel I should mention that there is no "broader shift away from the computational theory of the mind". Functionalism remains the most popular philosophy of mind among philosophers. The Chinese Room argument is generally not held in high regard and is mostly just a thing that's talked about in popsci articles.
youtube
AI Moral Status
2025-10-31T16:1…
♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugw8t2pJuvDSdk7NpZ94AaABAg.AOwTx350hytAOxD-cCZhcw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw2dejtxDMfqDtsFHx4AaABAg.AOwRqxCGf4DAOwZSv2F5cP","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugw2dejtxDMfqDtsFHx4AaABAg.AOwRqxCGf4DAOw_AgBIoZB","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyQ6cX3vzGK0IYWCip4AaABAg.AOwLp8faSPLAOwW8P-SdxB","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyQ6cX3vzGK0IYWCip4AaABAg.AOwLp8faSPLAOwZ5xvXRQX","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgzsZXVqHuryCnOFNR54AaABAg.AOwLAKvGibKAOxDWb50YJx","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzsZXVqHuryCnOFNR54AaABAg.AOwLAKvGibKAOxED2njSyV","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzsZXVqHuryCnOFNR54AaABAg.AOwLAKvGibKAOxmxMIv2GS","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugzgpt1tdS4toFzLxIZ4AaABAg.AOwKrJFF_pzAOwzcEtaNIK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
{"id":"ytr_Ugzgpt1tdS4toFzLxIZ4AaABAg.AOwKrJFF_pzAOx47Esi_NE","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"disapproval"}
]
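The raw response above is a JSON array in which each record carries a comment ID plus the four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and sanity-checked before the codes are stored; the key set is taken from the table above, while the function name and the sample IDs in the usage example are hypothetical:

```python
import json

# The four coding dimensions plus the comment ID, per the result table.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of records) and
    verify every record carries all expected keys."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return records

# Usage with two hypothetical records shaped like the response above:
raw = '''[
  {"id":"ytr_example1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_example2","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
coded = parse_coding_response(raw)
print(len(coded), coded[1]["policy"])  # → 2 regulate
```

Validating only key presence (rather than allowed values) is a deliberately loose check: the full codebooks for each dimension are not shown on this page, so value-level validation would require assuming the complete label sets.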