Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or click one of the random samples below to inspect it:
- "You're giving it way too much credit. This biggest danger of AI is what's going …" (ytc_UgxLSO2xb…)
- "One Tom Cruise is taller and two this is a deep fake video I can tell from your …" (ytc_UgyFsLJvO…)
- "This is not ChatGPT this is bad parenting. Parents need to be held accountable…t…" (ytc_UgxSfj08d…)
- "@kalvon take your emotions/Health for example, it wont be able to accurately des…" (ytr_UgyHiq5-2…)
- "any process that needs repeated tasks repeatedly can be automated banking in one…" (ytc_UgwhCsUUe…)
- "@MarcoGonzalez-bq2dbname Thanks for your hilarious comment! You've given a whole…" (ytr_UgwIfTcWp…)
- "ive tried to get it to pigeon hole me into gaming videos but i cant get them eve…" (ytr_UgzBWN9HC…)
- "The fact of the matter is that AI only works if it steals and repurposes already…" (ytc_UgwL2ZXGC…)
Comment
"It's hard to create an LLM that would want to destroy the world, because all the information going into is, like, 'that's bad, destroying the world is bad'. But you wouldn't want to leave it up to hope." And that's a misguided hope, anyway! Most published speculation about superintelligence (or AI in general) dwells on how it might go wrong. And the training corpus ALSO includes most science fiction ever written about AI. How often do you see benevolent, Iain-Banks-style AI in fiction ... and how often do you see SkyNet?
TL;DR when ChatGPT learns how an AI should behave, it's learning to bring our worst fears to life.
Source: youtube · AI Moral Status · 2025-10-30T21:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
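The four coded dimensions map one-to-one onto the keys of the model's JSON output below ("Coded at" is a storage timestamp, not part of the model's answer). As a point of reference, here is a minimal Python sketch of one coded record. Note the value sets are just the labels observed in the batch on this page, not necessarily the project's full codebook.

```python
from dataclasses import dataclass

# Label sets observed in the sample batch below; the full codebook may
# define more categories (an assumption, not confirmed by this page).
RESPONSIBILITY = {"none", "user", "developer", "company", "distributed", "ai_itself"}
REASONING = {"unclear", "mixed", "consequentialist", "deontological", "virtue"}
POLICY = {"none", "regulate", "industry_self", "liability"}
EMOTION = {"indifference", "fear", "outrage", "approval"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the objects in the raw LLM response."""
    id: str              # "ytc_..." for comments, "ytr_..." apparently for replies
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if the model emitted a label outside the expected sets."""
        for field, allowed in (("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unexpected {field} label: {value!r}")
```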
Raw LLM Response
[
{"id":"ytc_UgyABG2BqQo_bQ0RTeF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwKIBkTIjwF5QgSOR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzp0VQ5QCWvMSJH6-h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXt8u0LAlcm6JcuIJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2mNarWuP2T8jCTfJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCJBabiQ3Iz1EJtSp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwP2sI4oMWXokqcHV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfXKjmHwOdcVoYIAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxNQQH7JScRsLDbMUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnUXuXIdWgn0uB8bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
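Because each raw response is one JSON array, looking a comment up by ID (the lookup this page offers) reduces to parsing the array and indexing it by the "id" field. Below is a minimal sketch, assuming the response text is clean JSON exactly as shown above; a real model response might arrive wrapped in markdown fences or with trailing commentary and would need stripping first. The stand-in `raw` string holds just one row from the batch above.

```python
import json

# Stand-in for the raw model output shown above (one row, for illustration).
raw = '''[
  {"id": "ytc_UgwnUXuXIdWgn0uB8bd4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

def index_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response and index the coded rows by comment ID."""
    rows = json.loads(raw_response)
    return {row["id"]: row for row in rows}

coded = index_batch(raw)
row = coded["ytc_UgwnUXuXIdWgn0uB8bd4AaABAg"]
print(row["responsibility"], row["emotion"])  # prints: developer fear
```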