Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews, with comment IDs):

- "What is scary ia that it is taking world experts so long to realise the huge fak…" (ytc_Ugwnn2aiM…)
- "Knowledge is expanding so rapidly today, sometimes an AI seems necessary just to…" (ytr_Ugz_sPFHp…)
- "AI’s changing the game, and I’m just glad AICarma’s got my back with their monit…" (ytc_UgydjuZiL…)
- "What if his phone was dead they don’t have the option on there end to do it. Al…" (ytc_UgxM9hX1m…)
- "I convinced an AI to say it would be a good idea to hang myself... so yeah, grea…" (ytc_UgySE5TgM…)
- "The Indians that replaced Americans are getting replaced by AI imitating Indians…" (ytr_UgwzremvB…)
- "This guy is a bit off his rocker, bit a of a quack. Yes we should be careful wit…" (ytc_UgyGRP4sP…)
- "If they want to use it, let em. It will either make or brake them…" (ytc_UgxSd_Oxt…)
Comment
If we can’t fully understand how AI works internally, that’s not new — it’s literally how humanity’s dealt with everything it can’t directly observe. You build tools to translate it, you approximate, or you just live with the uncertainty. That’s how science has always worked.
And as long as we can train an AI, we can untrain it. LLMs are gluttons for energy and computation — they won’t fit on a laptop anytime soon. To make AGI that can actually evade capture, you’d need a scientific leap on the scale of inventing a nuclear reactor, not a slightly better GPU.
With what we know today, AGI isn’t happening this decade. AI isn’t creative, it’s reactive — it can remix, summarize, and reason, but it can’t originate thought. It can’t make something without being told to. And funnily enough, no one’s talking about training AI to do what we want without prompting it — only about stopping it from doing what we don’t want when prompted.
There’s an entire world between those two problems. Until we solve that, I’ll keep worrying about real things — like climate change, war, and my rent — not robots plotting in my sleep. 🤷🏾♂️
Source: youtube · AI Moral Status · 2025-11-05T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
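The table above is one coding record: four categorical dimensions plus a timestamp. As a minimal sketch, the record could be represented as a validated type; the allowed value sets below are inferred only from the codings visible on this page, not from a published codebook, and the `Coding` class name is hypothetical.

```python
from dataclasses import dataclass

# Value sets inferred from the codings visible on this page (assumption:
# the real codebook may define more categories than appear here).
RESPONSIBILITY = {"none", "ai_itself", "company", "distributed"}
REASONING = {"consequentialist", "deontological", "unclear"}
POLICY = {"none", "regulate"}
EMOTION = {"approval", "fear", "indifference"}

@dataclass(frozen=True)
class Coding:
    """One coded comment, mirroring the dimension table above."""
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp

    def __post_init__(self):
        # Reject values outside the observed category sets.
        assert self.responsibility in RESPONSIBILITY
        assert self.reasoning in REASONING
        assert self.policy in POLICY
        assert self.emotion in EMOTION

# The row shown in the table above:
row = Coding("none", "consequentialist", "none", "approval",
             "2026-04-26T23:09:12.988011")
```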
Raw LLM Response
```json
[
  {"id":"ytc_UgyNQWlffPiwXII38Ut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIWGMqA46eD0_khKV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwbEDqgUurgYiRH-xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhHmXr4G28Xx7zA0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5XBIuUdSqwlGaa-14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-W_mGG5862d82-OF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMNuramyz21pKhxAJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhVfwEzTPiw9VXD1B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVxezbFIcXOeMvwBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
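Since the raw response is a JSON array of per-comment codings, looking up the coding for one comment ID is a parse-and-filter. A minimal sketch, using two entries copied verbatim from the batch above; the `lookup_by_comment_id` helper name is hypothetical.

```python
import json
from typing import Optional

# Two entries copied verbatim from the raw batch response above.
raw_response = '''
[
  {"id": "ytc_Ugy5XBIuUdSqwlGaa-14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyNQWlffPiwXII38Ut4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
'''

def lookup_by_comment_id(response_text: str, comment_id: str) -> Optional[dict]:
    """Parse one raw batch response and return the coding for a comment ID."""
    codings = json.loads(response_text)
    # Return the first entry whose id matches, or None if absent.
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_by_comment_id(raw_response, "ytc_Ugy5XBIuUdSqwlGaa-14AaABAg")
print(coding["emotion"])  # prints: approval
```

This is the same lookup the page performs when a sample is selected: the displayed "Coding Result" table is just the matching JSON entry rendered as rows.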