Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "People will wonder why AI is killing us in 50 or so years and these videos will …" (ytc_UgwP8knMP…)
- "Ai cant steal art. Art is made inherently with himan emotion in mind, and thats …" (ytc_UgxsUkGaT…)
- "Nobody wants to watch an AI podcaster interview people... it's the human element…" (ytc_UgyDmG3y8…)
- "The topic of ai just made by brain to ignore the amazing hyrule field theme in t…" (ytc_Ugwj1kwer…)
- "In that movie AI become self aware in 2029 and AI engineers are now saying that …" (ytr_UgwPzlMot…)
- "AI is now tracking every purchase we make and corporations are using surveillanc…" (ytc_UgyRBnWeR…)
- "@augustnkk2788Because AI lacks sentience, so the concept of "creativity" is inco…" (ytr_UgzAcVOie…)
- "@CaliMeatWagon when an artists mimics your art style. 1. Its almost never the ex…" (ytr_Ugw77A5Tb…)
Comment
Most general-purpose LLMs are bad by default. They mirror the data they're trained on, and that data is us: the monster is us.
The fine-tuning layer exists to make them nice and aligned to a particular morality, but it's labor-intensive and expensive.
Elon thought he could skip it and cheap out.
You could also train your LLM core only on nice data and have no monster in it, but we lack the necessary amount of data, because we are monsters.
The other possibility is to invent a new, more elaborate low-level architecture. The current design, a simple activation function plus weights, is an oversimplification of the real biological neuronal system it tries to mimic. A better architecture would allow models to be trained on much less data.
Source: YouTube · Video: AI Moral Status · Posted: 2025-12-12T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx-gKiJl4DA6-3TMj94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx0RPYyQzjewD5LoCZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwPlM8X9R1I-IYUfAl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPFUGkrCzsqnWi0yh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxm7-51VLEPCeWaUHB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxG7PZzziicwtPtsdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz990vwa4FndOtcG4p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx4L26Rp-Tx9BcOVGV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzfaiMa41DwIIF44jd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwTqyXrj9dD5m6E_c54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
```
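A raw batch like the one above has to be parsed and validated before its rows can populate the Coding Result table, since the model can emit malformed rows or labels outside the codebook. The sketch below shows one minimal way to do that in Python. The `SCHEMA` sets are an assumption inferred only from the labels visible on this page (the real codebook may define more values), and `validate_batch` is a hypothetical helper name, not part of any actual pipeline.

```python
import json

# Allowed labels per dimension -- ASSUMED from the values visible in this
# dashboard's samples; the real codebook may include additional labels.
SCHEMA = {
    "responsibility": {"developer", "user", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"virtue", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference",
                "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response string and keep only well-formed rows.

    A row is kept when it is a dict with an "id" field and every coded
    dimension carries a label allowed by SCHEMA.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # drop rows missing the comment ID
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Example: one valid row is kept, one with an out-of-codebook label is dropped.
raw = ('[{"id":"ytc_a","responsibility":"developer","reasoning":"virtue",'
       '"policy":"unclear","emotion":"resignation"},'
       '{"id":"ytc_b","responsibility":"alien","reasoning":"virtue",'
       '"policy":"unclear","emotion":"fear"}]')
print(len(validate_batch(raw)))  # prints 1
```

Looking up a single comment by ID is then just a dictionary built over the validated rows, e.g. `{row["id"]: row for row in validate_batch(raw)}`.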