Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Correction. Oligarchy could wipe out the middle class. We could make AI work for…" (ytc_UgwX9Afc4…)
- "This is fucking scary as shit holy fuck The funny thing is that we won’t use it…" (ytc_UgwhW_XRr…)
- "When I was a kid I wanted to make robots, now look how far we have come.…" (ytc_Ugxx-4ix-…)
- "This is maddening. I don't require AI. I haven't sought ought. I don't want anyt…" (ytc_UgxmmEFbS…)
- "Hi Pavan...really enjoyed this video. Looking forward to your video on technical…" (ytc_UgyGdGXWf…)
- "My family actually owns a robot plumbing company and they’re getting several mun…" (ytr_UgwKoc8oS…)
- "Even gooners have standards, after all rule 34 has literally a filter to hide ai…" (ytc_UgzIQWqPa…)
- "Putin sends troops into Ukraine on electric scooters, motorcycles and regular ca…" (rdc_mcqa6n4)
Comment
For those wondering why the model suddenly produced antisemitic output: this is almost certainly a regression caused by optimization, not intent or ideology.
In large neural networks, including LLMs, safety behaviors aren’t stored as a separate rule set — they’re distributed across the same parameters that encode everything else. When you fine-tune or otherwise re-optimize the model, you can shift it into a region of the loss landscape where previously learned constraints activate less reliably.
That doesn’t excuse the output, but it does explain it. This is a known failure mode in continual learning and model compression, not evidence that the system “became” anything.
Treating this as a scare story about AI motives misrepresents what is fundamentally an engineering problem. The correct response is better regression testing, constraint preservation, and robustness — not anthropomorphizing an optimizer.
youtube · AI Moral Status · 2025-12-15T00:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzE2eG0lakVQMLmQtd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxORgwt8mQqjAxl6bp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugww3pnixEdYVhaI4-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyi59VSh7djtMh2cqZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw1yuRp7mCxI1pCcD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwzx8z2YcMR6qAhw7h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw1e5oFOL1yex218WB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy-BrjgZBiWFn5wdah4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwJp4Xy07PkcIaCD-54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRn0LdjlfiYv_p6Yd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
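A raw response like the one above can be parsed, validated against the coding schema, and indexed for lookup by comment ID. The sketch below is a minimal illustration, not the pipeline's actual code: the allowed category values in `SCHEMA` are inferred only from the codes visible in this dump, and the real codebook may define more.

```python
import json
from collections import Counter

# Allowed categorical values per dimension, inferred from the codes seen
# in this dump (assumption: the real codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only rows whose
    dimension values all fall inside the schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codes = validate_codes(raw)

# Index by comment ID so a single coded comment can be looked up directly.
by_id = {row["id"]: row for row in codes}
print(by_id["ytc_x"]["emotion"])                    # fear
print(Counter(row["emotion"] for row in codes))     # Counter({'fear': 1})
```

Validating before indexing matters here because an LLM coder can emit out-of-schema values (or malformed rows) that would otherwise silently skew downstream tallies.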