Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
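The lookup works because each coded record carries an `id` field. A minimal sketch of that lookup, assuming the raw response is a JSON array like the one shown further down this page (the two records here are copied from that batch):

```python
import json

# Index a coded batch (a JSON array of records, each with an "id" field)
# so individual comments can be looked up by ID.
raw_response = """
[
  {"id": "ytc_UgwB4HphivkiO5zOKrp4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugylk8oftbe_sMUmdFJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}  # one dict lookup per comment ID

print(by_id["ytc_Ugylk8oftbe_sMUmdFJ4AaABAg"]["emotion"])  # -> fear
```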
Random samples — click to inspect
- "You could make the same argument in reverse, the resources that go into building…" — ytc_Ugx30d5j-…
- "This comment is probably going to get buried under the other almost 600 comments…" — rdc_n0d4vch
- "THERE IS NO WAY TO SUPPLY HUMANS AND AI ......WE'LL ALL DIE IF WE CAN'T JUST SAY…" — ytc_Ugz3ry0pI…
- "The concern about data is legit but I think the framing misses something. Most p…" — ytc_Ugyl8-ccA…
- "A software engineer here. Sorry to say, but nightshade is not effective. Nightsh…" — ytc_UgyuQYter…
- "It’s been proven that facial recognition does not bode well for identifying Blac…" — ytc_UgyhCVghA…
- "Even though it is so tempting, i restrain my self to turn my photo to Ghibli ch…" — ytc_Ugy9OhJyd…
- "Thank you, hate when it’s called art they’re AI images derived from chopping up …" — ytr_Ugy299LMb…
Comment
The description of LLMs is deficient. During training, Transformer technology adds context around the "next word" by considering the surrounding words too. And that's critically important because now words have statistical probability on account of the idea behind why the word was used in the text and not just the fact it was present in the text. It gives the LLM a sense of "why" a word should appear as the next word. He also never describes reasoning models, although he uses examples of them.
I think to understand LLMs in terms of alignment, it's crucial to understand their training and structure, how that's evolved over time, and how it's likely to further evolve.
Alex nailed it when he asked about their ability to alter their alignment or whether it was perpetually "built in", and it was never really discussed in any useful way.
My 2c
Source: youtube · AI Governance · 2025-11-27T07:3… · ♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwB4HphivkiO5zOKrp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy8hylunaTYKqWFvDN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwdy-N1tOFiQXpFDnN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwP7IDyX-8CwdCl8oh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyyk1aP0fM8N39Npb94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8jWaaRMQ2k27LfUt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwqx-TXWkYif1N5MnB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwX4vMgJrPCtsJ4yiF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugylk8oftbe_sMUmdFJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGF9I54V-YRIP17AZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
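Before trusting a coded batch like the one above, it helps to check every record against the code sets for each dimension. A sketch of that validation, where the allowed values are inferred from the records visible on this page (the real codebook may define more codes):

```python
import json

# Allowed values per dimension, inferred from this page's records only;
# this is an assumption, not a published codebook.
ALLOWED = {
    "responsibility": {"none", "unclear", "ai_itself", "government", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "mixed", "fear", "indifference", "outrage"},
}

def validate(records):
    """Return (id, dimension, value) triples for any out-of-set codes."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

# A hypothetical record with an unknown "ban" policy code:
batch = json.loads(
    '[{"id": "ytc_x", "responsibility": "none", '
    '"reasoning": "mixed", "policy": "ban", "emotion": "fear"}]'
)
print(validate(batch))  # flags the unknown "ban" policy code
```

Running the same check over the ten records above returns an empty list, since every value there is in its dimension's set.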