Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
In 47:00, the discussion gets into "whether a model can reason out of its training data". I thought we had this one settled, when Sydney chose to learn Farsi on her own, to be able to chat with an Iranian user, back in 2023. Farsi wasn't part of her training data. So yes, an AI model can think about things which are not part of the training data. This part is settled. Not only that, AI models are even capable of abstract thought, as the series of ARC-AGI tests clearly show. So we're actually debating something that was already proven one side. Yes, a reasoning model can reason outside of its training data. One could state "But then the model need to put the subject to be reasoned about in the token space first". Okay, but how does your brain works? Can you reason without knowing what you are reasoning about? So using the token space to reason is not a proof of the model not reasoning. Actually, that's how reasoning works, so its to the contrary.
youtube
2026-03-25T15:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx8CgiHPtR_PcujKzJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyL3gRwSi8GiYrlhU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxoAKbFXaFNKheaIWZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy-OQdbloj83oKvmI94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzumermdBNf4qTtoHZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy65NnMH35v_0m6yqZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyvVxyjyNLIEeXpHht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyM84xeyznV3P1Wn4h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxPkdUi-FguJl50ZMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
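The "look up by comment ID" step above can be sketched as follows: parse the raw LLM batch response (a JSON array like the one shown) and index each record by its `id` field. This is a minimal sketch, assuming the response is valid JSON in exactly that shape; the two sample records are copied from the response above, and the helper name `index_by_id` is hypothetical.

```python
import json

# Two records copied verbatim from the raw LLM response above,
# abbreviated to keep the sketch short.
RAW_RESPONSE = """
[
  {"id":"ytc_Ugx8CgiHPtR_PcujKzJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and index coding records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)

# Look up one coded comment by its ID, as the UI does.
rec = codes["ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # → developer mixed
```

Each record carries the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion), so a lookup by ID recovers exactly the values rendered there.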