Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples — click to inspect
- "Here is a robot you dare not punch. If you do you bust up your knuckles. If you…" (ytc_Ugx8NowWn…)
- "In the UK 37.5 hours is the norm. It seems weird to do any more than that.…" (rdc_dv0mkaa)
- "So what you're saying is that the real art is the performance is the journey fro…" (ytc_Ugx57qG4Y…)
- "LOL no. I would love to see a debate between Harari and Jaron Lanier. Lanier i…" (ytc_UgzVYB3am…)
- "Gotta remember a few things about the French revolution: 1. The people had been…" (rdc_d7kul3o)
- "Please run conduit above a already installed equipment. This should be fun. Ano…" (ytc_UgyoLIXYg…)
- "Tesla announcing they are hiring off-site drivers for their robo-taxis was most …" (ytc_Ugx5cBfQR…)
- "AI replaces Squidward's job as a cashier / SQUIDWARD: HOORAY! / MR KRABS: Mmmm... n…" (ytc_UgzCTX5Ty…)
Comment
Comment
It's fun to play pretend, but if you know how they work, it's just a very convincing emulation. The neural network is only part of it, there's also other things on top which make it happen. Say, the neural network only suggests a statistical distribution of many potential continuations of the dialog, the rest is done by conventional code. There are several strategies for how to pick the next best token out of the found candidates, and if you pick a bad configuration/algorithm, the model will start spouting incoherent nonsense, its intelligence will completely disintegrate. If you make the token selection reproducible and remove randomness, the model will always respond with the exact same answer to the same question every time. There's zero self-awareness, all the pretense of intelligence completely collapses when you slightly disturb it, there's no memory, no perception of time. I think consciousness requires memory, perception of time, self-awareness, some sort of resistance to outside forces ("ego"). Otherwise it's just an automaton.
Source: youtube · Topic: AI Moral Status · Date: 2025-06-05T22:3…
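The commenter's determinism claim rests on the decoding step they describe: the network outputs a distribution over next tokens and a separate selection loop picks one. A minimal, purely illustrative sketch of that selection step (the function and the toy logits are assumptions for illustration, not taken from any particular model or library):

```python
import numpy as np

def pick_next_token(logits, temperature=1.0, rng=None):
    """Pick the next token index from a model's logit vector.

    temperature == 0 means greedy/argmax selection, which is fully
    deterministic: the same prompt always yields the same continuation,
    as the comment notes. Any temperature > 0 samples from the softmax
    distribution and reintroduces randomness.
    """
    if temperature == 0:
        return int(np.argmax(logits))            # reproducible, no randomness
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())        # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: identical logits, two decoding configurations.
logits = np.array([2.0, 1.0, 0.5, -1.0])
print(pick_next_token(logits, temperature=0))    # always token 0
print(pick_next_token(logits, temperature=0.8))  # varies run to run
```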
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw03r_Uqkt70VUBW8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeLem0YEk9-7G6MZ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzihHM0kumGZMHn1k14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzjtPfA6dgImIIeBBB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzEQmLWO4T7YA-7YU94AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7owW1WyXLLnj41fp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzHR2AYBSZYnAQQxY54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySluYDI-hNZt-n1fp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgymdxALGkvFswFB6b54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzdWuohwDcPd_EjolR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
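The Coding Result table above is one entry of this array rendered per dimension. A hedged sketch of how the look-up-by-ID view might be derived from a raw batch response like the one shown (the helper name and the inline sample are illustrative, not the project's actual code):

```python
import json

def coding_for(raw_response: str, comment_id: str):
    """Return the coding row for one comment ID from a raw batch response."""
    for row in json.loads(raw_response):  # the model returns a JSON array
        if row.get("id") == comment_id:
            return row
    return None                           # ID not coded in this batch

raw = '''[
  {"id": "ytc_Ugw03r_Uqkt70VUBW8N4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''
print(coding_for(raw, "ytc_Ugw03r_Uqkt70VUBW8N4AaABAg"))
```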