Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's fun to play pretend, but if you know how they work, it's just a very convincing emulation. The neural network is only part of it; there are also other things on top which make it happen. Say, the neural network only suggests a statistical distribution over many potential continuations of the dialog; the rest is done by conventional code. There are several strategies for picking the next best token out of the found candidates, and if you pick a bad configuration/algorithm, the model will start spouting incoherent nonsense and its intelligence will completely disintegrate. If you make the token selection reproducible and remove randomness, the model will always respond with the exact same answer to the same question every time. There's zero self-awareness, all the pretense of intelligence completely collapses when you slightly disturb it, there's no memory, no perception of time. I think consciousness requires memory, perception of time, self-awareness, and some sort of resistance to outside forces ("ego"). Otherwise it's just an automaton.
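The token-selection behavior the comment describes can be sketched in a few lines: greedy decoding (always taking the highest-scoring token) is fully deterministic, while temperature sampling draws randomly from the distribution. This is a minimal illustration, not any particular model's decoder; the logits and vocabulary size are invented for the example.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def greedy(logits):
    # Deterministic: the same logits always yield the same token index,
    # which is why removing randomness makes answers reproducible.
    return max(range(len(logits)), key=lambda i: logits[i])

def sample(logits, temperature=1.0, rng=random):
    # Stochastic: dividing logits by the temperature sharpens (<1) or
    # flattens (>1) the distribution before drawing a random token.
    probs = softmax([x / temperature for x in logits])
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5, -1.0]  # invented scores for a 4-token vocabulary
assert greedy(logits) == greedy(logits)  # same input, same token, every time
```

A badly chosen decoding configuration (e.g. a very high temperature) flattens the distribution toward uniform, which is one way the "incoherent nonsense" failure mode the comment mentions can arise.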
youtube AI Moral Status 2025-06-05T22:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw03r_Uqkt70VUBW8N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxeLem0YEk9-7G6MZ54AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzihHM0kumGZMHn1k14AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzjtPfA6dgImIIeBBB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzEQmLWO4T7YA-7YU94AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx7owW1WyXLLnj41fp4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzHR2AYBSZYnAQQxY54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgySluYDI-hNZt-n1fp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgymdxALGkvFswFB6b54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzdWuohwDcPd_EjolR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
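A raw response like the one above can be validated before it is trusted as coding output. The sketch below, using only the standard library, checks that every record carries the five fields shown in the payload and then tallies one dimension; the excerpt reuses two records from the response, and the validation rules are an assumption, not part of the original pipeline.

```python
import json
from collections import Counter

# Excerpt of the raw LLM response above (two records shown for brevity).
raw = '''[
 {"id": "ytc_Ugw03r_Uqkt70VUBW8N4AaABAg", "responsibility": "none",
  "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
 {"id": "ytc_UgySluYDI-hNZt-n1fp4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Field names taken from the payload; treating them as required is an assumption.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')} is missing fields: {missing}")

# Tally a single coding dimension across the parsed records.
tally = Counter(r["responsibility"] for r in records)
print(tally)
```

Failing loudly on a missing field catches truncated or malformed model output early, before partial codings are written into the table above.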