Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Large language models are a breakthrough for humanity. We discovered that if you make a complex, multidimensional model of language and then ask it to simply predict language patterns, then intelligence-like behavior seems to emerge just from language alone. That is an astonishing and important discovery: things that look like complex behaviors and reasoning come from language modeling alone. But then we went wrong when some people decided this WAS intelligence and reasoning, and started to treat a model of language like it could think and make actions and decisions. It cannot. It is just a language model. I feel like if we treated them like what they were and explored the limits of what we can do with these amazing models we would get something incredibly beneficial. We can explore all the hidden relations in language we never realized. We could tweak certain tokens in strange directions to force surprising metaphors. There are so many things you can do with a model like that. But what you can't do with it is expect it to know things and think things.
youtube AI Moral Status 2025-10-30T22:2… ♥ 66
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzKGFzOfu7bIUe0T1F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxD6SmwAXJAjIrvHk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw8aRpaCBFPGZngOlN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-cqGQwQgMUS64zjd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwlJvJOJh0vhdqJiOd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzzJxdB032j4nXLdjp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzcvYv9pPqb6vLCPyJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAgdTksNK21719Z7V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwGH9cCpJAkbZd1AwR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzpaLDu33gCqYXM7pJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
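The raw response above is a JSON array of per-comment codes, one object per comment id. A minimal sketch of how such a response might be parsed and the code for a single comment looked up and validated; the allowed-value sets below are inferred from the records shown here, not from an actual codebook, and the `coded_record` helper name is illustrative:

```python
import json

# A truncated copy of the raw LLM response shown above (two of the ten records).
raw = '''[
  {"id":"ytc_UgzcvYv9pPqb6vLCPyJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKGFzOfu7bIUe0T1F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]'''

# Allowed values per dimension, inferred from the records in this view;
# the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "government", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "indifference", "resignation", "fear"},
}

def coded_record(raw_json: str, comment_id: str) -> dict:
    """Parse the model output and return the coded dimensions for one comment."""
    records = json.loads(raw_json)
    by_id = {rec["id"]: rec for rec in records}
    rec = by_id[comment_id]
    # Reject values outside the (assumed) codebook so bad model output is caught early.
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            raise ValueError(f"unexpected value for {dim}: {rec.get(dim)!r}")
    return rec

rec = coded_record(raw, "ytc_UgzcvYv9pPqb6vLCPyJ4AaABAg")
print(rec["emotion"])  # approval
```

Looking records up by id rather than by position keeps the check robust if the model returns the array in a different order than the comments were sent.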