Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My problem with the "philosophers will figure out whether this qualifies as 'reasoning'" line is that it still assumes way too much about what the models are actually doing, that "oh, yeah, maybe it *technically* might not be reasoning but hey, it's definitely in that direction". No, it's also possible that LLM chain of thought-style reasoning is ultimately a dead end, that it can sort of do things that look like reasoning but too much is fundamentally missing to lead to even human-level intelligence. Clever Hans wasn't a sign that you could one day teach horses to do math, he was responding to subconscious physical cues unrelated to any numbers, and it's entirely possible you can mimic basic reasoning with syntactical analysis but not anything more advanced, or being far too inefficient to practically do so, like building with bricks and no mortar. And for the obvious counterpoint, yes, technology improves and things get better, but not always and not in every category; fusion and superconductors with reasonable requirements have been just twenty years away for decades, but it always turned out the challenges were much harder than expected. That's why I take so much issue with the above, language affects how we think about things and evaluate evidence, and handwaving "reasoning" as ultimately a philosophical point avoids confronting whether the thing is what we think it might be, or whether it's a very clever facsimile that can't succeed with larger tasks. Talking like LLMs actually understand anything, even with all the caveats in the world, predisposes us to evaluate it in those terms. (I have a whole bunch of rants on these topics and the misrepresentations of what AI is actually doing these days, the aside about hidden states or "knowing it's being tested" being two others, but I have limited time and energy to put together a YouTube comment :P )
youtube · AI Moral Status · 2025-10-30T20:0… · ♥ 42
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   none
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz1lxfTWilYllBJG5F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz4kHbcpJBOP46Ifl14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw7WniCkN-N8KLJgbp4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxLv3EAXxRBQZrzcH54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwuNrHVIO76mi4l9al4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwhJHE0Xw6pRv7TYz94AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw33QRQLgC9LkVEuDB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyaGTEsAQ1XU_TmzZR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy2z9Qt1hW3GTC3v4V4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwA7PYa6nANsdVzNGF4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
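A minimal sketch of how a raw batch response like the one above could be parsed and validated before use. The allowed label sets below are inferred only from the values visible on this page and are likely incomplete; the function name `parse_coding_response` is hypothetical, not part of any tool shown here.

```python
import json

# Assumed label sets per dimension, inferred from this page's values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"outrage", "approval", "indifference", "fear", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    dropping any row whose labels fall outside the allowed sets."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[row["id"]] = codes
    return coded

# Usage on one row of the raw response shown above:
raw = ('[{"id":"ytc_UgwhJHE0Xw6pRv7TYz94AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"unclear","emotion":"mixed"}]')
coded = parse_coding_response(raw)
```

Validating against an explicit label set catches the common failure mode of an LLM coder inventing labels outside the codebook; dropped rows can then be re-queued rather than silently polluting the dataset.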