Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A good example of this is chess. An LLM will tell you all the rules and tactics, but when you ask it to play its obvious it doesn’t REALLY understand any of it. It does all sorts of odd things like jumping queens over other pieces or moving pieces completely off the board. It’s “smart” because it can regurgitate all the right words but it can’t conceptualize anything.
youtube AI Moral Status 2025-11-01T07:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzV42tk9RzMUCIlPSx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy0-8IOORn442PHOTR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyZHWYCwaaxG5KJRBV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyN1MxzeDyN_bc8yid4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwmO9GUr2pYKn9PQmJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxVmacntCEhwlW7MMh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxMX8rJxl-gD74Tw7N4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw6jyWTPCZbNoj29EV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzOeA4j9MJvJ_mDLv94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxu84KEN_5gy_ufcqV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
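A minimal sketch of how a raw response like the one above can be mapped back to a single comment's coding result. This assumes the model returned valid JSON (not guaranteed in practice); the two entries are copied from the actual response above, which contains ten in total.

```python
import json

# Raw LLM response: a JSON array with one coding object per comment.
# (Two entries copied from the full ten-entry response above.)
raw = """[
  {"id": "ytc_UgxMX8rJxl-gD74Tw7N4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzV42tk9RzMUCIlPSx4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

# Index the codes by comment id so one comment's result can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

result = codes["ytc_UgxMX8rJxl-gD74Tw7N4AaABAg"]
print(result["responsibility"], result["emotion"])  # developer indifference
```

Note that the coding result shown earlier (developer / consequentialist / none / indifference) corresponds to the `ytc_UgxMX8rJxl-gD74Tw7N4AaABAg` entry in this batch response.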