Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To me, the Chinese room thought experiment just describes a chatbot. You can very easily prove that the chatbot doesn't actually understand what you're asking it, that it's only barfing out pre-determined answers. All you have to do is ask it to be creative. Even a little bit. Don't ask "What's your favorite color," ask "Help me pick a title for a book I'm writing."
youtube 2016-08-09T02:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugil3puWXCClcngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ughir-cce2cYSHgCoAEC", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UghNAI8IyFSyMHgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugh5FAMOWMh3sngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ughk5LigTfGDBXgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UghBoo78qZikFngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UghwRMmluHllyngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UggB_SrR-5TEvHgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugg6Pz8L6syc9ngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgggkMex7Hd3uXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
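A minimal sketch of how a raw response like the one above can be checked: parse the JSON array and look up the four coded dimensions for a single comment by its id. The `coding_for` helper is hypothetical (not part of any pipeline shown here), and the two entries below are copied from the raw response above.

```python
import json

# Hypothetical helper: parse a raw LLM response (a JSON array of coded
# comments) and index the codings by comment id for quick lookup.
raw_response = '''[
  {"id": "ytc_Ugil3puWXCClcngCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ughir-cce2cYSHgCoAEC", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]'''

codings = {entry["id"]: entry for entry in json.loads(raw_response)}

def coding_for(comment_id):
    """Return the four coded dimensions for one comment, or None if absent."""
    entry = codings.get(comment_id)
    if entry is None:
        return None
    return {k: entry[k] for k in ("responsibility", "reasoning", "policy", "emotion")}

print(coding_for("ytc_Ughir-cce2cYSHgCoAEC"))
```

Indexing by id makes it easy to cross-check the parsed coding result shown in the table against the exact model output for the same comment.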