Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'd like to hone in on the idea of the Chinese room a little bit more. Remember that we take as an assumption that the room convinces native speakers that it knows Chinese, so we can always assert it will respond as believably as a human would. Therefore the books must be immensely vast and detailed, or contain some algorithm for generating new text, because there are so many variations the Chinese speakers can ask of the room, all of which demand realistic answers:

* Teach it to play chess (or any other game), then play correspondence chess with it. The room must either have a *fully complete* list of responses to chess moves, or be able to instruct the human worker to write new books *in Chinese* upon learning the game. If it can't play chess (even poorly, as a new human player might), then it is not convincing.
* Ask the same question over and over again. To be believable as a real Chinese speaker, you would expect it to eventually give different answers to the same question, like asking a question back ("why do you keep asking me this?") or appearing angry ("gosh darn it, stop asking me the same questions!"). So the instruction books aren't simply "if [xyz] then [ljk]"; they are vastly more complex.
* Ask the room to review some media, like a sci-fi novel. The room must either already have instructions for how to write a review of this novel, or must have a way to generate such instructions when fed a new book (as in the chess situation). Indeed, the native speakers can *write a novel and ask for a review*, so the room *must have a way to review the novel without any record of the novel beforehand*.

-----

Given the abilities of the room, can we really doubt that the system, as a whole, does indeed know Chinese?
youtube 2016-08-09T08:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgjAEXe8eAPPKXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UghnMNcWHjqirXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UghOoR5pLNkFAXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgjpWVyxx3sWy3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UggRI4YxpD4AingCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugix1MRH1ekV_3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgjIduvw8s80DXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgiILC4paR-bQngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugh58EwEXIoq23gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgigHUlJMkVAUngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
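As a minimal sketch (not part of the original tooling), a raw response like the one above could be parsed and sanity-checked before its per-dimension codes are stored. The allowed code values below are inferred only from the codes that appear in this document; the real codebook may permit more.

```python
import json

# One record copied verbatim from the raw response above.
RAW = (
    '[{"id":"ytc_UgjAEXe8eAPPKXgCoAEC","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"approval"}]'
)

# Dimension vocabularies inferred from this page alone (an assumption).
ALLOWED = {
    "responsibility": {"none"},
    "reasoning": {"unclear", "mixed", "deontological", "consequentialist"},
    "policy": {"none"},
    "emotion": {"approval", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: code}}."""
    records = json.loads(raw)
    out = {}
    for rec in records:
        codes = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in codes.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
        out[rec["id"]] = codes
    return out

codes = parse_codes(RAW)
print(codes["ytc_UgjAEXe8eAPPKXgCoAEC"]["emotion"])  # approval
```

This keeps the model output itself untouched while catching malformed or out-of-vocabulary codes early.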