Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The paper you are citing says nothing about philosophical terms like thinking or reasoning. It actually just analyzes the effectiveness of chain of thought reasoning on tiny gpt2 tier models. We have a lot of evidence from large models that cot is effective. The fact you are citing it for this purpose shows you didn't read and are just consuming headlines to reinforce your preexisting bias. One might even say, you aren't thinking or reasoning...
Source: reddit · AI Responsibility · 1754846264.0 (Unix timestamp) · ♥ 15
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_n7zdkc8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_n7yxivd","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"rdc_n7z8ryk","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_n80wgws","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_n7yhqvt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})
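Note that the raw response opens its array with `[` but closes it with `)`, so strict JSON parsing fails, which is consistent with every dimension in the coding result landing on "unclear". A minimal sketch of how such a pipeline might parse responses defensively (assuming a Python pipeline; `parse_coding_response` and the fallback behavior are illustrative, not the actual implementation):

```python
import json

# Fallback row used when the model output cannot be parsed:
# every dimension is coded "unclear" rather than guessed.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}


def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response into a list of coded rows.

    On malformed output (e.g. an array closed with ')' instead of ']'),
    return a single all-'unclear' row instead of raising.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        return [dict(UNCLEAR)]
    # Guard against a bare object or other non-list payload.
    return rows if isinstance(rows, list) else [dict(UNCLEAR)]


# Well-formed payload parses normally; a trailing ')' triggers the fallback.
ok = parse_coding_response('[{"id":"rdc_n7zdkc8","emotion":"fear"}]')
bad = parse_coding_response('[{"id":"rdc_n7zdkc8","emotion":"fear"})')
```

Under this sketch, `ok` would contain the coded row, while `bad` collapses to the all-"unclear" fallback shown in the coding result above.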