Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> I'm wondering how you would attempt to falsify or prove the hypothesis?

It wasn't my hypothesis so not up to me to (dis)prove. But I don't know if I can. I only know that I wasn't convinced, even though the idea is appealing to me.

> I think the demand for a "right answer" proves dangerous when it causes us to dissociate from the actual situation at hand and over-think things,

Agreed. Though this is also a problem if you believe there is no right answer, I would think, causing some to freeze up for not having any basis to choose. "Right" for some may also have more to do with social judgement than other outcomes.
Source: reddit · AI Moral Status · Timestamp: 1584210163.0 · Score: 5
Coding Result
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: indifference
Coded at: 2026-04-25T08:06:44.921194
Raw LLM Response
[
  {"id": "rdc_fki1pg5", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fkj3bpg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fln9y8i", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_fmg45du", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_fmg46l4", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
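The raw response is a JSON array covering a batch of comments, so the coded dimensions for any one comment can be recovered by parsing the array and indexing it by `id`. A minimal sketch (the `raw` string below is an abbreviated copy of the response above; the variable names and this parsing step are illustrative, not part of the original pipeline):

```python
import json

# Abbreviated copy of the raw LLM response shown above.
raw = """[
  {"id": "rdc_fki1pg5", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_fkj3bpg", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]"""

# Parse the array and index each coded record by its comment id.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the record for this page's comment.
record = codes["rdc_fki1pg5"]
print(record["reasoning"])  # mixed
print(record["emotion"])    # indifference
```

Indexing by `id` also makes it easy to spot comments the model skipped or duplicated in a batch, by comparing the dictionary's keys against the list of submitted ids.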