Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One: I don't *want* an ASI controlled by humans. That sounds like a recipe for disaster.
Two: any LLM trained on human data has shown human-like behaviour so far (acts better when treated politely, can get defensive, etc.), so there is a big chance an AGI will show human traits too, for better or worse.
Three: people claim an ASI will a) disobey orders while simultaneously b) blindly follow its reward function without reflecting on it. How does that work together, exactly?
Source: reddit · AI Governance · 1708151683 (Unix timestamp) · ♥ −4
Coding Result
Dimension        Value
---------        -----
Responsibility   distributed
Reasoning        mixed
Policy           unclear
Emotion          fear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_kqtxh2q","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_kqvite6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_kqsym35","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"rdc_kqt27w7","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"rdc_kqt10c9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
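The model codes comments in batches, so the per-comment result shown on this page has to be picked out of the batch array by its `id`. A minimal sketch of that lookup, assuming the response parses as standard JSON and that this comment's id is `rdc_kqsym35` (the id whose values match the coding result above):

```python
import json

# Verbatim batch response from the model, as shown above.
raw = '''[
  {"id":"rdc_kqtxh2q","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_kqvite6","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_kqsym35","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"rdc_kqt27w7","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"rdc_kqt10c9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]'''

# Index the batch by comment id for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# The comment on this page is assumed to be rdc_kqsym35;
# its row matches the coding result table above.
record = codes["rdc_kqsym35"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# → distributed mixed unclear fear
```

In practice the parse step should be wrapped in error handling, since raw LLM output is not guaranteed to be valid JSON.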