Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> the real risk is not

I feel like any time someone makes this statement, OP included, they've fallen into a cognitive trap or a discourse trap (as in OP provided this "real risk" framing, and now you're responding in kind, as per Socrates). There are a whole host of serious problems on the horizon revolving around AI, AGI and ASI, and they are ALL serious. There is not "one real risk" given there's no way to reliably quantify them.

> its boring corporate deployment decisions made without any ethical framework at all, scaled to millions of users. that is already happening and nobody is stopping it.

This is very true, but rogue super-human-capacity (as in knowledge) or super-human-ability (as in any system connected to a wide network of real world effectors) or super-human-intelligence AI *still* pose a risk.
Source: reddit · Post: Viral AI Reaction · Timestamp: 1776999017.0 · ♥ 1
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_ohwbqpk", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ohx6fhf", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate",  "emotion": "approval"},
  {"id": "rdc_ohxye78", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"},
  {"id": "rdc_ohydzo1", "responsibility": "developer",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"},
  {"id": "rdc_ohyv3kr", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "unclear",   "emotion": "fear"}
]
```
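To inspect the model output for a single coded comment, the raw response can be parsed and indexed by comment id. The sketch below is a minimal example, assuming the raw response is always a valid JSON array of objects with the five fields shown above; the function name `code_for` and the truncated sample payload are illustrative, not part of the coding pipeline.

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array with one
# coding record per comment id (fields as in the table above).
raw_response = """
[
  {"id": "rdc_ohwbqpk", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_ohxye78", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]
"""

def code_for(comment_id: str, raw: str) -> dict:
    """Return the coding record for one comment id; raise KeyError if absent."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id[comment_id]

# Look up the record that produced the table above.
record = code_for("rdc_ohxye78", raw_response)
print(record["responsibility"], record["emotion"])
```

In practice the lookup would run over the full five-record response; a `KeyError` on lookup is a quick signal that the model dropped or renamed a comment id.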