Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So "authoritative control" is a problem, but the posts citing it as the main concern I think are missing the bigger point. We are on the cusp of not being able to trust any piece of text, any image, any video, *ever again*. That's not hyperbole. If generative AI continues to iron out its flaws at this rate there will be literally no way to differentiate between AI-generated content and human-generated content. I repeat, if it reaches 1:1 parity with reality, this becomes an *unsolvable problem.* That's not defeatist, that's physics. A photon is a photon. It doesn't care how it originated. That is much, much more concerning to me than surveillance or weapons. We're on the cusp of people being murdered and wars being fought entirely on the back of AI-generated misinformation. If we can't figure out a solution to that, we're just *fucked*.
reddit AI Governance 1682974563.0 ♥ 67
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jihaisn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jihf1jj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jihtqpd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jiicebe", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jifh2bi", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
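A minimal sketch of how the Coding Result table can be recovered from the raw response, assuming Python with only the standard library: the raw LLM response is a JSON array of coded records, one per comment, and indexing it by `id` yields the per-dimension values. The id `rdc_jihaisn` is the first record in the raw response above; which id corresponds to the displayed comment is not stated on this page.

```python
import json

# The raw LLM response, copied verbatim from the page above.
raw = """[
  {"id": "rdc_jihaisn", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jihf1jj", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jihtqpd", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jiicebe", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jifh2bi", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]"""

# Index the records by comment id for O(1) lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Pull the four coded dimensions for one record.
row = records["rdc_jihaisn"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# prints: none consequentialist none fear
```

In a batch coding run, parsing the whole array at once and joining on `id` is what makes a single model call cover several comments while still producing one table row each.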