Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
model-based AI cannot do something maliciously because there is no intent or reasoning behind them. Think Chinese Room. Here's how different things that are labeled as "AI" will make the nukes fly:
- True thinking machines (does not exist): they hate us
- LLMs: hallucinate that we asked them to let the nukes fly
- algorithmic: the numbers say the best thing to do is let the nukes fly
- diffusion: thinks that the next step has to be letting the nukes fly
- Asimov robots (does not exist): we are bad at programming
- automation/traditional programming: a poorly-defined if/else statement puts us into the wrong decision tree leading to the nukes fly (we are... bad at programming)
Source: reddit · AI Governance · 1752784872.0 · ♥ 6
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n3p5zpq", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3r4v5w", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_n3me7bl", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_n3mfyhf", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n3mqp9r", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
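The raw response above is a JSON array with one object per coded comment, each carrying an id plus the four coding dimensions. A minimal sketch of how such a response might be parsed and looked up by comment id (the field names come from the JSON above; `codes_by_id` is an illustrative helper name, not part of any tool shown here):

```python
import json

# Two records copied from the raw LLM response above, kept short for illustration.
raw = (
    '[{"id":"rdc_n3p5zpq","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_n3r4v5w","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

# Parse the JSON array into a list of dicts, one per coded comment.
records = json.loads(raw)

# Index the codings by comment id so a single comment's result can be inspected.
codes_by_id = {rec["id"]: rec for rec in records}

print(codes_by_id["rdc_n3p5zpq"]["responsibility"])  # ai_itself
print(codes_by_id["rdc_n3r4v5w"]["emotion"])         # outrage
```

Keying on `id` assumes each id appears once in the response, which holds for the array shown above.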