Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Presumably one reason is because at least one version of DeepSeek is running on AMD cards, suggesting that NVDA's CUDA library/infrastructure moat isn't as robust as people thought? It isn't clear if they did both the training and inference on AMD or just the inference (which I've been told is supposedly easier on AMD) ex: https://www.amd.com/en/developer/resources/technical-articles/amd-instinct-gpus-power-deepseek-v3-revolutionizing-ai-development-with-sglang.html
reddit · AI Responsibility · 1737982312.0 · ♥ 303
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fg4l704", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_fg0urnf", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_m9fybnb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m9fzqdy", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_m9gdylp", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
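Because the raw LLM response is a plain JSON array with one object per coded comment, the per-dimension values for any comment id can be pulled out with standard JSON parsing. The sketch below is illustrative only: the `parse_codings` helper is hypothetical (not part of the tool), and the inline string is a two-record excerpt of the response shown above.

```python
import json

# Two-record excerpt of the raw LLM response shown above.
raw = (
    '[{"id":"rdc_fg4l704","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_m9fybnb","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"approval"}]'
)

def parse_codings(response_text: str) -> dict:
    """Map each comment id to its coded dimensions (hypothetical helper)."""
    records = json.loads(response_text)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = parse_codings(raw)
print(codings["rdc_fg4l704"]["emotion"])  # indifference
print(codings["rdc_m9fybnb"]["reasoning"])  # consequentialist
```

Keying the result by comment id makes it easy to cross-check a single row of the coding table (e.g. the Emotion value above) against the exact model output it came from.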