Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Without a paradigm shift, LLMs can never be safe. That simple. LLMs are probabilistic systems. If a model has been trained on data that is statistically related to a given concept, there is no known method to guarantee that it will never generate related content. Because LLMs do not know anything, much less understand any concepts.

Even lesser, ever going to yield AGI so the $3t is sunken costs. And absolutely impossible... it replaces all jobs, and then nobody has any money to buy things from the companies which replaced their jobs, and then everybody goes broke because Sam Altman has all of their money somehow.

As prompts, outputs, RAG contexts etc are composed and chained, the system becomes increasingly stochastic. Small residual probabilities can resurface through indirect inference, paraphrasing, or recombination, effectively amplifying tail risk.

There is no known technique to selectively and completely remove all latent representations associated with specific training data. Mitigation approaches operate by suppression, not erasure. Full retraining is the only mechanism that changes the learned distribution at a fundamental level, and even then it cannot guarantee exclusion of all functionally equivalent reconstructions.

Even if all explicit references to particular data were eliminated, the model may still regenerate similar or identical content through generalisation, interpolation, or coincidental reconstruction. This behaviour follows directly from the model learning abstract structure rather than storing discrete records.

Absolute prevention is not achievable with current architectures. Only probabilistic risk reduction is possible, and any claim of zero-risk generation is incompatible with how LLMs function in practice.

TL;DR LLM chatbots will always give you nudes or nukes, and these government ministers are literally burning money in datacentres to get us Will Smith eating pasta videos, at a time when people struggle to afford to heat their o
reddit AI Responsibility 1767868628.0 ♥ 4
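The comment's claim that chaining prompts, outputs, and RAG contexts amplifies small residual probabilities can be made concrete with a minimal sketch. The per-step probability p and the independence assumption below are purely illustrative, not drawn from the log:

```python
# Minimal sketch: if each generation step independently has a small
# probability p of emitting unwanted content, the chance that at least
# one of n chained steps does so compounds to 1 - (1 - p)^n.
def chained_risk(p: float, n: int) -> float:
    """Probability that at least one of n independent steps fails."""
    return 1.0 - (1.0 - p) ** n

print(chained_risk(0.001, 1))    # a single step: 0.001
print(chained_risk(0.001, 100))  # 100 chained steps: roughly 0.095
```

Under these (simplified) assumptions, a per-step risk of 0.1% grows to roughly 9.5% over 100 chained steps, which is the tail-risk amplification the comment describes.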
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_oi16mt4", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_nyedk36", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_nydbnta", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_nydj895", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
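Since the model returns a batch of records, each coded comment presumably gets matched back to its record by id. A small sketch of that lookup, using the response verbatim from the log (the matching logic itself is an assumption, not shown in the log):

```python
import json

# The raw LLM response from the log, verbatim.
raw = '''[ {"id":"rdc_oi16mt4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"rdc_nyedk36","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"rdc_nydbnta","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"rdc_nydj895","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]'''

# Parse the batch and index it by record id so each comment can be
# joined to its coded dimensions.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# rdc_nydj895 carries the values shown in the Coding Result table.
print(by_id["rdc_nydj895"]["emotion"])  # resignation
```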