Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Without a paradigm shift, LLMs can never be safe. That simple.
LLMs are probabilistic systems. If a model has been trained on data that is statistically related to a given concept, there is no known method to guarantee that it will never generate related content, because LLMs do not know anything, much less understand any concepts. Even less are they ever going to yield AGI, so the $3t is sunk cost. And absolutely impossible... it replaces all jobs, and then nobody has any money to buy things from the companies which replaced their jobs, and then everybody goes broke because Sam Altman has all of their money somehow.
As prompts, outputs, RAG contexts etc are composed and chained, the system becomes increasingly stochastic. Small residual probabilities can resurface through indirect inference, paraphrasing, or recombination, effectively amplifying tail risk. There is no known technique to selectively and completely remove all latent representations associated with specific training data. Mitigation approaches operate by suppression, not erasure. Full retraining is the only mechanism that changes the learned distribution at a fundamental level, and even then it cannot guarantee exclusion of all functionally equivalent reconstructions.
Even if all explicit references to particular data were eliminated, the model may still regenerate similar or identical content through generalisation, interpolation, or coincidental reconstruction. This behaviour follows directly from the model learning abstract structure rather than storing discrete records. Absolute prevention is not achievable with current architectures. Only probabilistic risk reduction is possible, and any claim of zero-risk generation is incompatible with how LLMs function in practice.
TL;DR LLM chatbots will always give you nudes or nukes, and these government ministers are literally burning money in datacentres to get us Will Smith eating pasta videos, at a time when people struggle to afford to heat their o
reddit
AI Responsibility
2026-01-08 10:37:08 UTC (1767868628)
♥ 4
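The chaining argument in the comment above can be sketched numerically. Under the simplifying assumption that each step in a prompt/RAG chain independently carries a small residual probability p of surfacing suppressed content (an illustration, not a claim about any particular model), the chance of at least one leak across n steps compounds as 1 − (1 − p)^n:

```python
# Sketch: tail-risk amplification across chained LLM calls.
# Assumes independent per-step leak probability p (illustrative only).
def leak_probability(p: float, n: int) -> float:
    """P(at least one leak in n independent steps) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# A 0.1% per-step residual risk compounds quickly over long chains:
print(f"{leak_probability(0.001, 1):.4f}")     # 0.0010
print(f"{leak_probability(0.001, 1000):.4f}")  # 0.6323
```

This is why suppression-style mitigations reduce, but cannot zero out, the probabilities the comment describes.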
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_oi16mt4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_nyedk36","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"rdc_nydbnta","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"rdc_nydj895","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
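For reference, a raw batch response like the one above can be turned into the per-comment coding rows shown in the table with a small amount of parsing. This is a minimal sketch, assuming the response is valid JSON as displayed; the `index_codings` helper is hypothetical, and the field names are taken directly from the JSON:

```python
import json

# Two rows copied verbatim from the raw response above.
RAW = '''[
 {"id":"rdc_oi16mt4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_nydj895","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the coding rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(RAW)
print(codings["rdc_nydj895"]["emotion"])  # resignation
```

Indexing by ID makes it easy to join a coding row back to its source comment, which is how the "Coding Result" table for this comment is populated.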