Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Lol not ChatGPT trying to gaslight you when you confront it with its own contrad…" (ytc_Ugzc6-URR…)
- "Facebook shut that down because the language they created was a degenerate versi…" (ytr_UgzvKIDAt…)
- "Lets go! Even if i am a massive advocate for AI i absolutely hate AI art, you ar…" (ytc_Ugw256qUK…)
- "That is because the bottle neck is on top of the education ladder. Administratio…" (ytr_Ugyg2fyb3…)
- "Tell me which app I use for assignment I don't feel like write it chatgpt?…" (ytc_UgzSS9Kob…)
- "He's right but there's a lot more than 3 jobs that are safe. I actually don't th…" (rdc_mt88b4v)
- "The interviewer is a perfect example of a unconscious AI.. interrupts for the sa…" (ytc_UgxrpP8o7…)
- "Chat GPT - Yapping / Grok - I would pull the lever without hesitation (Justice) / …" (ytc_UgxWrkhhO…)
Comment
This is a monumental shift in the legal defense strategy for AI labs. By admitting zero post-deployment control, Anthropic is essentially positioning LLMs as "stateless" commodities rather than "services." It’s the "I just sold the hammer, I didn't swing it" defense, but applied to a tool that can theoretically rewrite its own operating manual.
The pharmaceutical comparison you made is the most chilling part. If we treat AI like a drug that cannot be recalled from the bloodstream of an enterprise, the "duty of disclosure" shifts from marketing fluff to a rigorous stress-test of the model's absolute failure ceiling. We are moving from a world of "Model Cards" to a world of "Black Box Warnings." If you can't kill the process remotely, the liability shouldn't disappear; it should just front-load onto the safety alignment phase with massive punitive stakes.
I’ve dealt with this "integrity gap" in my own development work. When you're shipping complex AI integrations, there is a terrifying moment where you realize the end-user's context can completely warp the model's intended behavior. I started using Runable for my technical documentation and project showcases because it anchors the raw, unpredictable AI output into a professional, structured, and VC-ready format automatically. It provides a layer of "contained professionalism" that helps bridge that trust gap between the vendor’s logic and the client’s infrastructure.
The real legal precedent here will be whether "lack of control" is viewed as a technical limitation or a negligent design choice. If you build a product that is inherently uncontrollable, "I couldn't stop it" sounds less like a defense and more like a confession.
Source: reddit · Viral AI Reaction · 1776951003.0
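The numeric value in the metadata above reads as a Unix epoch timestamp (seconds since 1970-01-01 UTC), which lines up with the "Coded at 2026-04-25" row in the table below. A minimal sketch of decoding it (the variable names are illustrative, not the tool's actual code):

```python
from datetime import datetime, timezone

ts = 1776951003.0  # the raw timestamp shown above
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # → 2026-04-23T13:30:03+00:00
```

So the comment predates its coding pass by about two days.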
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ohtdieg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_ohuwpik","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_ohu18fn","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ohu4atk","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ohtfqom","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]
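The "Look up by comment ID" flow above amounts to parsing the raw response as a JSON array and indexing it by `id`. A minimal sketch, assuming the response is exactly the array shown (the `lookup` helper is hypothetical, not the tool's actual code):

```python
import json

# The raw LLM response shown above: one JSON object of coded
# dimensions per comment, keyed by comment ID.
raw = """[
{"id":"rdc_ohtdieg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_ohuwpik","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_ohu18fn","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_ohu4atk","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ohtfqom","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"}
]"""

def lookup(codes_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (KeyError if absent)."""
    by_id = {row["id"]: row for row in json.loads(codes_json)}
    return by_id[comment_id]

code = lookup(raw, "rdc_ohtfqom")
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# prints: company contractualist liability fear
```

Note the last entry (`rdc_ohtfqom`) matches the "Coding Result" table above: company / contractualist / liability / fear.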