Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Technology is getting scarier and scarier. Soon we'll have highly intelligent ai…" (ytc_UgzpJcR7G…)
- "@NoahtheGameplayerthat's one of the reasons I feel like ai art (and ai generate…" (ytr_UgwHzTknN…)
- "One great thing about AI is that when you ask for a "live agent", it doesn't get…" (ytc_UgzU9hHCK…)
- "I can see him having an actual case, if the TRAINING MODEL/DATA was what he was …" (ytc_Ugwvt0HlG…)
- "@tutacat Oh they are! The whole purpose of algorithms (public algorithms at leas…" (ytr_Ugx3jnNuR…)
- "exra.. driverless car? really.. how wrong you are as to the :benefits" PUBLiC t…" (ytc_Ugydm5Y1N…)
- "another one.. sigh..."AI" does not exist in our reality, and what they are calli…" (ytr_Ugxhx_jbg…)
- "Yes it is just as bad regardless of use! At the end of the day all uses of ai po…" (ytr_Ugy0aJFAH…)
Comment
This is a massive point that usually gets drowned out by the "intelligence" arms race. We’ve become so obsessed with O(1) reasoning speeds and context window sizes that we’ve completely decoupled capability from consequence.

The accountability gap is the real "black swan" of 2026. If a model makes a decision that causes systemic harm, the developers point to the weights, the users point to the prompt, and the corporation points to the TOS. We’ve essentially engineered a way to automate liability out of existence. It’s not just a technical problem; it’s a fundamental failure in how we define agency.
Source: reddit
Viral AI Reaction
Posted: 1776967302 (Unix epoch)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ohtyd15","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_ohuuqs0","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ohwdux6","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_ohxwt96","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"rdc_ohv3pc2","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]