Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxBaTVFW…`: "Dude there was a particular art style some prompter uses on a chat site and I as…"
- `ytc_UgxZDIbfZ…`: "china and putin dont care what you say china its best in 5 g and 6 g and going b…"
- `ytc_UgzCgmsek…`: "Bro much rather follows A.I. Then the creator, without the creator humans would …"
- `ytr_Ugwc4viyA…`: "i agree i think. i miss when AI art was self-aware that it was made as a joke be…"
- `ytc_UgxB138gK…`: "The thumbnail background for this video was borrowed from a channel called “ AI…"
- `ytc_Ugw7CA-1c…`: "Enjoyed the video, although I do want to mention that whether or not AI will eve…"
- `ytc_UgwYhkaYi…`: "Was enjoying this until the ideology started, distribution of goods in a more eq…"
- `ytc_UgzunTfSs…`: "I love having access to AI art. About 80 percent of the time I can have an ima…"
Comment
>both responses make sense given the stakes.
Honestly I do not think both responses make sense. While I understand the appeal of the automation, I think it needs to be both practically and visibly sequestered. AI can be *extremely* useful as a tool in programming, but its error rate is significant, and likely always will be. Even with really minor tasks it sometimes just loses the plot, and with complex ones it can end up going in insane directions when given too much leeway, breaking everything in its way.
So making sure people at least sign off on anything it does gives you a place to put the blame. If someone breaks something, it was not an AI doing its normal AI thing that caused the problem; it is the person who is letting the LLM *do their job for them.* Without some kind of accountability guard rail it is just going to keep inserting itself into everything, building more and more technical debt until nothing is maintainable.
Ideally these guardrails would be foundational to how these systems are deployed, but it has already been demonstrated that AI companies have zero ethical boundaries, and the companies that push them just want the hype. So until we have governments that feel like protecting their citizens, smaller organizations need to implement them at the policy level.
Source: reddit · Post: "Viral AI Reaction" · Timestamp: 1776613640 (Unix) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
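The table above records one categorical value per coding dimension. A minimal validation sketch for such a record is shown below; the allowed vocabularies are assumptions inferred only from the values visible on this page (the full codebook may define more), and the `validate` helper is hypothetical:

```python
# Hypothetical validator for one coding record.
# ALLOWED lists only the values visible on this page -- an assumption,
# not the authoritative codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

record = {
    "id": "rdc_oh3ee80",
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}
print(validate(record))  # []
```

Rejecting out-of-vocabulary values at ingest time keeps one malformed model response from silently polluting downstream counts.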
Raw LLM Response
```json
[
{"id":"rdc_oh3ee80","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_oh3j1nf","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"rdc_ohf5qu9","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_oi2py6c","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"rdc_oh3e3td","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
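The model returns one JSON array per batch, with one object per coded comment. A minimal sketch of the ID lookup this page performs, assuming only the response shape shown above (the `raw_response` literal here is a two-row excerpt, not the full batch):

```python
import json

# Two rows excerpted from the raw batch response shown above.
raw_response = '''
[
{"id":"rdc_oh3ee80","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_oh3j1nf","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
'''

# Index the array by comment ID so any single coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["rdc_oh3ee80"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

Keying on `id` also makes duplicate or missing IDs easy to detect: compare `len(codings)` against the length of the parsed array.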