Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- West invents a new shit that’s mostly based on double standards & lies. It’s a s… (ytc_UgxGike8J…)
- This is the best tl;dr I could make, [original](https://www.japantimes.co.jp/new… (rdc_e2vn5nq)
- This goes to show, do not believe every thing AI tells you. Moreover, if someone… (ytc_UgxIXKRSd…)
- I'm finding them really useful as a productivity and information tool. They help… (ytc_Ugx4cf1xL…)
- This is not fancy auto complete. consider how long auto complete has been around… (ytr_UgwIqGj4Y…)
- This is the right way. Ask AI questions to improve yourself. Don’t ask AI to ski… (ytc_UgyUBiNw8…)
- also, we are training the ai with every interaction. i prefer to train ai with k… (ytc_Ugw6egpJU…)
- "Who should be making the decisions anyhow? Programmers (software developers), … (ytc_Ugg5W6Ybw…)
Comment
Considering he doctored the prompt, I think this is like blaming the rope, however I don't like this trend where the first advocacy for this type of stuff is just to go hard on the parents for being 'dumb' and blaming GPT.
I think companies SHOULD be required to defend things their generative models do. We're working towards a future where ai may be and is currently deciding whether someone has health coverage or not.
We're consistently seeing stories where the ai is being talked about like this independent entity, absolving the company of any liability or responsibility on their part. I think that's a bad direction to go in.
Source: reddit · Post: AI Harm Incident · Posted: 2025-08-27 UTC (Unix 1756298282.0) · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_nasjx3o","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"rdc_nau03wj","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_nau30to","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"rdc_nau8a2h","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"rdc_naxrqoq","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"fear"}]
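A minimal sketch of turning a raw response like the one above into a per-comment lookup, which is what the comment-ID inspection relies on. This assumes the response is a well-formed JSON array of objects with an `id` field and the four coding dimensions; the helper name and the `unclear` fallback (mirroring the Coding Result table) are illustrative, not part of the tool:

```python
import json

# Coding dimensions as they appear in the raw responses above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Map comment ID -> coded dimensions from a raw LLM coding response.

    Entries without an 'id' are skipped; a missing dimension falls back
    to 'unclear', matching the fallback shown in the Coding Result table.
    """
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue  # skip malformed entries with no comment ID
        coded[cid] = {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Two entries copied from the raw response above, for illustration.
raw = '''[
 {"id":"rdc_nasjx3o","responsibility":"none","reasoning":"virtue",
  "policy":"none","emotion":"approval"},
 {"id":"rdc_naxrqoq","responsibility":"company","reasoning":"contractualist",
  "policy":"regulate","emotion":"fear"}
]'''

codes = parse_coding_response(raw)
print(codes["rdc_naxrqoq"]["policy"])  # regulate
```

Indexing by ID up front keeps the "look up by comment ID" step O(1) per query instead of rescanning the array.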