Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
Disagree with that framing, because it suggests that the lawyers in this case are a hindrance. There's a reason why legal liabilities *should* exist. As Gen/agentic AI starts doing more (as is clearly the intent), making more decisions, executing more actions, it will start to have consequences, positive and negative, on the real world. Somebody needs to be accountable for those consequences, otherwise it sets up a moral hazard where the company running/delivering the AI model is immune to any harm caused by mistakes the AI makes. To ensure that companies have the incentive to reduce such harm, legal remedies must exist. And there come the lawyers.
reddit · AI Responsibility · 2025-08-19 06:05 UTC (Unix 1755583552) · ♥ 236
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n9i5c43", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability",     "emotion": "indifference"},
  {"id": "rdc_n9ie952", "responsibility": "distributed", "reasoning": "deontological",    "policy": "unclear",       "emotion": "fear"},
  {"id": "rdc_n9hnanf", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "rdc_n9hftrt", "responsibility": "distributed", "reasoning": "deontological",    "policy": "liability",     "emotion": "approval"},
  {"id": "rdc_n9hzids", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
```
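The raw response is a JSON array with one code object per comment, so looking up the dimensions for a single comment ID is a parse-and-index operation. A minimal sketch, assuming the array above is held in a string (the `lookup_code` helper name is illustrative, not part of the tool):

```python
import json

# Raw model output as shown above: one JSON object per coded comment.
RAW_RESPONSE = """
[
  {"id":"rdc_n9i5c43","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"rdc_n9ie952","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"rdc_n9hnanf","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_n9hftrt","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"rdc_n9hzids","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
"""

def lookup_code(raw: str, comment_id: str) -> dict:
    """Parse a raw batch response and return the code row for one comment ID."""
    rows = json.loads(raw)
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

# The comment shown above (rdc_n9hftrt) maps to the values in the Coding Result table.
code = lookup_code(RAW_RESPONSE, "rdc_n9hftrt")
print(code["responsibility"], code["policy"])  # distributed liability
```

Indexing by `id` rather than scanning the list makes repeated lookups cheap when a batch response codes many comments at once.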