Raw LLM Responses
Inspect the exact model output for any coded comment.
Individual responses can be looked up by comment ID. Random samples:
- "When Yang proposed universal basic income in the AI age when jobs will be scarce…" (ytc_Ugw74GYFe…)
- "There’s also the problem of whether art is soulful and has meaning versus just b…" (ytc_UgxwXAG8u…)
- "If it's about effort or human input, then how is photography more artistic than …" (ytc_Ugz_AqlsJ…)
- "I think your comments are very valid, unfortunately I think the future is fraugh…" (ytc_UgywY8dCA…)
- "anti ai shit like this is just as autistic and annoying as pro ai shit…" (ytc_UgzD0Z9EF…)
- "Ai artist calling themself an “artist” is the equivalent of someone that cooks t…" (ytc_Ugxbtaoa_…)
- "The driver noticed the child too late and tried to brake, self driving nearly ki…" (ytc_UgyJAefcK…)
- "learn or commission an artist. Ai is bad for people, and it's bad for the enviro…" (ytr_UgwS3f2HO…)
Comment
Man this is heavy. The big question here is where do AI companies draw the line between privacy and safety, and honestly theres no easy answer.
OpenAI flagged this persons account back in June for "furtherance of violent activities" but didnt report it because they said it wasnt an "imminent and credible" threat. Then months later this tragedy happens. Really makes you think about what those thresholds should actually be.
This is exactly the kind of AI ethics stuff people need to understand: the real complicated questions. Like should AI companies be monitoring everything we do? Probably yes for violent stuff. But then who decides whats a threat vs just dark thoughts or venting? What about false positives that get innocent people flagged? What happens to privacy?
I offer trainings for ethical ai usage and we talk about how AI isnt neutral- theres always humans making decisions about what the rules are, what gets flagged, what gets reported. This case shows how high the stakes can be when companies get those calls wrong.
I dont think theres a perfect answer here but this is why AI literacy matters for everyone. These systems have massive power and we're all just figuring out the ethics as we go.
Source: reddit | Topic: AI Governance | Posted: 1771650420.0 (Unix timestamp) | ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o6jbeg8","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"rdc_o6k732a","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"rdc_o6wn83f","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_o6ltv2a","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_o6jyjk2","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
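The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the result table. As a minimal sketch of how such a response could be parsed and checked against the codebook (the function name, the `ALLOWED` sets, and the category lists are assumptions inferred from the values visible above, not the tool's actual implementation):

```python
import json

# Allowed values per coding dimension. These sets are assembled from the
# examples visible in this dump; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "ban", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions}.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the allowed set, so malformed model output is caught early.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        dims = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            dims[dim] = value
        coded[cid] = dims
    return coded

# Hypothetical single-record response in the same shape as the dump above.
raw = ('[{"id":"rdc_o6jyjk2","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]')
print(parse_coding_response(raw)["rdc_o6jyjk2"]["emotion"])  # prints: outrage
```

Validating each dimension at parse time is what lets a coding result surface as `unclear` in the table rather than silently storing an out-of-vocabulary label.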