Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Sorry to be doom and gloom but UBI is not going to happen. People say if there i…
ytc_UgzRlhZBs…
this is just dumb, ChatGPT isn't hiding that it's roleplaying as a human - it's …
ytc_UgzfqnA__…
I think it's pretty obvious what the future holds: Very few humans on earth who …
ytc_UgxAMOn0r…
@feelingsfeelings.2848 ai is a set of tools. They used the tools to make somethi…
ytr_UgwcpE_OO…
The funny thing is so many of this stuff could have been made in AI. Quality is …
ytc_Ugz7Q3yzv…
I was supposed to go home for Christmas and see my family for the first time in …
rdc_hm9abvc
I will say that when it comes to generating AI just on your own it still comes w…
ytc_UgybDI-Fh…
@trappedcat3615 someone who has heard him talk about Nomi and AI many times. He i…
ytr_UgzpM4qvr…
Comment
“The DEFIANCE Act would impact individuals, like those Grok users creating deepfaked nonconsensual intimate imagery.”
What about the platform providers? They are the ones enabling the content, and the ones with the $$ for lawsuit payouts.
The Take It Down Act made it a federal crime to post nonconsensual sexually explicit deepfakes and was signed into law last May, yet that did not stop the content from circulating on X. I’m not confident this will make a meaningful difference.
Section 230 of the Communications Decency Act, which protects platform providers from liability for user-generated content, should be repealed. Until they are held responsible, it’s doubtful much will change.
reddit
AI Harm Incident
2026-01-13T22:45:47Z (Unix timestamp 1768344347)
♥ -4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
  {"id": "rdc_oi43aln", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_oi4alg9", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_ohzk3nq", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_nzflqa4", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_nzfro26", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
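Responses like the one above can be validated before they enter the dataset. Below is a minimal sketch of that check; the allowed value sets are inferred only from the values visible on this page, not from the actual codebook, so `ALLOWED` and `parse_coded_batch` are illustrative names, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the coded samples above.
# The real codebook may define more categories; this is an assumption.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose value for
    every dimension falls inside the allowed set for that dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# One row from the raw response shown above.
raw = ('[{"id":"rdc_ohzk3nq","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coded_batch(raw)[0]["id"])  # rdc_ohzk3nq
```

Rows with an out-of-vocabulary or missing value are dropped rather than repaired, so a malformed batch surfaces as a shorter result list instead of silently entering the coded data.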