Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by picking one of the random samples below.
- "I use AI because I can't draw to save my life. BUT I don't call myself an artist…" (`ytc_Ugw6xXnXr…`)
- "Ai will take over the world and may malfunction someday and it's gona happen for…" (`ytc_UgzTUTBt3…`)
- "I sometimes use ai istead of rendering materials... It's so much faster, but i f…" (`ytc_UgwLD-ZTw…`)
- "Save the world / We are here, the now, the point of no return. Technology has us…" (`ytc_UgwlYkyxC…`)
- "Lmao I'm sure women and children would like to not be raped to death too / Seems…" (`rdc_cdm4kkp`)
- ""Gebru says Google fired her after she questioned an order not to publish resear…" (`rdc_glzd4wt`)
- "If he helped create something like this, shouldn't these AI tech guys go to jail…" (`ytc_UgyilDyU3…`)
- "Instead of paying for a school board. We should pay the students for better grad…" (`ytc_UgxAwroNv…`)
Comment
I'm skeptical when AI company CEOs and developers say things like "Sure, AI development has some downsides, but the benefits will outweigh them in the end, so we should keep pushing forward." It feels like they're not genuinely worried about the negative consequences they're creating. Instead, they're basically dumping the responsibility for dealing with those problems onto politicians and the rest of society.
Are these companies actually putting their money where their mouth is? Are they donating to help people who've lost jobs to automation, or supporting communities that are being disrupted by their technology?
Source: youtube · Posted: 2025-06-07T05:1… · ♥ 29
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGr3gJFmjAn3cM5fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwgMCVUt2G7xe2l8A54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzcLY1zA-7BPxyhqp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLk3VDUn4wE1UhE2h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzdx7L0GyIjh0SrG_14AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMsSzdQr2BPE948ll4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxbiQVTSNlsvisN2SZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGy2yb7MN6WPrnAzN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxio3XMveQozMOs9rR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"skepticism"},
  {"id":"ytc_Ugzg0_odCbW5QVGo56R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
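Because the raw response is a JSON array in which every record carries its comment ID, looking up a comment's codes reduces to parsing the array and indexing it by `id`. A minimal Python sketch of that lookup (the function name, the sample data subset, and the strictness of the validation are illustrative assumptions, not part of the actual pipeline):

```python
import json

# A small subset of the raw model output shown above, verbatim.
raw_response = """[
  {"id": "ytc_UgwgMCVUt2G7xe2l8A54AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzcLY1zA-7BPxyhqp14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]"""

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse the model output and key each record by its comment ID,
    dropping any record that is missing one of the expected dimensions."""
    records = json.loads(raw)
    return {
        r["id"]: r
        for r in records
        if all(dim in r for dim in DIMENSIONS)
    }

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwgMCVUt2G7xe2l8A54AaABAg"]["policy"])  # liability
```

The lookup for `ytc_UgwgMCVUt2G7xe2l8A54AaABAg` reproduces the coding result shown in the table above (company / virtue / liability / outrage).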