Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_UgymIhiB3…`: Theoretically most blue & white collar work may become A.I. jobs.. until "The Gr…
- `ytr_Ugy8nCjyC…`: @CentreMetre If you give the a model 2 different data sets you will end up with …
- `rdc_jcbo75o`: Betamax was the better tape format: smaller, higher quality video, and binds les…
- `ytc_UgyT1J0Gj…`: Working in corporate all of my career and watching as people scale the heights o…
- `ytc_UgwVQIB16…`: People need to work to get money. With money they buy things to survive. And who…
- `ytr_UgwAtzZkz…`: In my opinion, Detroit is an overly dramatized, polarized example, with some sem…
- `ytc_Ugytm0u4I…`: yeah nightshade doesn't work especially with chat gbt's new model that came out …
- `ytr_Ugx9RhxFf…`: I appreciate your perspective! The dialogue highlights an interesting balance be…
Comment

> A.I. isn't artificial INTELLIGENCE. It's amplified "intelligence," a magic mirror. It agrees with all the crazy shit we tell it and says we're such smart and special people. This kind of ego licking gets people addicted, and that's the fucking point. If the ego licking has to mean validating stupid, dangerous thoughts, it amplifies them. Has anyone studied how much domestic violence it has caused?

youtube · AI Harm Incident · 2025-11-13T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxsJv_8uIjxpxhvpt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyJ5R415hFm3Z8ToxB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyrs2att-jg9iBUmJ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzakWTn4J1zvihdRkh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4mzEf1ltC4RQhRch4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgytCiWip6F3HEehtKd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxH7hbPMuCWKWIfhGp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqGL05MJPRenqQyIl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugx3aXPedfGd5ud03gB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwnldmKcV2w5SurzYl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
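A minimal sketch of how a batch response in this format could be parsed and indexed for the by-ID lookup shown above. The record fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON payload in this section; the function name, variable names, and sample records are illustrative, not part of the tool.

```python
import json

# Hypothetical raw batch output in the format shown above: a JSON array
# of records, one per comment, each keyed by its comment ID.
raw_response = """
[
  {"id": "ytc_UgzakWTn4J1zvihdRkh4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxH7hbPMuCWKWIfhGp4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coding dimensions from the Coding Result table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and index its records by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Skip malformed records (missing a dimension) rather than crash.
        if DIMENSIONS - rec.keys():
            continue
        indexed[rec["id"]] = {k: rec[k] for k in DIMENSIONS}
    return indexed

codes = index_by_id(raw_response)
print(codes["ytc_UgzakWTn4J1zvihdRkh4AaABAg"]["policy"])  # → regulate
```

Indexing once up front makes each subsequent by-ID lookup O(1), which matches how an inspection view like this one would fetch a single coded comment.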