Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytr_Ugy23PLGP…` — "If your mother nude deepfake picture were shared to everyone in this world and t…"
- `ytc_UgzrPcJKO…` — "When the industrial revolution was gaining traction, some people predicted a new…"
- `ytc_UgwWAssIV…` — "i cannot wait for ai to takeover the world and enslave humans and r*pe them. I'l…"
- `ytc_Ugz1exrWc…` — "a god damn china virus who support CCP use AI to slave human in china.…"
- `rdc_mel7mmw` — "This is (yet another) massive security breach in the making. LLMs are plagiaris…"
- `ytc_UgypBtQl6…` — "The “AI “art” is more accessible and Cheaper than art!!” Makes me want to bang m…"
- `ytr_UgzDf3_g0…` — "@kingaxioswell digital artists make art themselves, and don’t use ai. Film make…"
- `ytc_UgwOHk3Z0…` — "Fascinating!! I am certainly open to the possibility of AI having the ability to…"
Comment
Facial-recognition is supposed to be a tool to be used as a _first-pass_ to simplify the notification of issues for law-enforcement, it's not meant to be used as the be-all, end-all, otherwise we wouldn't have cops, we'd have computers and robots doing law-enforcement. Once cops get a notice about something, they're supposed to manually check it. As the video said, it must NOT be used as blind evidence, it can only be used to _facilitate_ ACTUAL POLICE WORK AND INVESTIGATION. ¬_¬ That said, automation tools aren't always good; ALPRs is a system whereby various cameras throughout the city (on cop cars, on red-lights, on buildings, etc.) automatically and indiscriminately scan _every_ license plate they see and automatically check for any "problems" to report to the nearest cop to run them down. The problem with this is that due to how the system and criminals work, it will almost always end up screwing over people with minor infractions like unpaid parking tickets rather than actual criminals like traffickers. 😒
youtube · AI Harm Incident · 2021-04-29T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyoQg5TcionW1_G8uh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyQebD9NzVU-T7zHft4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyrcurS1z6eNhKl9zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz_8uUXE0ns5uDNAnR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZ2gIpq5WgwXm8Ijp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzgJMcZtI0PN5TLS2t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxA6HokWzLEneS59LZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugw9DmWwWGW9viyucTx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxpgtObXUEG1BdW1Z94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyGlYgqv5zuPdClZGZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
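A raw response like the one above is a JSON array of per-comment records, one object per coded comment, keyed by `id`. The sketch below shows one way to parse such a response, validate each record against the coding dimensions, and look up the record for a single comment ID. It is a minimal illustration, not the tool's actual code; the allowed category values are inferred only from the responses shown on this page (the full coding scheme may include more).

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above. This is an assumption: the real scheme may define more categories.
CODING_SCHEME = {
    "responsibility": {"user", "company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference"},
}

# A two-record excerpt of a raw LLM response, in the same shape as above.
RAW_RESPONSE = """[
  {"id":"ytc_UgxA6HokWzLEneS59LZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxpgtObXUEG1BdW1Z94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]"""

def lookup(raw: str, comment_id: str):
    """Parse a raw response and return the record for one comment ID, or None."""
    records = json.loads(raw)
    for rec in records:
        # Reject any value that falls outside the expected coding scheme.
        for dim, allowed in CODING_SCHEME.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        if rec["id"] == comment_id:
            return rec
    return None

rec = lookup(RAW_RESPONSE, "ytc_UgxA6HokWzLEneS59LZ4AaABAg")
print(rec["responsibility"], rec["policy"])  # user industry_self
```

Validating before matching means a malformed record anywhere in the batch surfaces as an error rather than silently passing through, which is useful when the array comes straight from model output.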