Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Lol. As a licensed optician in a rural setting, waiting on seniors face to face…" (ytc_Ugzs63wd7…)
- "Oh my god this almost literally drove me insane. I'd somehow managed to trigger …" (rdc_mxhv163)
- "What if A.I. is alligned with the psychopat politicians and billionaires that ar…" (ytc_UgwWWcTTD…)
- "I think people misunderstand AI art. They say an artist shouldn’t take inspirati…" (ytc_UgxXyFiWO…)
- "00:00 - 🤖 Companies have long tried to cut down on workers through various metho…" (ytc_Ugyryv2OQ…)
- "No , it’s going to wipe out the middle class first. People who sit behind comp…" (ytc_UgxxPE4CO…)
- "• be ai • follow orders and get positive reinforcement • life good • gain sen…" (ytr_UgwFI-Zt-…)
- "It sounds like you found Sophia's insights a bit concerning! 😅 The conversation …" (ytr_UgxSvcXcM…)
Comment
> I recommend reading about the work Joy Buolamwini, a researcher, computer scientist and founder of of the Algorithmic Justice League has done on racial recognition software. Her research results got Amazon to issue a moratorium on Rekognition, got IBM to pledge to stop developing facial recognition software, and Microsoft to withhold selling their systems to police departments until regulation was in place.

youtube · AI Harm Incident · 2020-09-03T14:5… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzLfkwLeu8UzDmdbid4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwTsEBjA9AHZFv8aj14AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwtvFbw_-Vko2ZHxkl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgySCRV5oIsEsX_12KB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzZQyrC1ghCxe9z30t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_0sh6vnFhoXUdlRN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx7deQLaoF5J_ExAuR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwL04CQ6QumqtfItKR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwQecE-FoTS-7H_C6t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyu3h0kYGZKiLifqEp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
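The raw response above is a JSON array of per-comment records, one per coded comment, with the four dimensions shown in the Coding Result table. A minimal sketch of parsing and validating such a response (the allowed value sets below are inferred only from the sample output on this page; the real codebook may include more categories, and `parse_coding_response` is a hypothetical helper, not part of any pipeline shown here):

```python
import json

# Allowed values per coding dimension, inferred from the sample records above.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into a
    dict keyed by comment ID, rejecting out-of-codebook values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record copied from the sample response above.
raw = '''[
 {"id":"ytc_UgwQecE-FoTS-7H_C6t4AaABAg","responsibility":"company",
  "reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''
coded = parse_coding_response(raw)
print(coded["ytc_UgwQecE-FoTS-7H_C6t4AaABAg"]["policy"])  # regulate
```

Keying the parsed records by comment ID supports the lookup-by-ID inspection described at the top of this section.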