Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a specific comment by its comment ID, or inspect one of the random samples below.
- `rdc_dv6195p`: If only we could convince them that the bones of poachers were an aphrodisiac. T…
- `ytc_UgyeIt_jC…`: Im just saying .. the male robot talked about drones and an army .. we finally f…
- `ytc_UgwCxeGX6…`: This is complete nonsense. The models really did something they were trained to …
- `ytc_Ugw7EgfxA…`: Even if you denies access to well structure data, the experiment is already don…
- `ytc_UgxaO3DME…`: Ehhhhh I dunno about this guy. He says "oh yeah AI can totally tell what is an i…
- `ytc_UgzCvNUBA…`: The only consolation is that Ai is being trained by the laziest dumb asses who a…
- `ytc_UgxCKkPQr…`: If all will done by AI, people have no aim or meaning left for do anything or …
- `ytc_Ugic6gl-4…`: No, this won't happen any time soon. AI can't drive safely yet, it has major pro…
Comment
> @ashleybishop7248 Also, humans are more likely to mistake two people that aren't of their own race for each other. It's not because muh wacism. It's because of familiarity bias, a cognitive bias present in all humans.
>
> This tech doesn't have that in quite the same way. The reason these techs don't work as easily on dark skin isn't because of a cognitive bias. I've seen it explained as being because light skins tends to reflect light better, meaning facial structures are easier to identify for these algorithms, particularly certain parts of the face. Darker skin tones reflect less light- blame the laws of physics for that- so certain features are harder for the tech to map.
>
> Apparently Asians also have trouble with facial recognition too, but likely for different reasons, as east asians tend to be light skinned. These algorithms are complex, so it may be that certain facial structures that are common across a certain ethnic group can throw it off, cause it to be more likely to match people based on that.
Source: youtube · AI Harm Incident · 2023-08-14T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugx0D3HTTSjKhTsDC-x4AaABAg.9tP1uUzhGnV9tP3ohcZrjf","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugzn-QbacOYTp17B3854AaABAg.9tOzeP3AwHk9tOzxgKgbf_","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugz8TftZGzKHitQ2Q5J4AaABAg.9tOr4dC8Pfd9tP27ORSg38","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwDeQ1Br3As8JFiIs14AaABAg.9tOjCjH4GoN9tOpIeoCUhz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyE2sl3g5y75ryQvKF4AaABAg.9tOgG4Y6vGY9tOgejoh-5l","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwy0FUKa-xlXlK35uh4AaABAg.9tOWScc8Wbm9tO_gMZNLiT","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx64sUf0J0kPMTWyIN4AaABAg.9tO2GYgaNKY9tOELZbukmu","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugw_E19pffSR063l4OF4AaABAg.9tNlT0kJrdw9tNmTNnyYju","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwkL5sYdxy301uVvKt4AaABAg.9tN_ziVD-ZE9tNa79HPQGh","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugy5P6ad8nnzVCwwCNt4AaABAg.9tN_xeg1oFj9tNbioZr3pM","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"indifference"}
]
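The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four dimensions from the coding-result table. A minimal sketch of how such a response might be parsed and sanity-checked before storage — the allowed value sets below are only those observed in this sample response, not necessarily the full codebook, and the example IDs are hypothetical:

```python
import json
from collections import Counter

# Allowed values per dimension, as observed in the sample response
# above (assumed; the full codebook may contain more values).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, dropping rows that lack an ID
    or carry an out-of-vocabulary value on any dimension."""
    valid = []
    for row in json.loads(raw):
        if "id" not in row:
            continue
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example rows in the same shape as the response above;
# the third row has an out-of-vocabulary "responsibility" value.
raw = '''[
  {"id":"ytr_example1","responsibility":"none","reasoning":"mixed",
   "policy":"unclear","emotion":"indifference"},
  {"id":"ytr_example2","responsibility":"ai_itself","reasoning":"consequentialist",
   "policy":"ban","emotion":"fear"},
  {"id":"ytr_example3","responsibility":"robot","reasoning":"mixed",
   "policy":"unclear","emotion":"indifference"}
]'''

codes = parse_codes(raw)
print(len(codes))  # the out-of-vocabulary row is dropped
print(Counter(r["emotion"] for r in codes))
```

Rejecting rather than repairing malformed rows keeps the coded dataset consistent: a re-prompt for the dropped IDs is cheaper than debugging silent vocabulary drift later.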