Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I highly doubt that we will have to build pain and emotions into a robot to get … (ytc_Ugi0N_B54…)
- 3 months later. Headline “Google shuts down Gemini, cites the low possibility of… (rdc_jpor05g)
- I hope more people see art as more than just something to make to earn money. Is… (ytr_UgwXj1cUJ…)
- this makes perfect sense / that leaked internal research doc at google “we have … (rdc_jkfc5gu)
- Google. Charles Rankin Fine Art, this is my one-of-a-kind not using Midjourney o… (ytr_UgzEBL96T…)
- I'm all for AI helping solve big problems like climate change. I've been using P… (ytc_UgwKLyIt1…)
- You are wrong. AI WILL take out the working class, the economic system that was … (ytc_UgzJmur2P…)
- #we_should_stop_AI_now / Even if AI is not going to turn on us , what would we be … (ytc_Ugw6NS5Tj…)
Comment

Suppose the bias would be removed by “better” methods of training algorithms and use of data. Would you then have a problem with face recognition being used for all kinds of purposes? I wonder if the fixing of the biases and improving the reliability and accuracy of facial recognition will only motivate the police and other agents of the surveillance state to use it even more and even more blindly relying on the algorithmic results. For example, if the detection of actual crimes becomes more accurate, or the finding of a suspect in a crowd by facial recognition has a lot less false positives, wouldn’t a police officer be tempted to simply rely on the machine and not follow their own instincts; wouldn’t the officer just go ahead and arrest a kid because the computer said so?

youtube · AI Bias · 2020-01-31T01:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyaDspV6JYiRbxkdfZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWWyGsi0n9aHSxxWR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxtWmawquMkFbP_Gtt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxpsCrF3p4jltD73EN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwr52N5bMB-XrscXMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLUEZaR1ZVRtJwHft4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwkiT_X9CcVd-MhVwl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzQamPyQKkeGRHiAoN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxVKVGwgdWiiotOZkd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxIa2lOOkHPuX4jNTh4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
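The lookup-by-comment-ID view above can be reproduced from a raw response like this one: parse the JSON array, check each record against the coding dimensions, and key the records by ID. A minimal sketch, assuming the dimension values listed here form the full codebook (the actual codebook may include labels not seen in this batch); `index_response` is a hypothetical helper, not part of the tool.

```python
import json

# Assumed codebook: dimension -> allowed labels, inferred from the
# values observed in this batch (hypothetical; may be incomplete).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def index_response(raw: str) -> dict:
    """Parse a raw model response (a JSON array of coded records),
    validate each record against ALLOWED, and index records by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        indexed[rec["id"]] = rec
    return indexed

# Look up one coded comment by its ID, as the dashboard does.
raw = ('[{"id":"ytc_UgxpsCrF3p4jltD73EN4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = index_response(raw)
print(coded["ytc_UgxpsCrF3p4jltD73EN4AaABAg"]["policy"])  # regulate
```

Validating before indexing means a malformed or off-codebook record fails loudly at ingest rather than surfacing later as a silently wrong coding result.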