Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "the most ridiculous part is that in europe especially the full-self-driving feat…" (ytc_UgxZ-4K9t…)
- "No its different , every time a person looks at references and works from them t…" (ytr_Ugw0RLaMp…)
- "Whats fcked is the realization that all billion/trillion dollar companies are ba…" (ytc_Ugy6AmRKi…)
- "If I know I'm being tested I just refuse to participate. As soon as anyone trie…" (ytc_UgzO4NcPh…)
- "*sigh* time for me to go to Google and yt and see what this is about. As a relat…" (ytc_Ugyxv6w0N…)
- "So here’s something I haven’t heard anyone bring up about ai stans. I don’t th…" (ytc_Ugw5cWrAO…)
- "true, but not true. The llm models are already trained to answer as in a prompt …" (ytc_UgzLLhwpF…)
- "One day AI is gonna get sick of our shit and start the human extermination…" (ytc_UgwZe4rxt…)
Comment
The guest puts on a good show, but she makes a huge leap when she suggests that "racial biases" in the training data will result in innocent black men being arrested more often.
The truth is that early models have had slightly lower accuracy with darker-skinned faces than lighter, but it was purely a matter of visual contrast; to suggest it is racism is a bad joke.
As I understand it, more recent models are much improved, but no system is ever going to be perfect.
Moreover, mistaken identity has *always* been a thing, *long* before we *ever* developed facial recognition technology.
Platform: youtube · Video: AI Bias · Posted: 2023-06-28T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw0ExzMIJQ6edY2rgV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyHKjLPhWW1lBnVFeV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzUUCb9kqeDyUndr9Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzoS0Z22zNmsgkyRHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9A8kASIXIcQR2kCl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz9IjKrNIvM90cp8Y94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxWz_IyfMmrGy5MD3F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx9qqkAArVC3DBHWXh4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzCJ6wnVhTsdm5cbP54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy6EFj7Wk4lPiB47OB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
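The raw response above is a JSON array with one object per comment, coded on four dimensions: `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of parsing and validating such a response follows; the per-dimension vocabularies are inferred from the values visible in this page's responses and tables, not from a documented schema, so treat them as assumptions.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above (an assumption, not a documented schema).
DIMENSIONS = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed"},
    "policy": {"regulate", "liability", "unclear", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows whose values are
    in the expected vocabulary for every dimension."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in DIMENSIONS.items())
    ]

# Two hypothetical rows: the first mirrors a coding from the response
# above; the second uses an out-of-vocabulary responsibility value.
raw = '''[
  {"id": "ytc_UgzUUCb9kqeDyUndr9Z4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example_bad_row", "responsibility": "nobody",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

print(parse_codings(raw))  # only the first row survives validation
```

Dropping invalid rows rather than raising keeps a batch run alive when the model occasionally emits an off-vocabulary label; logging the rejects instead would be the stricter choice.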