Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- its stupid to try and train "AI" on people stuff we dont need more humans we nee… (ytc_UgxqOnNim…)
- I thought an independent AI team created the learning stuff then Google bought t… (ytc_Ugxtcqdb9…)
- Economy is based on flow of money. One persons spending is another’s income. T… (ytc_UgxqcCw_M…)
- The movies like Terminator and AI don't make sense for one distinct reason. A su… (ytr_Ugys17yEv…)
- I am not interested in art but even i think that it is wrong to let ppl use ai t… (ytc_UgzL9xRo8…)
- A follow up interview with AI as the guest responding to this video would be int… (ytc_Ugx0slqsW…)
- Please don’t make them happen, we all are living thing, who gonna die if we make… (ytc_UgwUM_CyR…)
- Bro he is just spitting actual facts....AI is worse than he told.... Government… (ytr_UgzmRZS5a…)
Comment
>The questions I have are these:
>
>- do humans and AI make the same kind of errors? Is the AI missing things that could be obvious to a human expert or vice versa, implying that using both would allow detection rates neither can achieve?
Excellent questions. What we currently see is that the mistakes humans and AI make are completely different and largely uncorrelated. However, that does not necessarily mean the combination is better: there is also a large psychological component. You can see this in some of the "self-driving" Tesla crashes, where the human driver trusts the system too much because it is usually right, even though it can fail spectacularly. I'm not sure about the research on this in the medical field, but doctors would certainly need additional training.
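As a toy illustration of why uncorrelated errors could help (invented numbers, my own, not from any study, and it ignores the psychological interaction just described): if human and model misses were statistically independent, the combined miss rate would simply be their product.

```python
# Hypothetical illustration: combined detection under independent errors.
# The miss rates below are assumptions for the sake of the arithmetic.
human_miss = 0.10   # assumed: a human reader misses 10% of cases
model_miss = 0.08   # assumed: the model misses 8% of cases

# If either the human or the model catches a case, it is detected,
# so (under independence) both must miss for the case to be missed.
combined_miss = human_miss * model_miss
combined_detection = 1 - combined_miss

print(f"combined detection rate: {combined_detection:.3f}")  # 0.992
```

In practice the independence assumption fails exactly when the human defers to the model, which is the over-trust problem above.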
>- How good is the sample data, really? When we train visual AI on something like facial recognition, we don’t have to be concerned that we’re teaching it our biases, because we haven’t got any: we’re nearly 100% accurate at deciding whether there is a human face in front of us. But we can’t know which images, in which *we* could find nothing, could have subtle features that machine learning could indeed find. It seems to me that at best visual AI could be as good as our very best, but if we want it to find what we cannot, it seems we have to find a way to train it to do so.
Great question again. One thing we can do is train on information that wasn't available at the time of the original data, such as follow-up outcomes: for example, whether a tumour was found within five years of the scan. See this from MIT about breast cancer: http://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507
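The follow-up-label idea can be sketched in a few lines. This is a minimal illustration with hypothetical field names and a crude five-year window, not the pipeline from the MIT work:

```python
# Sketch: derive training labels from follow-up data.
# A scan is labelled positive if a diagnosis occurred within the
# follow-up window after it was taken. Field names are invented.
from datetime import date
from typing import Optional

FOLLOW_UP_YEARS = 5

def label_from_follow_up(scan_date: date, diagnosis_date: Optional[date]) -> int:
    """Return 1 if a diagnosis fell within the follow-up window, else 0."""
    if diagnosis_date is None:
        return 0
    # Crude year-to-days conversion; real code would handle censoring
    # (patients who left the study before the window closed).
    return int((diagnosis_date - scan_date).days <= FOLLOW_UP_YEARS * 365)

records = [
    {"scan": date(2012, 3, 1), "diagnosis": date(2015, 6, 20)},  # within window
    {"scan": date(2010, 1, 1), "diagnosis": None},               # no diagnosis
]
labels = [label_from_follow_up(r["scan"], r["diagnosis"]) for r in records]
print(labels)  # [1, 0]
```

The key point is that the label encodes information no radiologist could see in the image at scan time, which is what lets the model potentially exceed human readers.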
Source: doing my PhD on this kind of stuff.
reddit
AI Bias
2019-09-25 (Unix timestamp 1569433514)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_f1emvcy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_f1e7zyw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_f1ecjca","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_f1ecudu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_f1ez3fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})