Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>The questions I have are these:
>
>- do humans and AI make the same kind of errors? Is the AI missing things that could be obvious to a human expert or vice versa, implying that using both would've allowed detection rates neither can achieve?

Excellent questions. What we currently see is that the mistakes they make are completely different and unrelated. However, that does not necessarily mean the combination is better: there is also a large psychological component. You can see this in some of the "self-driving" Tesla crashes, where the human driver trusts the system too much because it is usually right, yet it can fail spectacularly. I'm not sure about the research on this in the medical field, but doctors would certainly need additional training.

>- How good is the sample data, really? When we train visual AI on something like facial recognition, we don't have to be concerned that we're teaching it our biases, because we haven't got any; we're nearly 100% at being able to decide whether there is a human face in front of us. But we can't know which images, in which *we* could find nothing, could have subtle features that machine learning could indeed find. It seems to me that at best visual AI could be as good as our very best, but if we want it to find what we cannot, it seems to me we have to find a way to train it to do so.

Great question again. Something we can do is use information that wasn't available at the time of the original data, for example follow-up data: you can train the AI on the information that, say, a tumour was found within five years. See this from MIT about breast cancer: http://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507

Source: doing my PhD on this kind of stuff.
reddit AI Bias 1569433514.0 ♥ 3
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_f1emvcy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_f1e7zyw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"rdc_f1ecjca","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"rdc_f1ecudu","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"rdc_f1ez3fw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
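Once the array is well-formed (closed with `]`), the raw response parses as ordinary JSON: a list of objects, one per coded comment, keyed by `id` with one field per coding dimension. A minimal sketch, assuming nothing beyond the structure shown above (only the first two entries are reproduced here for brevity):

```python
import json

# Raw model output: a JSON array of per-comment codes,
# with ids and values taken from the response above.
raw = '''[
 {"id":"rdc_f1emvcy","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"rdc_f1e7zyw","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Index the entries by comment id for easy lookup of any dimension.
codes = {entry["id"]: entry for entry in json.loads(raw)}

print(codes["rdc_f1emvcy"]["emotion"])       # approval
print(codes["rdc_f1e7zyw"]["reasoning"])     # consequentialist
```

If the model emits a truncated or unbalanced array (as in the raw response above), `json.loads` raises `json.JSONDecodeError`, which is a cheap way to catch malformed outputs before coding results are recorded.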