Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Humans will adapt. They’ll seek stuff that isn’t AI - real musical performances,…" (ytc_UgzJD5wPM…)
- "Great story! I would've liked more statistics, especially number of deaths and c…" (ytc_Ugym8uQep…)
- "Back in the 1980s, I saw the automatic tobacco cropper, and the bulk barns put 9…" (ytc_Ugy_SUx1s…)
- "While I'm not personally a disabled artist, my mum is, and she is amazing at bot…" (ytc_UgyDnESLJ…)
- "AI technology is making waves by enhancing automation and decision-making proces…" (ytc_UgxbeNTHX…)
- "People's raw emotion, unique style and love for their work could never be recrea…" (ytc_UgzX3YUgE…)
- "Before: Transfer me to an American Representative / Now: AI, transfer me to a rea…" (ytc_Ugxdl6XgC…)
- "This is actually a VERY GOOD middle ground video on this subject. Beyond the tw…" (ytc_UgyWwj5_p…)
Comment
Hmmmmm, as the wife of someone getting his doctorate in data science (statistics +) and I hear the explanations of his doctoral studies, research, projects and dissertatio, my understanding is that there is a "maybe" category, making it actually more complicated. That 4% is neither no, nor yes. Saying that the machine only correctly identifying negative results 95% of the time means that the machine must be automatically saying "if not 'no' then yes", really creates a huge problem and makes that study invalid. Also, the reason for the "maybe" changes the chance of it swinging one way or the other, so this is far more nuanced than this videos makes it seem. A test that doesn't account for a third option (ie undeterminable) is wildly invalid and over-simplified to the point that there should be a strong push to think about if it shouldn't be considered at all.
But I think the video is right that most people and doctors great it wrong. Because they are doctors, not math geeks.
Source: youtube
Posted: 2026-04-11T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz1YBFXMDyrmvvejUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwPS1hPvyMyM0HYdBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNgpsXVmL9Vbpk9uV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyP5fFMcQfwd1vCBbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMyvm54nMlCWTg0ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrM96f9GKnUM-S8VZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwE2bYS54-Z-nz4iGF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1X-LcwPD9zeHbNkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwkw1dO0J-tSGZVJ3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNn-7yMfBAW5nulcp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"amusement"}
]
```
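Each raw response is a JSON array of per-comment codes across the four dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated before loading it into the coding database (the field names come from the payload above; the allowed value sets are assumptions inferred from the examples, and the real codebook may permit more values):

```python
import json

# Allowed values per dimension. These sets are assumptions drawn from
# the sample payload above, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "approval",
                "outrage", "amusement"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: codes},
    rejecting rows with missing fields or out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a label outside the codebook, so bad rows surface at ingest time rather than during analysis.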