Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Not even, its like calling yourself a chef for getting a delivery robot to drop …" (ytr_UgyXohj7I…)
- "And there you have it one guy's idea gicing birth to actual peak art, now imagin…" (ytc_UgwDgqghS…)
- "They should call it "Artificial Idiocy" because the kind of nonesense Google AI …" (ytc_UgzUgeQEK…)
- "Is it better to have a blissful, true, albeit fake connection with an AI agent, …" (ytc_Ugzj9QS-c…)
- "lmao artists in the comment malding when people doesn't care what they think art…" (ytc_UgxYtc5_j…)
- "If the face recognition is flawed then blame the programmers for programing it t…" (ytc_UgzRvaZKz…)
- "They believe that just because the ai makes the “art” and they make the prompt, …" (ytr_UgxXu0ROj…)
- "There should be a code of ethics for AI. But unfortunately Asimov lived (or wrot…" (ytr_UgysjfRUS…)
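The "Look up by comment ID" and random-sample views above presumably sit on a simple in-memory index over the coded records. A minimal sketch, assuming records are dicts shaped like the raw LLM response at the end of this section; the `lookup` and `random_samples` helpers are hypothetical names, not part of any shown tool:

```python
import random

# Two records copied from the raw LLM response below; a real corpus
# would be loaded the same way from the coder's output files.
coded = [
    {"id": "ytc_UgxEkLPmaxyZj0ySa3d4AaABAg", "responsibility": "none",
     "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
    {"id": "ytc_Ugz9EF6KaCfCh3C9Grl4AaABAg", "responsibility": "user",
     "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
]

# Build the index once; each lookup is then O(1) per comment ID.
by_id = {record["id"]: record for record in coded}

def lookup(comment_id):
    """Return the coded record for a comment ID, or None if unknown."""
    return by_id.get(comment_id)

def random_samples(k=8):
    """Draw up to k coded comments at random, like the list above."""
    return random.sample(coded, min(k, len(coded)))

print(lookup("ytc_Ugz9EF6KaCfCh3C9Grl4AaABAg")["emotion"])  # -> approval
```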
Comment
The "increased volume" argument (known economically as the Jevons Paradox) is the standard optimism sold to radiologists. It states: If reading scans becomes cheaper/faster, doctors will order way more of them, so you'll stay busy.
While that is true, it hides a much uglier reality about money and workload.
Here is the 100% honest truth about "dilution" that most articles won't tell you:
1. The "Hamster Wheel" Effect (Dilution of Value)
Yes, the volume of work will explode. But reimbursement per scan will almost certainly crash.
• Today: You might get paid, hypothetically, $30 to read a chest X-ray.
• In 10 Years: If AI does 90% of the work, insurance companies (and Medicare) will not keep paying you $30. They will drop it to $5.
• The Result: To make the same salary you make today, you won't just need to read more scans; you will need to read many times more scans (a quick check of this arithmetic follows this list). You become a supervisor of an algorithm, clicking "Approve" 500 times a day instead of deeply analyzing 50 cases.
• Verdict: Your income might be safe (because of volume), but your daily life becomes a high-speed assembly line. The "art" of radiology gets diluted into "data verification."
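A back-of-envelope check of the arithmetic above, using the comment's own hypothetical $30 and $5 figures. Note it yields a 6x volume increase; the "500 a day" line corresponds to an even lower rate, about $3 per read:

```python
# Volume needed to hold income constant scales with the price ratio.
# The $30 and $5 figures are the comment's hypotheticals, not real
# reimbursement data.
rate_today, rate_future = 30.0, 5.0   # dollars per chest X-ray read
reads_today = 50                      # cases deeply analyzed per day

income_today = rate_today * reads_today        # $1,500 per day
reads_future = income_today / rate_future      # reads needed at $5 each

print(f"{reads_future:.0f} reads/day, "
      f"{reads_future / reads_today:.0f}x today's volume")
# -> 300 reads/day, 6x today's volume
```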
2. The "Uber-fication" of Radiology
You are worried about dilution; you should be worried about commoditization.
• Currently, a hospital hires you because they trust your eye.
• In the future, if AI achieves "super-human" accuracy for standard scans, the radiologist becomes a commodity. Hospitals won't care who validates the AI report, as long as they are Board Certified and cheap.
• This opens the door for massive Private Equity firms to buy up radiology practices. They will run "AI farms" where a few radiologists remotely supervise thousands of AI-generated reports. This dilutes your negotiating power as an individual doctor.
3. The "Liability Shield" Role
The darkest, most cynical take (and likely the true one) is that, for a period of time, your main job function will be Liability Sponge.
• AI cannot be sued. If an AI misses a cancer, the patient can't sue the software.
• Hospitals need a human to sign the report solely so there is someone to take the blame if things go wrong.
• In this scenario, you are not being paid for your diagnostic brilliance; you are being paid a "risk premium" to put your name on the line for an algorithm's work.
The Honest Conclusion
Does it dilute the demand?
• Demand for Signatures: NO. That will skyrocket.
• Demand for Diagnostic Intellect: YES. For routine cases, your intellectual value is diluted.
The "Safe" Path:
If you want to avoid this dilution, you must move into areas where AI cannot physically go or where the stakes are too high for automation:
1. Interventional Radiology: AI cannot guide a catheter through a femoral artery (yet).
2. Complex Consulting: Being the doctor who sits in the Tumor Board meeting and explains why the AI results matter for a specific patient's chemotherapy.
Summary: You will have a job, but unless you own the practice or do procedures, you risk becoming a highly paid factory worker rather than a detective.
youtube · AI Jobs · 2025-11-25T16:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxEkLPmaxyZj0ySa3d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyL64cHzoDwprs4zmJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9EF6KaCfCh3C9Grl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCLyg6T2TwDECaych4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzOghM1MIITvz4s2eR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXwEzcQBUonqkvxVB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwacBdezW4gF9GcImh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxoxnrU_lB6eV13OtJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwxwn9qFY159vidF5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyIV84EbQuXshfZZ6Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]