Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Everyone knows self driving disengages when you hit controls. This is going to …" (ytc_UgwzrqJJ-…)
- "If AI is created by humans I assume it's going to have human nature tendencies m…" (ytc_UgwPYhnO4…)
- "You talk about AI taking the menial jobs and people retaining the creative jobs.…" (ytc_Ugz9dkjef…)
- "the one real issue i have with this episode is that it's focusing on ai abilitie…" (ytc_Ugz44jWKc…)
- "wait if a company uses ai and people are unemployed then the company would fail …" (ytc_Ugw15v738…)
- "So if everyone is replaced with AI and unemployed with no😮 money..what good is i…" (ytc_UgztpLPrx…)
- "You are phenomenal 🍕 going through the dystopian job hunt currently - I was init…" (ytc_Ugyf8TvfP…)
- "Bucket of water or an object that travels 2800 feet per second will take care of…" (ytc_Ugxme13F1…)
Comment
Which is SHOCKING to me... given I'm a computer scientist and work with AI/ML and the infrastructure to support it. My closest friend, literally, deploys AI solutions for Fortune 500 companies and helps build metrics to prove the ROI on those solutions. I'm very aware of the state of the art of AI right now.
... and, fact is, it's simply NOT good enough to 'replace' highly skilled people entirely. It still needs a highly knowledgeable subject matter expert to direct it, verify results, and deploy solutions. ChatGPT, for example, seriously can't even take a table in a PDF and convert it to a CSV/XLS file.
It DOES now make people somewhat faster.
A year ago it was impressive that it could spit out reasonable code given a highly crafted prompt... Then that code would require validation and fixing (it almost invariably had massive, insidious issues), to the point that even though I was creating solutions using an LLM as a tool... metrics proved I actually wasn't doing my job better or faster, due to all the time spent tracking down issues and the lack of maintainability in the code (and also the lack of optimization and workflow changes a programmer might discover during the process of writing the code).
Now -- well, a well-crafted prompt can result in some "it will run just fine" code. It's generally poorly architected, inefficient, and a nightmare to maintain... but 75% of the time, it gives you code that does what your prompt requests. However, it takes a highly skilled developer to find the 25% of the time it doesn't work... and to validate the other 75% of the work. I'd say, at this point, Claude makes me about 50-100% faster at my job... but the 'balance of my job' (maintenance, validation, deployment, optimization, workflow analysis, etc.) takes 25-50% longer... so it's about a 50% net positive.
The crux of the matter, as it relates to the OP, is: you still need a very good, highly skilled developer to do the balance of system work (and direct the prompting…)
reddit · AI Jobs · 1750014764 (Unix timestamp) · ♥ 783
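The numeric field in the metadata above is a Unix epoch timestamp (seconds since 1970-01-01 UTC). A minimal sketch for rendering it as a readable UTC date, using Python's standard library:

```python
from datetime import datetime, timezone

# Unix epoch seconds from the post metadata above
posted = datetime.fromtimestamp(1750014764, tz=timezone.utc)
print(posted.isoformat())  # 2025-06-15T19:12:44+00:00
```

Passing `tz=timezone.utc` avoids the local-timezone conversion that `fromtimestamp` performs by default.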
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_ohfj8j4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_mxyfgfo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_mxza8uw","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mxyykg7","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_mxzin6m","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
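The raw response above is a JSON array of per-comment codings with five fields each. A minimal sketch of how such a batch could be parsed and indexed by comment id; the required-field set is taken from the visible response, while the function name and validation behavior are illustrative assumptions, not part of the tool shown:

```python
import json

# Verbatim batch response from the section above
RAW = '''[
{"id":"rdc_ohfj8j4","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_mxyfgfo","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"rdc_mxza8uw","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_mxyykg7","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_mxzin6m","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''

REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index rows by comment id,
    rejecting any row that is missing a required field."""
    out = {}
    for row in json.loads(raw):
        missing = REQUIRED - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing {missing}")
        out[row["id"]] = {k: row[k] for k in REQUIRED - {"id"}}
    return out

codings = index_codings(RAW)
print(codings["rdc_mxza8uw"]["emotion"])  # indifference
```

Indexing by id is what makes the "look up by comment ID" view above possible: each dashboard lookup is a single dictionary access rather than a scan of the raw response.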