Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is exactly my experience. I have GPT, Gemini, and Claude sessions (which I pay for) open all day long, whether I'm at work or working on my private projects. They're absolutely great to run design decisions past: I wouldn't trust them to make design decisions, but they sometimes think to ask about or question my design decisions, which helps me refine those decisions further.

They're also great at:

1. Being a second pair of eyes for a superficial PR / MR review, to make sure I didn't accidentally introduce a bug that wouldn't be noticed for quite some time and might be missed by my teammates, who are often busy and don't have time to scrutinize each line of a substantial PR / MR in great detail.

2. Performing repetitive tasks that are just a waste of my time. Sometimes I can't remember the syntax to recursively process a number of directories, find files that contain certain data patterns, and extract and process them with regular expressions. I could write that code myself, but it could take 30 minutes to put all the pieces together. That's the kind of task AI works brilliantly at: boring cruft which needs to be done but is a one-off procedure with disposable code that I'm never going to need again.

Oh, also, AI (GPT in particular) has learned how I think very well, which is great because I'm quite neurodivergent with my ADHD. I study and implement advanced math algorithms for fun, and math pedagogy often leaves a lot to be desired (e.g. it starts with axioms and definitions, builds up propositions, lemmas, theorems, and corollaries, and then presents a result at the end without ever having motivated the problem we're looking at). There are times when I'm feeling particularly lost, wondering why we're covering what we're covering until I'm near the end of the chapter, and then the disparate pieces don't come together well unless I read the chapter again. It's annoying and poorly thought out, I think. AI understands that I need the goals we're aiming at, with some motivating examples, BEFORE we start building the frameworks we need for our final results.

I'm not that worried about AI taking over my job. It certainly helps with my productivity, and hell, it can be a lot of fun to riff with about other subjects if you need a few minutes of brain break. It knows my interests and can draw surprising connections between them that I might otherwise not have seen.

I think AI can replace some programmers, but it's important to realize that there are many types of programmers. For example, the jobs that were outsourced in the early 2000s to places like India, where a lot of the education is about building pure skills rather than deep theoretical understanding, will likely be replaceable. I liken those jobs to being glorified factory work: taking languages and frameworks that others built and assembling them like LEGO blocks to produce a final product. There's not a lot of skill required, and most software devs can do this with relative ease. If you want large-scale quality products, though, you need a software dev who understands the ins and outs of problems, how to properly harvest and refine requirements (as mentioned here - user stories are particularly great for this, since the people who come up with the requirements seldom know what requirements they actually want), and how to evaluate solutions and come up with innovative ideas and the LEGO blocks themselves. You need a human in the mix, I think.
youtube · AI Jobs · 2025-12-31T14:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzUOfLfaO_UDg566LB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugxll5s4zQW2Jy8Mkch4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyNa01bHMu2gdYibQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgycdefbUa1TNzrFtb14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxqCgPN9BIsJLP0Rad4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx84ka0D8tIT-HZoK14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzGwci1uZuihPUXv8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwehmURupSuapzgK7h4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugwq2df9DdZ0JjEhV0F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw7af42qs8zT075kER4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"amusement"} ]