Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not entirely confident that AI is as great as it's being pushed to be. Perhaps the biggest problem is that, having worked as a developer for years, I'm too familiar with the types of code that AI is trained on -- and if that's what's being used as a base for "correctness", heaven help us all! But I have had a co-worker who demonstrated to me how he uses AI, and I came away unimpressed -- it strikes me as a type of "search" that doesn't tell you the sources, making it difficult to figure out whether and why the AI "solution" might be wrong, and it doesn't provide the context needed to make it right. Again, this might be the fault of developers, because there's a lot of "help" in forums and Stack Overflow that's like that, too! Overall, though, I can't help but sense that you kind of have to be an expert to use it well -- because you need to be able to tell when it gets things wrong, which is made worse by the fact that AI is really good at making text that sounds right! -- and you can't become that expert by relying on AI to produce things. One suggestion I heard, that really resonates with me, is the notion that experts using AI, when approaching problems, need to come to their own conclusions first -- and then use their AI to test their conclusions, and see if it brings up anything they may have missed!
youtube AI Jobs 2025-03-10T19:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugxud4LqcRrweQhTGEJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4WKaEYC4BRSFtAt94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw1eZvFyhad2orxFDh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzRChARNE5Ewn0Lg_Z4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXNQsjbsORlDwLghF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz07B8twbpWWcg9SoN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzUrBmxa2jgOY9jg554AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwKsmRwKOGNll29n8V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxbLlIry3bEXi3UO9F4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyn_0Ezmp66TDp0OXB4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
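Since the model codes comments in batches, extracting one comment's coding means parsing the raw JSON array and looking up its `id`. A minimal sketch (the two records are taken from the response above; the lookup helper is illustrative, not part of any particular pipeline):

```python
import json

# Raw model output for a batch of comments (only the first two
# records from the response above are included here for brevity).
raw = '''[
  {"id": "ytc_Ugxud4LqcRrweQhTGEJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4WKaEYC4BRSFtAt94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]'''

records = json.loads(raw)

# Index the batch by comment id so a single comment's coding
# can be retrieved directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_Ugx4WKaEYC4BRSFtAt94AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer fear
```

The printed values match the coded dimensions shown in the table above, which is how a raw response can be spot-checked against the report.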