Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
I tried getting AI to program something for me, a switchlist generator. I wrote out the entire logic of what I wanted the software to do, set limits, explained what to do if there were endless loops, and basically provided a piece of document I could have provided my programmers and that would have been clear enough to build the software since flowchart and logic was clearly laid out. It took well over 35 tries for AI to eventually get to something that was somewhat close but nowhere near what I had asked in the first place. I don't think we're quite there yet for it to take over my job. Not yet, at least. Not well enough, for sure. I definitely do find AI has made my life much easier: I can get a second opinion on data and do some other stuff really quickly that would otherwise take me a long time -- it's definitely WAY better than searching for an email in Outlook and even then sometimes it's faster for me to look for it myself. When I see endless click-baity YouTube videos telling me how I am going to die at the hands of AI, I can't help but think just how that's going to work out. Perhaps AI will have a great start killing us for the first hour, then its robot army, or whatever, will get stuck pushing a wall or forget what it was doing in the first place. It's a great piece of software for sure but if AI can't handle Outlook either, then it's either more human than we think or absolutely inept.
youtube AI Jobs 2026-02-26T17:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugx09uuVgn2i12SbWI14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw0i1-J_M992mamd2x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxAE4a1r2vSCMTW8YB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwxsKasPmsaodW8Cf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzGOVqQ5fiaHosfn494AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw_bs2RBnYwu4RvtOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxhhqMR-qsCp5u8DNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxJ3ikTo36tdVl15Sx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzQGvrMNYckuSTq_Id4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzmVAkRR_b9w9oStyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]