Raw LLM Responses

Below is the exact model output for one coded comment, alongside the parsed coding result.

Comment
Intelligence has at least two components: (1) Regurgitation; (2) Anticipation and Invention. Obvious AI can only do (1)... and if given biased training, can be easily distracted... as you can see in ChatGPT as you try to lead it to invention... or take exception to its biases. It not only can't think outside the box, you can put it anywhere inside the box you want just by slanting the training. Most minds can only do the regurgitation part and are paid well for it. Lawyers and politicians come to mind. Teachers too... but they're not paid as well. The invention part is accomplished by very few minds... and they are typically castigated for it. Witness Tate (not Tait) for (1) and Heaviside for (2). I must be in (2). I couldn't remember how to spell Tate. But ChatGPT knew (and his full name) once I gave it the information it needed for context to remember and distinguish. Interestingly I couldn't give Wikipedia enough information to find Tate or Taite or Tait with either spelling until I gave it his first name (which I got from ChatGPT). That really surprised me. As code comes back to looking more like COBOL, regurgitating AI becomes absolutely necessary. It looks like German. New words made up by gluing words together. And now totally dependent on the English Natural Language... oriental languages need not apply. BTW: You can usually tell good coders by how fast they type. The faster they type the more they prove themselves in the (1) category... because they've typed the same thing over and over. The (2) category coders can't type as fast because what they're typing has never been typed before.
youtube · AI Jobs · 2024-01-14T14:4…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   ai_itself
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugz-fo82PtqDlyXUdC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzngCRoKQzoqxBBULV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy9fPpL9vGODYhQSPN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwXQ4waBsF5PBrxGVF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzFP3kDDiAkYzXtdJl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgznDKPbLo3r6PGoJB54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw_srfSfyiDCXPfjmx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxZxhI-T_xiBZFubZN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgymMMChTbY3VZTi15t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwqZ8iNIokba5HAaTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]