Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For the most part you are using it correctly. But many people (especially managers and the like who don't directly do the work) often use it heavily to just write the code directly, take first suggestions, or otherwise have it do most of the work and think they can just proofread it. The basic flaw of ChatGPT is that the data it provides is not genuinely reliable. Much like you could cut down a tree with an axe as readily as with heavy machinery, the real effectiveness of ChatGPT's responses is highly variable. So using it like you are is great, as it's just an assistance to your work. Yet I've seen businesses outright replace teams of people on the basis that ChatGPT systems can just do the job, or otherwise radically overestimate its abilities or underestimate its problems.

One of the myriad issues ChatGPT introduces is that it will do things wrong an employee never would, so many of the problems it creates aren't even considered, because you wouldn't normally have to worry about them, leading to management applying it in error.
reddit AI Governance 1684575155.0 ♥ 2
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: unclear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jksgo9t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jkro1cf", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jkytrfb", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jkvut3l", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_jkss7ef", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
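A response like the one above can be checked before its codes are stored, since malformed JSON or out-of-vocabulary labels are what typically leave a record coded as "unclear". The sketch below is an assumption about how such a validation step might look; the field names come from the response itself, but the allowed value sets are inferred from the handful of responses shown here, not from a documented codebook.

```python
import json

# Allowed labels per coding dimension. NOTE: these sets are an
# assumption inferred from the sample responses, not an official schema.
ALLOWED = {
    "responsibility": {"none", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "mixed", "unclear"},
}


def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record.

    Raises ValueError if the JSON is malformed, a record is missing a
    dimension, or a label falls outside the allowed vocabulary.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed LLM response: {exc}") from exc
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id', '<no id>')}: bad {dim}={rec.get(dim)!r}"
                )
    return records


# Example with one record copied from the response above.
raw = (
    '[{"id":"rdc_jksgo9t","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
codings = parse_codings(raw)
```

A failed parse can then mark the document's dimensions as "unclear" instead of silently storing partial codes.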