Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One of the older forms of AI (and highly valuable) is the expert system, to which I had exposure in grad school about 30 years ago. TurboTax is an example. (In fact, the very definition of "AI" can include something as simple as a light switch, as our professor explained. Don't assume ChatGPT is the only, or even most significant, possibility.) The thing about an expert system is that the algorithms and data come from actual experts, and, done well, it can perform highly accurately IN DESIGNED SCOPE.

The other important thing to know about ANY instance of AI is that it can be defeated by GHS (genuine human stupidity). Worse (and this is true of all software), you can have failure modes that don't look like failure. And this models the similar properties of humans: we can "know" things that aren't so, or have subtle but devastating bugs in our calculations. (As but one example, consider the multiple-root problems in calculating IRR for a business.)

So do NOT trust any system for which known, fixed, legal accountability is not established, and known to the developers, BEFORE any development begins.

Now, for medical stuff, it's much less dangerous if your AI helper suggests, "Did you check this also?" than if it confidently says, "This is the problem, and here is the solution," and you blindly (pun not intended, but appropriate) believe that. The doctor is the one who put in the years of study, and MANY examinations of MANY patients, and the one who will be sued if an error occurs. That's why her parents named her "Dr." 😊
youtube AI Harm Incident 2024-06-12T21:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugzaf8fXlKv0i_Cwo8x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx-CGvyU76UsmQc1Gx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw4l1YzIp6NjqmVMGB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxYufgTsolUkBpbCHt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgypJ60xt366IbVdMpV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgyePHui_mV3FDpQ2D94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz_ZDXbZEXvA_f0qrB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgybOB9RRi6_u1meerd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwisFlo5Byk8likc8R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzFAdtA49VOFYgHrPF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"})
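Note that the raw response above is not valid JSON: the array opens with `[` but closes with `)` instead of `]`. A plausible explanation for every dimension coding as "unclear" is that the pipeline's JSON parser rejected the output and fell back to a default. The sketch below illustrates that failure mode; the fallback behavior and the `parse_codes` helper are assumptions for illustration, not confirmed by this page.

```python
import json

def parse_codes(raw: str) -> list:
    """Parse the model's JSON array of code objects.

    Returns an empty list on malformed output, which a downstream
    aggregator could then render as "unclear" for every dimension.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return []

# Shortened stand-in for the response above: opens with "[" but ends with ")".
malformed = '[{"id": "ytc_A", "responsibility": "none"})'
well_formed = '[{"id": "ytc_A", "responsibility": "none"}]'

assert parse_codes(malformed) == []        # parse fails -> default codes
assert len(parse_codes(well_formed)) == 1  # same content, valid closer -> parsed
```

The same ten records would parse cleanly if the trailing `)` were a `]`, which is consistent with the per-comment codes visible in the raw string never reaching the result table.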