Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's trained on a large amount of data, some of which is likely incorrect. But it's not on purpose and the correct data likely outweighs the incorrect. But it's not really built for the task of medical advice or economics. It's just a general chatbot that is able to "guess" what to say. In the case of more complicated things it doesn't always guess right. So if something *sounds* right then ChatGPT will run with it because that satisfies it. But it's not purposefully built to get things wrong. That would be bad for business. It's just not built for that and not advanced enough yet. Ensuring perfect accuracy is at this point impossible.
Source: reddit · Topic: AI Governance · Timestamp: 1681069629 · ♥ 7
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: resignation
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jfipad4", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jfm1eq5", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_jfmoggo", "responsibility": "government", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jfopvm6", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jflx7iy", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
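The raw response is a JSON array covering the whole batch, one record per comment, so recovering a single comment's coded dimensions means matching on the `id` field. A minimal sketch of that lookup, using the response text above (the assumption that this comment's id is `rdc_jflx7iy` follows from the last entry, whose values match the coding result shown):

```python
import json

# Raw LLM response exactly as returned: a JSON array of coding records.
raw = """[
 {"id":"rdc_jfipad4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_jfm1eq5","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
 {"id":"rdc_jfmoggo","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_jfopvm6","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_jflx7iy","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

# The entry for this comment (id assumed from the matching coded values).
coding = by_id["rdc_jflx7iy"]
print(coding["responsibility"], coding["emotion"])  # ai_itself resignation
```

Indexing by `id` rather than by list position keeps the lookup correct even if the model returns the batch in a different order than it was sent.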