Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Were you using GPT-3.5 or GTP-4? GTP-4 passed the bar exam. That being said, I could totally see GTP-4 hallucinating wrong answers. That's it's real limitation. It doesn't know if it is right, wrong, or even the probability of it being wrong. As impressive as ChatGPT is, this is obviously a huge problem. I believe if it could recall its training data that it could solve this, but OpenAI currently won't allow it.
Source: reddit · Topic: AI Responsibility · Timestamp: 1684372796.0 · ♥ 9
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_jkkweqs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_jklauv9","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_jkku1kc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"rdc_jkldfz1","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"rdc_jklp2k6","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
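The raw response can be matched back to an individual comment by its id. A minimal sketch in Python (the ids and field names come from the response above; the variable names are illustrative, not part of any real pipeline):

```python
import json

# Raw model output: a JSON array of coding records, one object per comment.
raw = """[
  {"id":"rdc_jkkweqs","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_jkku1kc","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]"""

# Index the records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the coding for the comment shown on this page (id rdc_jkku1kc).
coding = records["rdc_jkku1kc"]
print(coding["responsibility"], coding["emotion"])  # → ai_itself mixed
```

Any record missing from the array (or failing `json.loads`) would surface here as a `KeyError` or `JSONDecodeError`, which is a simple way to spot malformed model output during inspection.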