Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Knowing how ChatGPT works, it can't have had an idea for an answer beforehand and must have been playing along with you until it decided to make one of your answers correct. Like if you hadn't asked whether it was a painting, then its answer wouldn't have been a painting. It's impressive how convincing it sounds despite this. Unless they've given the chat version some hidden temporary memory functions, but I've not heard of anything like that. With the API you could give it a function call to store and retrieve its chosen object; it would be interesting to see if it is able to use it properly.
reddit AI Moral Status 1705891711.0 ♥ 761
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kizjbrg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_kj0gun2", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_kj2cwma", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_kj1mpwm", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_kj1rprq", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
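A minimal sketch of how a response in this shape could be parsed back into per-comment codes. It assumes the model returns a well-formed JSON array of records keyed by a comment id; the ids and dimension names mirror the example above, but the `dimensions` helper is illustrative, not part of the actual pipeline. Note that a response with a stray trailing character (e.g. a `)` where `]` belongs) would fail `json.loads` outright, which is one way a batch can end up coded as all-unclear.

```python
import json

# Illustrative response in the same shape as the raw output above.
raw = '''[
  {"id": "rdc_kizjbrg", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_kj2cwma", "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"}
]'''

# Index the records by comment id for direct lookup.
records = {rec["id"]: rec for rec in json.loads(raw)}

def dimensions(comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if missing."""
    return records.get(comment_id, {})

print(dimensions("rdc_kj2cwma")["emotion"])  # outrage
```

A tolerant parser would wrap `json.loads` in a `try`/`except` and fall back to marking every dimension "unclear", which matches the coded result shown above when the raw response cannot be decoded.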