Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It is not pure mimicry. It mix and rephrase, the result doesn't need to exist from before. It can be something new. You can also get it to play games like 20 questions, make it be a dungeon master for an extremely simple and short quest, or ask it to invent new words. But yes, at the core it is a sort of mimicry. But the worst part is that it lies when it should say it does not know. When you ask for something exact that requires a little logic then it is to often wrong. Unless someone else has asked your exact question many times before, then it might answer correctly. It is kind of stupid with logical tasks, but I guess it will be better at it when they figure out how to make language models work together with more logical AI methods. Meybe mix chatgdp with something like alphago.
reddit · AI Governance · 1676259029.0 · ♥ 25
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_j8btnv3", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8aqlm0", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8cptw3", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8azf85", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_j8az0m7", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
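The coding result shown above is one record of this batched JSON response: the row for id rdc_j8btnv3. As a minimal sketch of how such a response might be parsed and matched back to a comment (the field names come from the response itself; the `by_id` lookup is illustrative, not the actual coding pipeline):

```python
import json

# Raw LLM response as returned: a JSON array, one object per coded comment.
raw = """[
  {"id": "rdc_j8btnv3", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8aqlm0", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_j8cptw3", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_j8azf85", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_j8az0m7", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]"""

records = json.loads(raw)
# Index the batch by record id so a single comment's coding can be looked up.
by_id = {r["id"]: r for r in records}

coding = by_id["rdc_j8btnv3"]
print(coding["responsibility"], coding["emotion"])  # ai_itself mixed
```

Batching several comments into one response and joining on `id` afterwards is a common pattern; it keeps each raw response inspectable while still letting every comment be coded against its own dimensions.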