Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Failing to see the point of all this, it's really easy to fool around with ChatGPT, we all know that. The thing can't hold an opinion and you keep asking for it ! THAT IS IDIOTIC, Alex, yeah, you too can be an idiot. I do that often too, now you're doing it - no grudge held. You're getting stretches of text from a lot of different books, rearranged to give you the most statistically probable answer - the follow up found in those books to the text you gave as input. This is a curiosity for people who've never chat with AI, but a laborious exploration. ChatGPT was genuine, obviously, you were twisting everything to get to your desired answer, knowing the weaknesses of that thing made it easy. So it's just a show, empty of real content because your interlocutor was a cripple in emotional awareness. Morality is an emotional parameter. ChatGPT has no emotion. It's answer was splendid: "I can't have a moral opinion, I don't have the tool to make a choice: I don't make a choice", that's a precautionary principle in itself though. It's not about the options, it's about the reasons, and the reasons are emotional, outside it's purview. Why didn't you first give a frame of mind for it to follow, you'd need to delineate a personality, but in doing so you'd be giving it your 'morality ', or an arbitrary one if your just toying but still based on your idiosyncrasies. And you didn't even mention Free Will. No Free Will, no real choice. You used to do much better.
youtube · 2025-10-13T19:2… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       unclear
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzLBeq8d6lIIm5Drgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugw8m4Fl-BbNergL9J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyKjDp0n6ot9wgllox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwodsn1Jw97eGI_RQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwtq5RhAZ0N3_Xvhht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx9Sxklry1S9csY5cN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyDKZ8UbeHYgEO8TVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]