Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any comment by its ID, or browse the random samples below.

Random samples
- "your video is click bait and literally pointless. Every designer is SAFE, AI is …" (`ytc_UgydHULy7…`)
- "Being a truck driver is not a healthy lifestyle, at least for long hauls I think…" (`ytc_UgxKqSbZ6…`)
- "You should watch \"I Hate Twitter Artists\" by HarmfulOpinions. It's a critique of…" (`ytc_UgzTGDulk…`)
- "@TheAIRiskNetwork How do you know that John? The way I see it, progress *is* be…" (`ytr_UgyxfZb5O…`)
- "Thank you I preach this to the heavens. Llms will not take over the world and be…" (`ytc_Ugzo_N4FI…`)
- "If Avi Loeb is correct then we won't have to worry about AI killing all of us...…" (`ytc_UgxOX5JzR…`)
- "AI reminds me when in the 80’s they told us that by the year 2000 they will be f…" (`ytc_UgxSK5af-…`)
- "@user-ky3is1ki5f Thank you for your comment! I appreciate your support more than…" (`ytr_UgxVBHd_U…`)
Comment
Failing to see the point of all this, it's really easy to fool around with ChatGPT, we all know that.
The thing can't hold an opinion and you keep asking for it ! THAT IS IDIOTIC, Alex, yeah, you too can be an idiot. I do that often too, now you're doing it - no grudge held.
You're getting stretches of text from a lot of different books, rearranged to give you the most statistically probable answer - the follow up found in those books to the text you gave as input. This is a curiosity for people who've never chat with AI, but a laborious exploration.
ChatGPT was genuine, obviously, you were twisting everything to get to your desired answer, knowing the weaknesses of that thing made it easy.
So it's just a show, empty of real content because your interlocutor was a cripple in emotional awareness.
Morality is an emotional parameter. ChatGPT has no emotion. It's answer was splendid: "I can't have a moral opinion, I don't have the tool to make a choice: I don't make a choice", that's a precautionary principle in itself though. It's not about the options, it's about the reasons, and the reasons are emotional, outside it's purview. Why didn't you first give a frame of mind for it to follow, you'd need to delineate a personality, but in doing so you'd be giving it your 'morality ', or an arbitrary one if your just toying but still based on your idiosyncrasies.
And you didn't even mention Free Will. No Free Will, no real choice. You used to do much better.
Source: youtube · Posted: 2025-10-13T19:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzLBeq8d6lIIm5Drgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw8m4Fl-BbNergL9J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyKjDp0n6ot9wgllox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwodsn1Jw97eGI_RQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwtq5RhAZ0N3_Xvhht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx9Sxklry1S9csY5cN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyDKZ8UbeHYgEO8TVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
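A raw response like the one above can be checked before its rows are accepted as coding results. The sketch below is a minimal validator, assuming the four dimensions shown (`responsibility`, `reasoning`, `policy`, `emotion`); the allowed values are inferred only from the sample output here, so the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the sample raw response
# above; the actual codebook (not shown in this page) may allow more.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with out-of-schema values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical one-row response, shaped like the real output above.
raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"unclear",'
       '"policy":"none","emotion":"outrage"}]')
rows = validate(raw)
print(len(rows))  # 1
```

A row with an unknown value (say `"responsibility":"robot"`) raises `ValueError` instead of silently entering the coded table, which keeps the Coding Result view consistent with the schema.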