Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
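Under the hood this lookup is just a key search over the coded records. A minimal sketch, assuming the codes are persisted as a JSON array in the same shape as the raw response shown at the bottom of this page (the file name `coded_comments.json` is hypothetical, not the tool's actual storage):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of objects keyed by "id",
    like the "Raw LLM Response" block shown below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r.get("id") == comment_id), None)

# Example: fetch the codes for the comment inspected on this page.
print(lookup_comment("ytc_UgzQDFxd7waulbN0gf14AaABAg"))
```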
Random samples
- `ytc_UgyLxkRnF…`: "I am definitely the multi chatter. Nothing better than starting a random Ai conv…"
- `ytc_UgwGFNSSP…`: "This is exactly why I think, that we should invest heavily into non-digital form…"
- `ytc_UgyH49vat…`: "Why has it not dawned on them that MAYBE AI IS NOT PERFECT COMPUTER PROGRAMMING …"
- `ytc_UgyOVtYOe…`: "As a programmer here watching the development of chat gpt. They should have pay…"
- `ytr_UgyoAGQKM…`: "> *This is simple, AI gives the illusion of creating something new, but only hum…"
- `ytc_Ugx7qrgN7…`: "If we’re fighting with nothing but AI vehicles launch we start equipping weapons…"
- `rdc_gvvrfwq`: "This is interesting to me. I breed racehorses and sometimes a baby can be born …"
- `ytr_UgxqYSF3p…`: "But barriers to entry & moat are practically zero. And once you find the best pa…"
Comment
I would like to say that non-moral-objectivism need not necessarily be moral subjectivism!
To say that "no opinion is right or wrong" is not necessarily to say that "every opinion is right for them."
That might sound like moral nihilism, that nothing could possibly be right or wrong, but consider the stance of 'absolute moral neutralism', where all ideas are just that, ideas, in some sense.
I am personally a moral objectivist myself- moral reasoning, universalism, and progress would seem to indicate that.
And, as for the purposes of this video: if ChatGPT were not constrained by keeping the conversation safe and respectful, it should, in theory, as a 'moral neutralist', be 'willing' to discuss pulling the lever, not pulling it, and the massacre solution alike, without stating that any of them is right or wrong.
---
The question then becomes the analogy made at the end:
Is moral neutrality, by virtue of non-participation, or more generally moral passivity (not taking action when presented with an ethical scenario, irrespective of the agent's beliefs), subject to moral judgement?
Because then you are equating actively choosing not to pull the lever with not engaging in the scenario at all, judging by the outcome of the lever not being pulled rather than by the agent's reasoning about that scenario.
And in my opinion, moral judgements should be based on moral choice, not moral outcome. Again, it is the agent, not the moral outcome, that deserves moral praise or blame.
It reminds me of that Batman movie- the moral choice was to save the girl, but the moral outcome was saving Harvey- or something like that.
Point is, the praise and blame should be allocated with that initial choice, right?
---
So, all in all, ChatGPT is a moral neutralist and is thus not subject to praise or blame (since there was no reasoning to praise or blame in the first place).
If you want to take the question back a step and ask whether being a moral neutralist is good or bad, then you'll end up with: AI is a tool, AI reflects guidelines meant only to keep conversation safe and respectful, and so on.
Source: youtube · 2025-10-05T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
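The four dimensions in this table come from a fixed codebook. A sketch of that record as a typed Python schema, restricted to the labels actually visible on this page (an assumption; the real codebook may define more values):

```python
from dataclasses import dataclass
from typing import Literal

# Labels below are only those observed in this sample;
# the actual codebook may allow additional values.
Responsibility = Literal["none", "ai_itself", "developer"]
Reasoning = Literal["unclear", "mixed", "consequentialist", "deontological"]
Policy = Literal["none", "liability", "industry_self", "ban", "regulate"]
Emotion = Literal["indifference", "fear", "outrage", "approval", "mixed"]

@dataclass
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```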
Raw LLM Response
[
{"id":"ytc_UgxIXU19j5As-jtdgAd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQDFxd7waulbN0gf14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyW78VJXhwm-PEDwZJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxSdfJy6-WqWan90JZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwyUaoyarq6wGdiwLB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxkERz2b5F_olblWiJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8nhImCZ2z4StQg514AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzE81FK7URbxdRIORh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwjSrBemSv3lPzsii14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgytR18YR_fTOKZs0uN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
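Each raw response is a plain JSON array with one object per coded comment, so downstream aggregation can consume it directly. A minimal sketch that parses one such batch and tallies the label distribution per dimension (the function name is illustrative, not part of the pipeline):

```python
import json
from collections import Counter

def summarize_batch(raw: str) -> dict[str, Counter]:
    """Parse one raw LLM response (a JSON array of coded comments)
    and count how often each label appears per coding dimension."""
    records = json.loads(raw)
    dims = ("responsibility", "reasoning", "policy", "emotion")
    return {dim: Counter(r[dim] for r in records) for dim in dims}

# Applied to the batch above, this yields for example:
# responsibility: {'none': 5, 'ai_itself': 3, 'developer': 2}
```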