Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "So I shared chatGPT your video link and I ask her if I ask you same questions wi…" (ytc_Ugz2hlIcw…)
- "After 10 years she didn't get older she is a robot that's why chat gpt doesn't k…" (ytc_UgzJzUQCD…)
- "I prefer a round robot, unsexualized please. Like R2D2 but more physically funct…" (ytc_Ugz0jJ5GM…)
- "You have to tell AI it is a famous mathmetician like Hannah Fry before you ask i…" (ytc_Ugw81Tm4X…)
- "I never considered there was a difference between art and entertainment. But you…" (ytc_UgwMJSuN7…)
- "Yeah AI requires so much hand holding while writing code or doing anything I don…" (ytr_Ugy0epJfV…)
- "You don't need to research that much. Just link to a neural net that you trained…" (ytc_Ugwl1YzBj…)
- "People talking about "turning off" AI are missing the point. We're already seein…" (ytc_UgzZDoyRr…)
Comment
Well, my only point of contention here would be the final point, comparing the inaction of the incapable to the inaction of the unwilling. There’s a very subtle, yet very real distinction to be seen between these two concepts, despite their identical outcomes. Let’s instead substitute ChatGPT for a rock, and insert it into the trolley problem.
You set the rock in front of the lever and give it its only two options, which can be boiled down into “Action, or action through inaction”. The rock, of course, won’t act; not because it has some sort of conscious aversion towards making a decision, but because the capacity to choose was never there to begin with.
A human, from the moment they consciously comprehend the idea of the trolley problem, is already involved in the situation. The moment you realize the consequences of either decision, you’ve already been forced into said decision. The difference lies in conscious awareness and understanding of the situation at hand; otherwise, would we stipulate that all inanimate, unconscious objects are choosing not to pull the lever?
Source: youtube · 2025-10-15T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2PnJOa8dM8arkrVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz241Cy9m3-fqmcn354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyIFGk6tCItgBp7V4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugww43cHU9ErtCnvRZB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwxwFr__8Gur_VzsnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz0j1AgtucfAjX79gl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWSO0QwXrdr1u8iVx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxFlOe4NQwrqBxfX4F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
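The raw response above is a JSON array of coding records, one per comment, with the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the allowed value sets below are only those observed in this page's output, not necessarily the full codebook:

```python
import json

# Value sets observed in the coded output above; the full codebook
# may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"mixed", "indifference", "outrage", "fear"},
}

# Two records copied verbatim from the raw LLM response above.
raw = """[
{"id":"ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz2PnJOa8dM8arkrVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    dropping any record with an out-of-codebook value."""
    out = {}
    for rec in json.loads(response_text):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[rec["id"]] = rec
    return out

codings = index_by_id(raw)
print(codings["ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg"]["policy"])  # → regulate
```

Validating each record against the codebook before indexing catches the common failure mode of LLM coders drifting outside the allowed label set.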