Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I think ai should only ever replicate things that cannot be done by humans, beca…" (ytc_UgyfiNC7Y…)
- "I graduated in the early 2010s, before AI, so I actually put in the 'real' hard …" (ytc_Ugw_2AbSt…)
- "AI is just to assist but cannot evolve by itself they are just sponges waiting f…" (ytc_UgyR3nm94…)
- "A counter to the scientific application as a scientist: look up rat d*ck. Someon…" (ytc_Ugxa2AUGq…)
- "This video is fantastic. I am an author and my first book was a sci-fi novella …" (ytc_Ugx6w_TIs…)
- "One of today's most prolific philosophers having a conversation with an early ve…" (ytc_Ugx-v1WxH…)
- "There are robots do humans have that blue thing on the ear so that’s of course a…" (ytc_Ugwgy_kQ0…)
- "i can't remember who commented this somewhere, but i'm going to quote them and i…" (ytc_UgzHLiM05…)
Comment
This is FANTASTIC! The problem we’re observing is a design problem: ChatGPT is designed to seem human-like - to comfort human users. Built into the UX is an inherent deception: common colloquial language we all use all the time, but that is often not true when we humans use it. Yet, in polite society, we forgive it implicitly with each other. Like…
Nancy: “hi John! I’m excited to run into you here. How you doing today?”
John: “ oh hey, Nancy! Great to see you, too! I’m good. Things are going swimmingly well.”
These cliches are sort of expected, even when they’re not true.
Source: youtube · AI Moral Status · 2024-09-01T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
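The Coding Result table above can be rendered directly from one coded record. A minimal sketch in Python (the record below is hypothetical and simply mirrors the "unclear" values shown in the table; the real tool's rendering code is not shown here):

```python
# A hypothetical coded record with the four dimensions used above.
record = {
    "responsibility": "unclear",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "unclear",
}

# Build the markdown Dimension/Value table row by row.
lines = ["| Dimension | Value |", "|---|---|"]
for dim, val in record.items():
    lines.append(f"| {dim.capitalize()} | {val} |")
table = "\n".join(lines)
print(table)
```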
Raw LLM Response
[{"id":"ytc_UgyjIbMoiHklaBheb4Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzAM_M9wr0ZP5nHSk54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyE1y5G5I-qjiRW3AJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOJvlk54eDi_RML-94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNt1kJWsJqq7BcZr54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyX0vbIUdAQHHTwWbF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzyMEkStB8ya03dqKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx5vNvQDEKYcT9WAgZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzKihi0_7wgbRH1vUZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwliKgQLwYhAIleP-14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}]
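The "look up by comment ID" view above can be reproduced in a few lines: parse one raw LLM response as a JSON array and index the rows by `id`. A minimal sketch, assuming Python (the two rows are copied from the response above; the actual tool's code is not shown here):

```python
import json

# One raw LLM response: a JSON array of per-comment codes,
# with the field names used in the responses above.
raw = """[
  {"id": "ytc_UgyjIbMoiHklaBheb4Z4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxNt1kJWsJqq7BcZr54AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]"""

# Index the codes by comment ID so a lookup-by-ID query is a dict access.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

record = codes_by_id["ytc_UgxNt1kJWsJqq7BcZr54AaABAg"]
print(record["responsibility"], record["emotion"])  # company outrage
```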