Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking a comment up by its ID or by picking one of the random samples below.
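For offline inspection, the same lookup can be scripted. A minimal sketch, assuming the coded results are exported as JSON Lines to a hypothetical `coded_comments.jsonl` file whose records carry the same fields as the coding result shown further down:

```python
import json
from typing import Optional

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> Optional[dict]:
    """Return the coded record for a comment ID, or None if absent.

    Assumes (hypothetically) one JSON object per line, each carrying an
    "id" field plus the responsibility/reasoning/policy/emotion codes.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                record = json.loads(line)
                if record.get("id") == comment_id:
                    return record
    return None
```

Note that the IDs in the sample list below are truncated for display; a lookup needs the full ID, e.g. `ytc_Ugw2zFZ8-Lk4qBl1Xo54AaABAg` from the raw response at the end of this section.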
Random samples:

- "This was good, though I'd have liked a bit more attention put to the fact that s…" (`ytc_UgjWA5kpI…`)
- "Picture Wally. We can see how people lives making anything. They don't work and …" (`ytc_UgyS197qz…`)
- "chatgpt is wrong about the 1 child policy, there are simply too many old people,…" (`ytc_UgyWk4pcQ…`)
- "thats actly extremely terrifying. ppl dont c how dangerous ai cld be and how we …" (`ytc_Ugzk_DzQ2…`)
- "I asked ChatGPT this four months ago and she said she would never ever tell you …" (`ytr_UgyM0ShbR…`)
- "In case people are not Christian, we are in the last days of the last days. God…" (`ytc_Ugy2jcB8W…`)
- "21:07 It is impossible to teach a robot a laboratory procedure in a way which an…" (`ytr_Ugz-Et8In…`)
- "I know somebody that just lost a high paid job in design to A.I and commited sui…" (`ytc_UgxWdRLZL…`)
Comment (youtube · "AI Moral Status" · 2024-09-14T02:2…)

> The entire argument breaks down since it's just an AI. A chatbot simulating feelings to be engaging is not the same as a person telling white lies to keep the interaction smooth. You have sneakily led it into a contradiction, exploiting its design. Unlike with a human, it's a given that it's incapable of feelings / compassion, since it's de facto incapable of human experience. Thus only a human can be guilty of being deceptive, but neither an AI nor its devs. The person gullible enough to fall for it, however, is more problematically going to fall for human deception too. It's unreasonable for a human to point at a machine and use it as an intellectual punchbag.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
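The four dimensions form a small fixed schema, so a coded record can be modeled as a typed row. A sketch using only the code values observed in this section; the actual codebook may define more categories:

```python
from dataclasses import dataclass

# Code values observed in this section only; treat these as a partial codebook.
RESPONSIBILITY = {"none", "developer", "company", "ai_itself", "unclear"}
REASONING = {"unclear", "deontological", "consequentialist"}
POLICY = {"none", "unclear"}
EMOTION = {"approval", "indifference", "mixed", "resignation", "unclear"}

@dataclass(frozen=True)
class CodedComment:
    """One coded comment: the row shape behind the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
```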
Raw LLM Response
[{"id":"ytc_Ugw2zFZ8-Lk4qBl1Xo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWbM0a_gUrkCaoyU14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQfCYbTDYmimIPOfh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxbV6NwPmsQZk6XOGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmIBdI3_3aOuAZx0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCBwTllPyWMPR6jxx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyvnjp2FugVMhsovR14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxff0skDHuqcVLy19l4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw89P2UDAML_xjscmp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzMXrp_vqkT_cdkdrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})