Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Whenever I mention the claude 4 opus murder to Chatgpt, it will not under any pr…" (ytc_UgwNqhd_a…)
- "There will never be a.i equal to the soul,,it will always just do what it's prog…" (ytc_Ugz2VitJV…)
- "miyazaki is a legend, cant believe ppl are copying his art style with ai now, im…" (ytc_UgydVVD4z…)
- "Oh I’m a millionaire and make millions a year it’s overrated. People that actual…" (ytc_UgzAnSxWN…)
- "These were my thoughts while watching the video: In the case of self-improving …" (ytc_Ughe6jj7x…)
- "Absolutely not / If AI was actually correct and unbiased maybe / But that’s not t…" (ytc_UgyRmSV73…)
- "@DanknDerpyGamer I don't know if concept was the right word (It kinda works but …" (ytr_Ugyha6a5N…)
- "I'm all of a sudden imagining myself with a robot playmate in bed with me. Pleas…" (ytc_UgyfhgG33…)
Comment (youtube · AI Moral Status · 2018-07-13T18:5… · ♥ 1):

> Giving robot sentience is a bad idea. Have you never watched any robot movies. Also when making a robot you have to abide by the rules of robotics that robots must always do everything their owner says if it doesn’t infringe with rule 2. Rule 2; the robot cannot harm or let anyone be harmed.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugx7YznFYEUKkMe1iBd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzahW5WKawAqoKCB7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRHWKvJT8IhKO-_qF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbgNKJMW57e2gSy1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzYCJpRzmrEA7SN_ll4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7dI6ViiYSCEbnzft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzinrD6hweefSHzu-x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy9MR1jF5P4ZT51IHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy75Vkh-6d8zWFeqFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnZ11_1Tt2abQ2lgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"})
```
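Note that the raw response above closes its array with `)` rather than `]`, which would make `json.loads` fail and plausibly explains why every dimension in the coding result landed on `unclear`. The ID lookup can be sketched in Python; `index_codings` is a hypothetical helper (not part of the tool) that repairs that one malformation before parsing, and the two records in `RAW` are copied from the response above:

```python
import json

# Assumed sample: two records copied verbatim from the raw response,
# with the same malformed ')' terminator the model emitted.
RAW = (
    '[{"id":"ytc_Ugx7YznFYEUKkMe1iBd4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgzahW5WKawAqoKCB7t4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"})'
)

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and key each record by comment ID.

    A trailing ')' in place of ']' breaks json.loads outright; repairing
    that single character before parsing recovers all records.
    """
    cleaned = raw.strip()
    if cleaned.endswith(")"):
        cleaned = cleaned[:-1] + "]"
    records = json.loads(cleaned)
    return {rec["id"]: rec for rec in records}

codings = index_codings(RAW)
print(codings["ytc_Ugx7YznFYEUKkMe1iBd4AaABAg"]["emotion"])  # indifference
```

With the index in hand, "look up by comment ID" is a plain dictionary access, and any record missing from the index signals a comment the model failed to code.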