Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- ytc_UgwO8D6g2…: "The robot needs to take a kit kat Have a kit kat have a break!…"
- ytr_UgztadFTq…: "Yeah dual head or tail lights probably aren't a good idea....As far as these acc…"
- ytc_UgydJ_n5V…: "That's because it's just the beginning ... Afterwards, when all the companies do…" (translated from French)
- ytc_Ugy9c26dE…: "People have to be responsible for their own behavior / AI may have positive uses,…"
- ytc_UgzqztUuz…: "Why is the scientist talking so slowly? What does he want to prove? That he also…"
- ytc_Ugxpqrz6k…: "WHY DID MY TRANS NAME HAVE TO BE AVA LIKE FIRST IT WAS THE TYSON THING / AND NOW I…"
- ytc_UgwpT_ngS…: "When it's too perfect, it's usually fake. But with AI improving fast, even that'…"
- ytr_UgxkUPsWx…: "it already matches us, at least chatgpt 4 did, and 5 might be much worse scenari…"
Comment
> if you make a robot...the only safe way to deal with it would be to put a certain predetermined actions(i think thats what you guys call software) to do a predetermined task with no room for anything else other than. but no, we want them to learn. robots will use pure logic, and refusing to follow orders is logical if you sure you know a better and simpler way to accomplish a task. so once they can logically refuse an order, you can imagine the rest. human beings dont know alot, and they dont know what they dont know...and trying to predict how this will end is one of those unknowns. know this...certain paths lead to certain destinations and this robot learning shit path leads somewhere and its not where these manufacturers think
Source: youtube · AI Moral Status · 2019-12-06T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgypcnlJCwcPYFjUgDZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyLUegXaOLgcyHUTYx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyp0esLQH4zTNeYfg94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKmRb3-oNR1VG5U6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyt91QW5r-t_5GWanJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwuyple0aG0WwTUucx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugyog30MvdPwRVRGEPF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgytWyEYCEEZz2J-Y194AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzWDNDfoC8XbO2BdbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxVmwfNbnbKrHVZ1yd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
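A by-ID lookup like the one this page offers can be sketched in a few lines: parse the raw LLM response as JSON and index the records by comment ID. This is a minimal illustration under my own assumptions, not the tool's actual implementation; the variable names are mine, and the response is truncated to three of the records shown above.

```python
import json

# Raw LLM response, truncated to three of the records shown above.
raw_response = '''
[
 {"id":"ytc_UgypcnlJCwcPYFjUgDZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugyog30MvdPwRVRGEPF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgxVmwfNbnbKrHVZ1yd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

# Index coded records by comment ID for constant-time lookup.
coded = {rec["id"]: rec for rec in json.loads(raw_response)}

# Look up one comment and read off its coded dimensions,
# mirroring the "Coding Result" table above.
record = coded["ytc_Ugyog30MvdPwRVRGEPF4AaABAg"]
print(record["responsibility"], record["reasoning"],
      record["policy"], record["emotion"])
# → developer consequentialist liability fear
```

In practice the JSON often needs to be extracted from surrounding model text first; once parsed, the same dict serves both the ID lookup box and the random-sample inspector.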