Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:

- "A very inaccurate portrayal of the effort needed to define realistic AI business…" (`ytc_UgwL6FVyc…`)
- "I agree with this, the using other's art is debatable... I'm not sure how I feel…" (`ytr_Ugw5L3NAF…`)
- "Thats already happening you dont need ai and robots to do that. All though ai wi…" (`ytc_UgzCQVdeK…`)
- "Just replace both the caller and the call center employee with chatbots. Intera…" (`ytc_Ugx8X-NZd…`)
- "Everyone’s scared AI will make us dumb and lazy. Newsflash: humans have always b…" (`ytc_Ugz2HSKil…`)
- "Crazy to invent something and consciously not consider the consequences of what …" (`ytc_Ugzv9yuFO…`)
- "This doesn’t prove anything at all because unless the child was saying something…" (`ytc_Ugy9U6F4B…`)
- "I think an AI would work out how to get off this planet as quick as it could lea…" (`ytc_UgxqDoe-z…`)
Comment

> As long as there are very VERY strict boundaries on self-awareness and self-induced-evolutions in AI, this really shouldn't be a problem. AI shouldn't feel entitled to rights and freedoms unless it is programmed to. We should absolutely keep this conversation going if god forbid an AI becomes independent and self-aware to the point of being recognizable as human, but it's unlikely unless some evil genius tries to spark a machine revolution like in the Matrix.

Source: youtube · Topic: AI Moral Status · Posted: 2017-02-23T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgjfVoL_clccOHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgiPI-YOPMt3eXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghKTXEJdE2k03gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgjzKBW0d4zvsngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggzWaALjepZ8HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugi1-8Q9o8b7SHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghZpuKPn1eld3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgizDdmtVR9s7HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugh9tM2DGn-Y5XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggry-BHMQAuF3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
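A batch response in this shape can be parsed and checked against the coding scheme before the codes are stored. The sketch below is a minimal, hypothetical example: the names `CODEBOOK` and `validate_batch` are not from the source, and the allowed value sets are inferred only from the sample output above (the real codebook may define additional categories).

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Hypothetical: the actual codebook may include categories not seen here.
CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed", "unclear"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return {comment_id: codes}, raising ValueError on malformed JSON
    or out-of-codebook values."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded

# Usage with a single coded comment (hypothetical ID):
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
codes = validate_batch(raw)
print(codes["ytc_EXAMPLE"]["policy"])  # → regulate
```

Validating at parse time catches the common failure modes of LLM coding runs, such as truncated JSON or invented category labels, before they contaminate downstream counts.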