Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
People are getting dumber with each generation!!! And you wonder why AI won't be…
ytc_UgzbMvffK…
I would observed the AI along with my conversations have probably become more mo…
ytc_Ugw7UhejY…
A great artists also said-
“There’s nothing wrong with having a tree (or AI) as …
ytr_UgyKsQAnS…
Do I understand it right, that we have already being living on stage ten of arti…
ytc_Ugyg1DKd3…
Lol relying on AI will be companies worse mistake. Can't tell you how many times…
ytc_UgzgaXN8E…
@8:16 LLM stands for Large Language Model, not Large Learning Model. And that's …
ytc_UgwQOfc5c…
Saw a Waymo in Beverly Hills stuck in the right lane for a construction barrier …
ytc_UgwPM3hS8…
It's not about rejecting labor solidarity. I'm in support of the writer, but rec…
rdc_ju8ei8v
Comment
Something worth bringing up is the question of "fun", or what the robots want.
The concept of suffering, and the rights that follow from it, is an excellent question, but it is only half of the question.
For instance, say we programmed the robots to have fun mining: to get a shot of excitement every time a cave-in almost happens, and to simply find joy in sorting one type of rock from another.
Maybe they would enjoy it so much that, left to their own devices, they would choose to start mining whatever they could, all by themselves.
Would the robots then demand the right to mine? Perhaps even be willing to take up jobs cleaning the environment so that they can go home and mine for a few hours each day? Could this even be called a 'right'? It's what the robots both want and demand, but does the fact that we programmed them this way make it less than the rights we ourselves are programmed to want and demand?
youtube
AI Moral Status
2017-02-23T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
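A coded record like the one above can be checked against the category vocabulary before it is stored. A minimal sketch follows; the allowed values are only those that appear in the raw response on this page, so the real codebook may define more, and `OBSERVED_VALUES` and `validate` are illustrative names, not part of any tool shown here.

```python
# Hypothetical validator: the value sets below are inferred from the
# raw LLM response on this page and may be incomplete.
OBSERVED_VALUES = {
    "responsibility": {"none", "distributed", "developer", "ai_itself", "user"},
    "reasoning": {"unclear", "deontological", "mixed", "consequentialist", "virtue"},
    "policy": {"unclear", "regulate", "none", "liability", "industry_self", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval", "fear", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED_VALUES.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above passes cleanly.
coded = {"responsibility": "developer", "reasoning": "virtue",
         "policy": "industry_self", "emotion": "approval"}
print(validate(coded))  # []
```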
Raw LLM Response
```json
[
{"id":"ytc_Ugju7aEfGYlMBXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugjeziu4V1EknXgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgiLWOcRt89jfHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Uggqw-SfwBxqHngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UghD-anJqaf-jngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugg2MtUBRNtZ9ngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UghG18WWY7H_q3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UggNgQ_Hy9w9vngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgjaMZKvJE3S4XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugi411ebTWTvlXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
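The raw response is a plain JSON array, so the "look up by comment ID" view above reduces to parsing it and indexing by `id`. A minimal sketch, reproducing two records from the response; the `by_id` name is illustrative, not from the tool itself.

```python
import json

# Two records copied from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugg2MtUBRNtZ9ngCoAEC", "responsibility": "developer",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugi411ebTWTvlXgCoAEC", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Index the array by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}
print(by_id["ytc_Ugg2MtUBRNtZ9ngCoAEC"]["emotion"])  # approval
```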