Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- rdc_oael5l2: This matches what I see on my team too. The gap isn't "can you use AI" - basical…
- ytc_Ugy4jFuu1…: AI was created and Run by The Kings of Terror, The Russians . AI controls the …
- ytc_UgwJLhNeS…: why are people who defend A.I. stuff always so god damn insufferable like even i…
- ytc_UgzkWKTXK…: he's got dem crazy eyes please don't feed the AI with info that could lead to j…
- ytc_UgwS4EXf3…: This all seems super scripted, like a bunch of terrible comedians wrote the man …
- ytc_Ugz4RSAgJ…: Americans ate headed for AI TROUBLE just like in IROBOT!!! FILM MAKERS DONT MAKE…
- ytc_Ugx9LpEJy…: I think the way that art is marketed by galleries and auction houses inadvertent…
- ytc_UgwLRo0uv…: Its that face with the evil laugh *meh meh meh ai meh meh" with the cigar 😂…
Comment
AI is just that: *artificial* intelligence. Programmed robots specifically to be intelligent won't happen. Consciousness is an emergent property. Human beings might be a mass of instincts, but what makes us aware is the fact that our consciousness emerges as a byproduct of the complexity of our our brain's interconnected nature. So intelligent robots will probably emerge accidentally once they get complicated enough.
Will these robots deserve rights? While a robot might not suffer physical pain, they might certainly suffer fear and dread if losing a piece in an accident. They might suffer depression if forced to do a job they hate. In cases where robots can have a sense of self or emotions, certainly the definition of person would be appropriate, and rights should apply. Which to me is a good reason not to make computers too complex.
Source: youtube · AI Moral Status · 2017-03-08T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UghKkbKM7RfTSngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgjI9lR0B-QpLngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghPZpawqsXIxngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg3YIAoHWeF73gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UghZs-vx_DY4WngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugj4TBYHcuy8QHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Uggj6wVem7oUqXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UghZ2Kej7Awjx3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UghHQZM9DEXzg3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiGAv21OsCOaHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
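A raw response like the one above can be parsed and checked programmatically before the codes are stored. The following is a minimal sketch, assuming the dimension values that appear in the coding table and raw response above are the complete codebook (the actual codebook may define additional values, so `SCHEMA` is an assumption):

```python
import json

# Allowed values per coding dimension, inferred from the examples above.
# This enumeration is an assumption; the real codebook may include more values.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded comment against SCHEMA."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"missing comment id in row: {row}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
    return rows

raw = ('[{"id":"ytc_UgjI9lR0B-QpLngCoAEC","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
rows = validate_response(raw)
print(rows[0]["emotion"])  # → indifference
```

Rejecting out-of-schema values at parse time catches the most common LLM coding failure (an invented label) before it silently enters the dataset.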