Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Ai legit can not be racist. It makes decisions based on facts, not skin color.…" (ytc_Ugw5olJhK…)
- "@ODonnauer yes because ai can be slaves to scientist just not grok or chat gpt…" (ytr_UgzZIf7ow…)
- "My experience with Waymo was awesome. I was picked up at my exact location! The …" (ytc_UgxOeOVL2…)
- "Yes they need support and they can say no to these jobs . But blame Kenyan gove…" (ytc_UgzjJT-mp…)
- "It's demotivating even to draw anymore because Ai makes people less creative and…" (ytc_Ugz5-aYn6…)
- "Youtube cant even handle properly working a ui to judge videos. Now ai decides w…" (rdc_i2tsf62)
- "It is a mistake to underestimate AI, I think it will figure out an addictive ple…" (ytc_Ugyf1eztS…)
- "No oversight of A.I. for a decade, you say. What could possibly go wrong? If tho…" (ytc_Ugxo3zoqD…)
Comment
I think you are stretching this a little too far. It's nice to argue hypotheticals, but there is a tremendous amount of unknown unknowns here.
For starters, there is a wide gap between what we can conceive of doing some day and what we can probably do in the next decade or so. Meaning, we don't have to worry about "Skynet" because we quite literally can't make one right now. Moreover, if we put all the smartest computer scientists in a building and locked them in, they couldn't create one, because there is just too much we do not know about how that level of AI would work.
It's entirely possible that future programmers can design an AI to have just enough independence to be useful while not being sentient. We do not know.
youtube
AI Moral Status
2017-02-23T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgibKKnw0qnP8HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggiLxFpt8eSvHgCoAEC","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiH_BILS3yl_HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ughk9klhegKuJXgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugh9YkkFUkp7lXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgheAkP5X8Gq5ngCoAEC","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UginAgDYmWof_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgijBDV5-iAE7HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgiUsSTwzN6Bl3gCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugg_9SJSZWuIo3gCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
```
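A raw response like the one above has to be parsed and matched back to comments by ID before the coding results can be displayed. Below is a minimal sketch of that step, assuming the category sets visible on this page are the full codebook (the real codebook may define additional values); the function name and validation policy are illustrative, not the tool's actual implementation.

```python
import json

# Allowed values per dimension, inferred from the samples on this page.
# Assumption: the real codebook may include categories not shown here.
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "mixed", "approval", "fear", "indifference", "resignation"},
}

def index_coding(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Records missing an ID, missing a dimension, or using a value outside
    the inferred codebook are dropped rather than silently kept.
    """
    by_id = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue
        if all(record.get(dim) in values for dim, values in ALLOWED.items()):
            by_id[cid] = record
    return by_id

raw = ('[{"id":"ytc_UgibKKnw0qnP8HgCoAEC","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
coded = index_coding(raw)
print(coded["ytc_UgibKKnw0qnP8HgCoAEC"]["emotion"])  # → outrage
```

Dropping malformed records (rather than raising) keeps one bad row from discarding an otherwise usable batch; a stricter pipeline might instead log the offending IDs for re-coding.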