Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Uggvb1v6x… : "The thing is that if we do create AI that is conscious we could just ask it what…"
- ytr_UgzitJmAB… : "A survey showed that on average, AI alignment researchers are more optimistic by…"
- ytc_UgzzrIYLs… : "Ah yes, the ai ‘artists’ are mad that artists are ensuring that their art can’t …"
- ytr_UgyoP_Fe1… : "The " lack of adoption " as you put it, is because most of the things executives…"
- ytc_UgwjGkak5… : "Elon Musk once said, if AI poses any threat to humanity, you simple unplug it! W…"
- ytr_UgwSNE47Y… : "We're past the threshold of turning this back or controlling it. AI is now open…"
- ytc_Ugzi5Vas8… : "AI general intelligence is (going to be) so different from human beings, why wou…"
- ytc_Ugz-r82ts… : "AI is the means to end humanity. God will not be mocked. He designed humans to…"
Comment
A good example is the game, Detroit: Become Human. when AI become self conscious to the point of being almost human like they will need some form of rights, maybe different rights, but some none the less. but that is assuming they become almost human. the problem with AI having feelings we couldn't program them with true feelings, why? because feelings are tied up in our consciousness, which some would debate it is a soul, and other simply neural chemical responses. whatever the case may be, they only "feelings" would be preprogrammed responses. take siri for example, if I "insult" siri she will say "ouch" or "that wasn't nice" because she has feelings? no, if I say the words "hey siri, you suck!" she will search her database not too differently than this, inquiry/%you_suck%/cmd_line.624//run (yes ik that's not programming language). and when we do get AI that is self conscious I believe it will be purpose built tech, and not your refrigerator. but as the old adage goes, "We'll cross that bridge when we get there."
youtube · AI Moral Status · 2017-02-24T16:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UggJIup0iIlZVXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugiqorz5t1QhRHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_UghZ5Le5QNo9W3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj2YPylz7gmH3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiIQ5CNwZV0VXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UggW5A_hvTuZv3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugj0GWYELnqn_HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
{"id":"ytc_Ugi37YvVMkNA3ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjFDOQXOgm_-HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgjVqIuTCm8kfngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
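The raw response above is a JSON array of per-comment coding objects, each carrying the four dimensions from the result table (`responsibility`, `reasoning`, `policy`, `emotion`) keyed by comment `id`. A minimal sketch of parsing and validating such a batch follows; note that the allowed category values are inferred only from the rows shown here and the full codebook may include more categories, and `parse_codings` is a hypothetical helper, not part of any tool shown above.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"resignation", "unclear", "indifference",
                "approval", "outrage", "fear"},
}

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM batch response and index codings by comment ID.

    Rows with a value outside the expected categories are skipped,
    so a single malformed row does not poison the whole batch.
    """
    by_id = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return by_id

# Usage with a made-up comment ID:
raw = ('[{"id":"ytc_demo","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
print(parse_codings(raw)["ytc_demo"]["emotion"])  # fear
```

Skipping invalid rows rather than raising keeps a long coding run alive when the model occasionally emits an off-schema value; a stricter pipeline could instead log and re-prompt for the rejected IDs.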