Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "scammers using AI is concerning enough already how much more to something relate…" (ytc_Ugzqz5xNO…)
- "hey varun, nice vid . but i just didn't get the context of the statement in your…" (ytc_Ugw8iWe7c…)
- "As someone that's been working in tech for 25+ years, I know not to believe any …" (ytc_UgyQ_ilqb…)
- "Art; The conscious use of the imagination in the production of objects intended …" (ytc_Ugwsiaakt…)
- "God some artists are just insufferable. If your work is so bad that you feel leg…" (ytr_UgwfVTKlH…)
- "While I agree that there is some concern with AI taking over, the story about EC…" (ytc_UgwXo4Npq…)
- "Gemini mightve been confused on 11:11 because it said "I would pull the lever TO…" (ytc_Ugyo-25kL…)
- "So here's my view/dilemma with this whole issue. I don't have time to get a band…" (ytc_Ugw4hxbrh…)
Comment
There is absolutely no reason to give a robot any more functions than is necessary to acomplish whatever tasks we require of them.
Even if we were to be successfull at creating robots capable of interacting with us in ways that seemed human, they would still lack intelligence. Why? Because they wouldn't need it to fool us into thinking their behavior is natural or willfull.
I can see ourselves be EASILY satisfied with robots that would seem almost human in appearance and behavior whilst, in the back of our mind, we'd still be (admitedly or not) seeing them as little more than amazing pets.
My bet is, that as we get better at creating advanced/complicated AI, more and more we'll come to grasp the sheer magnitude of what it takes to engineer something we deem to be TRULY concious... from scratch.. without billions of years of evolution to program countless subtleties and intricacies.. and ultimately, without knowledge of the true nature of our own concious mind.
Thank you if you've read all of it, I can always apreciate someone who isn't dettered by one my wall of text xD.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-24T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgjkJ5oGO9Wrg3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugixgzq73KpX43gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh5JFZ79nf9MXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgicBH5REIL6ZngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgisEJ6s7i1KOXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggmBsI9cRijcXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UghdMxvyt73s-XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjF9I1mY-z9s3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiL4ECa6MeGC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh3qhnb7IodFHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
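The raw response above is a JSON array with one record per comment, and the "Look up by comment ID" feature implies an index from ID to coded dimensions. A minimal sketch of that lookup is below; the `index_codes` helper and the two-record sample payload are illustrative assumptions, not the tool's actual implementation (the real IDs and batches are longer than shown).

```python
import json

# Hypothetical sample payload in the same shape as the raw LLM response
# above: a JSON array of per-comment codes across the four dimensions
# from the Coding Result table (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_UgjkJ5oGO9Wrg3gCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugh5JFZ79nf9MXgCoAEC", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

def index_codes(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and map each comment ID to its codes."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"}
            for rec in records}

codes = index_codes(raw_response)
print(codes["ytc_UgjkJ5oGO9Wrg3gCoAEC"]["emotion"])  # indifference
```

Keying the index by comment ID makes rendering a "Coding Result" table for any selected comment a single dictionary lookup rather than a scan of the batch.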