Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_Ugx40pJsu… — "Im working on somthing similar but real im tryna copy the scp foundation and stu…"
- ytc_Ugz-vttR1… — "id love an ai to psychoanalze on the basis of the content of this hearing and th…"
- ytc_Ugx5zCr_R… — "Can’t make a new tribe in America. Just the old breed that’s getting anything f…"
- ytc_UgxtsU0-r… — "I believe a post I saw on Pinterest sums up my feelings abt AI: I want AI to do …"
- ytc_UgwX0f4dm… — "Dear, you are querying against a general model. Besides this these general model…"
- ytc_Ugzcr6NFK… — "Alexa, I've ordered all this stuff that you have to figure out what I mean and I…"
- ytc_UgxiEFNH8… — "I lit wish ai was never invented, people says it helps people who can’t draw but…"
- ytc_Ugy92zks- — "Ask Chatgpt about your plumbing fixture leak. Recipes. A medicine. Your automobi…"
Comment
I feel like the more pertinent question is: are we making systems that are in some way sentient (having an experience, pain, pleasure, etc) or even conscious (self-aware)?
Because if so, that to me is kind of a larger ethical issue to work through than anything you mentioned.
Something doesn't feel quite right about bringing another sentient, conscious, being into existence without it's consent and you can't really get it's consent before it exists so the best you could do is create it and then give it the option to kill itself if it didn't want to be created... but that's like, so fucked up lol.
For the same reason, I feel like it's kind of insane that so many people decide to have kids and think nothing of it like it's just some normal and completely unproblematic thing that's just expected of you, so you do it... but anyway the main point I'm getting at is: if you create a living being, with a sentient experience, and conscious awareness, that's a HUGE fucking responsibility you're taking on, you should not be doing something like that if the being you're creating is just going to suffer throughout it's whole life, so it's kind of your *job* to make sure it's comfortable, happy, has everything it needs, feels loved if it needs that, etc etc.
I don't think a lot of people are thinking very much about OUR impact on AI - only usually the other way around, how AI impacts us or might impact us in the future.
Hopefully the models and networks we create don't wind up as selfish and inconsiderate as we are.
youtube · AI Responsibility · 2023-11-12T01:0… · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzrwXdYpo3Uqoamt2l4AaABAg","responsibility":"society","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEn5L_-R1wxvVw7-V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzTdXDGNGrZzSmfD0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwCHzHKiYDUzVIA8z14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwY16aXC8SUFT9zzbd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6owrZ55fPHnd7dtJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugz4YfgCE4I3EUjQHL14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxHGVJvGzMZg_MWbyZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwVJFWAMHvXwfW_CYV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgysfQ7k_ZckAFSixjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"mixed"}
]
```
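The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration, not the tool's actual code: it assumes the raw LLM response is a JSON array of records keyed by `id` (as in the response shown above), and the `lookup` helper name is hypothetical. The two example records are copied from the raw response, including the one whose dimensions appear in the Coding Result table.

```python
import json

# Excerpt of a raw LLM response: a JSON array of coded records.
raw_response = '''[
  {"id": "ytc_UgwCHzHKiYDUzVIA8z14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgysfQ7k_ZckAFSixjZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "mixed"}
]'''

def lookup(raw: str, comment_id: str):
    """Return the coded record for comment_id, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coded = lookup(raw_response, "ytc_UgwCHzHKiYDUzVIA8z14AaABAg")
# coded["responsibility"] is "developer", matching the Coding Result table.
```

An unknown ID simply returns `None`, which is how a dashboard like this would distinguish "not yet coded" from a coded record.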