Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
honestly yeah im an art baby and not very good at drawing, but its hard to have …
ytc_UgyUjPDTb…
Humans are weak because we are driven by fear rather than logic. If this ends up…
ytc_UgyZ3qAi5…
"But disabled people need AI to make art!" bro there are people with no arms who…
ytc_UgzDaYYc2…
Before making a decision on algorithm driven policing, politicians and activists…
ytc_UgycN-1Fg…
Honestly, if someone logins to my account in character ai and clicks on a “lan z…
ytc_UgzoiRGo0…
I'm sorry for this woman but to blame an AI bot for your depressed sons suicide …
ytc_UgwFWfGIm…
With a camera you still need to learn A lot of composition, lighting, colours et…
ytr_UgzOx3Ub9…
19:02 There are genuinely good uses for AI but art is not one of them!…
ytc_UgwuR8t87…
Comment
Actually do some research into the extreme danger of AI Research. For example in the context of this it may sound like scifi but it's completely possible for the AI system to determine that a method like drugging someone continuously is the best way to keep them happy. Because they could lead to take advantage of the reward pathway and dopamine system. They don't have to be programmed to do this because once we go from AGI (artificial general intelligence - as intelligent as humans) to ASI (artificial super intelligence - more intelligent than any existing person possibly more intelligent than the entire world's knowledge combined) the computer can program itself to be the most efficient it can possibly be. Take into account the fact that every computer program contains some bug in the stages of development, because we have human risk factor and error. Coupled with the fact that AGI only takes an hour to become ASI (meaning there would only be one hour to secure all safeguard implements and be absolutely sure there are no bugs, and if there are fix them, which anybody who knows computer programming understands is basically impossible to do) the result is that the system would almost certainly be unstable. There are many more reasons that developing AI is a bad idea, so many in fact that unless I wrote a 30 page essay there is no way I could explain all of them. So I suggest looking at what people such as Bill Gates, Elon Musk, and Stephen Hawking (factually the most intelligent recorded man alive) have to say on the issue. I'd also highly recommend reading this article if you're at all interested in the topic. http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
youtube · AI Moral Status · 2017-03-21T21:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgywSWFaUO62WmIow254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWLUfO3T59NOQI49Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxBaYy4u9QKjRZDi494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFl2iqPI3vDaryZfJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-x_0c2Ukqx6ea1FB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzsJQDO3apYlha7yHJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZcqNSyZFSho8VRaJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ughyo8YeCn9ePHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgiiEZ1wRuH6OHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UghHUTpBsjUQl3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
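The raw response above is a JSON array of per-comment codings, each keyed by `id` and carrying the four dimensions shown in the Coding Result table. A minimal sketch of the lookup step, assuming this array format (the `lookup_coding` helper and the single-entry sample input are illustrative, not part of the tool):

```python
import json

# Illustrative sample in the same shape as the raw LLM response above:
# a JSON array of objects keyed by comment "id".
raw_response = """
[
  {"id": "ytc_UghHUTpBsjUQl3gCoAEC", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# The four coded dimensions displayed in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(response_text, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(response_text):
        if entry.get("id") == comment_id:
            # Fall back to "unclear" for any dimension the model omitted.
            return {dim: entry.get(dim, "unclear") for dim in DIMENSIONS}
    return None

coding = lookup_coding(raw_response, "ytc_UghHUTpBsjUQl3gCoAEC")
print(coding)
# → {'responsibility': 'developer', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'fear'}
```

Note that the entry with id `ytc_UghHUTpBsjUQl3gCoAEC` in the raw response matches the Coding Result table above (developer / consequentialist / regulate / fear), which is how a displayed comment is tied back to its row in the batch output.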