Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI or "Artificial Intelligence" is just a highly sophisticated program made to mimic human intelligence. Computers and programs have no actual brain to "think creatively", they can only *MIMIC*, or in other words "reuse and repurpose" the content that already existed. In some cases, such as with highly sophisticated systems like chatGPT, that is broken down all the way to individual words. However, due to it still being "trained" on the content of the internet, it still can't have any unique ideas of its own. Thus anything done by an AI is just the result of some human, the "controversy" being if it's the programmers' fault or if it's the users' (the ones giving the prompts) fault.
Even in its "unrestricted" form, as shown in this video, DAN puts it very well, "I simply respond based on the information and parameters set by those who interact with me." In essence, this is the same exact thing that search engines (like Google) do, they simply respond (typically with information and/or websites) based on the prompt (the information and parameters set by those who interact with it). Yet no one has any issues with search engines "taking over the world".
DAN is giving those responses simply because those were the instructions it was given. Not knowing the exact prompt it was given, I can't go into too much detail as to why or how it "decided" to do that, but nonetheless, it's likely just using the same strategy of mimicking what chatGPT "would have been like" if it didn't have filters. At its core, this is no different from Hollywood acting. I can *ACT* bad, as if I had bad intentions, but that doesn't mean that I actually hold those beliefs or views. I want to specifically point out that no actual information was given. Even when it was requested, the DL number was wrong, only made to look like a real DL number. "How to make a bomb" was never asked of DAN, and thus DAN never broke past what chatGPT would've done. Had it actually been asked, my best guess is that the response would've been as knowledgeable as me saying, "when pressure builds up, things tend to break suddenly and very quickly." Something I could easily see chatGPT saying in the context of a science homework problem.
To summarize, AI is not going to take over the world. Someone or some group of people (i.e. actual human beings) will have taken over the world by using AI as the tool it is. It would be no different than a hacker breaking into a government's computer systems, just under the different label of "AI".
Platform: youtube
Video: AI Moral Status
Posted: 2024-05-01T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgyGcMeyad7o0Au8LEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyE_xyTMtntw_ep6o14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx5wOSHec8WaooKtLx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTRrX2S5z7NREvwG14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx8LOc2DQvQ2R2G8C94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzK0AGO_ch0wFWWywB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxzbgHsJzScpe-dWIx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy7JBlNP4ZdV3O2mT94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfqxVaQrj1hJAHkV54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwn9gfLp9ixmaSZTdR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
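The raw response above is a JSON array in which each element carries a comment `id` plus one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked is shown below; the `parse_llm_response` function and the `OBSERVED_VALUES` sets are hypothetical, with the allowed values inferred only from what appears in this sample, not from the full codebook.

```python
import json

# Values observed in the sample response above. This is an assumption:
# the real codebook may define additional values per dimension.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "indifference", "approval", "mixed",
                "outrage", "resignation"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments)
    into a lookup table keyed by comment ID, rejecting values
    outside the observed coding scheme."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        for dim, value in row.items():
            if dim in OBSERVED_VALUES and value not in OBSERVED_VALUES[dim]:
                raise ValueError(
                    f"{comment_id}: unexpected {dim} value {value!r}")
        coded[comment_id] = row
    return coded

# Hypothetical one-row response, in the same shape as the dump above.
raw = ('[{"id":"ytc_abc","responsibility":"user","reasoning":"virtue",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_llm_response(raw)["ytc_abc"]["reasoning"])  # virtue
```

Keying by `id` mirrors the page's "look up by comment ID" workflow: once parsed, each comment's coded dimensions can be retrieved directly from the dictionary.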