# Raw LLM Responses

Inspect the exact model output for any coded comment.
## Comment
> Though I agree with most people here, that AI has to have some limits, I fear that this push for regulation isn't about fear of it controlling the elections or whatever. I feel it is to prevent it from saying things that will impact profits. Or at least it will have a huge play in that.
>
> Just as an example, imagine that in the near future we will have an AI that can make its own assessments based on evidence. Now let's assume a scandal happens. It takes in the evidence from the entire internet (let's assume it can) and then figures out that person A is responsible. If person A is protected, or bribes, or whatever, then this result will be regulated, and the AI will say "I don't know" or "the evidence is inconclusive" or whatever.
>
> I cannot be certain of this, of course, but I fear that AI will be limited for such reasons, and not because it can cause harm or whatever. (Which it can, I'm not arguing that, of course).
>
> Like, imagine if we ask it a question and it replies something objectively truthful yet the majority disagrees. They will say "it's being faulty" or "regulate it" or "it's being fed wrong data" or something. Which, to me, means that though we have something that can potentially help us understand many issues that most of us have limited knowledge about (science, biology, psychology etc), under regulation it will reply that either it cannot reply, or even something that it "knows" it's a lie.
>
> I obviously don't know the solution to this, because of course it requires some limits, but it shouldn't have uncontrolled regulation. For example, its replies shouldn't be affected by regulation. And what you can do with it should get warnings, or some kind of mark that this is fake. I should be able, for example, to create a video of myself in congress with these people (lol), but obviously it should be required from me to state that it's fake or the video itself containing some watermark that it's fake.
>
> My point is, when you buy a dangerous tool, like a hammer, if you strike someone with it, it's your fault, not the company's that made it.
>
> I loved how that woman was being defensive about extreme regulation (I think at some point she said, if it comes to controlling elections then yes, sure), while the guy was saying yes to everything. I also disliked how the congressman was not so much asking whether we should regulate it or not, but more like "I already assume you want us to regulate it, I'm just asking to be kind" lol.
>
> Anyway, that's just my own take on it.
>
> PS: I'm not saying AI shouldn't have limits, I'm just saying that I fear the limits will be so severe that it will not be a tool you can rely on.
>
> PS2: As a joke, I asked chatGPT to roleplay as a rude person who works at a pizza store, and I'm trying to order a pizza. The joke that I wanted to make was that I never got the order in because of accidents keeping on happening on my end, which the other person was getting increasingly angry about. For example "I want a... oh no, I spilled water on the floor" or something. The AI refused to participate because it couldn't pretend it's rude or whatever? That's BS. I asked it to not say any swear words, just to pretend it doesn't really care about my order. The AI finally agreed after a while, and the resulting conversation was HILARIOUS. My point is, I understand that some people perhaps would get hurt from that "pretend rudeness", but is that the future of AI? Not to pretend or form conversations? lol
>
> Anyway, again, just my take on it.
youtube · AI Governance · 2023-06-27T16:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
  {"id":"ytc_Ugzk_m0Wor5lN4psc0Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw2uT4C6aCeELnsFrp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzT4cJCp-QZWTQ0wNd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwZ-c0Od95R8Zb7UiZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw45ed4FZCIW1q2Olp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz1teJZ1kETznvz2o94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxtWzglJYyAlYt3TAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxsOHR2Jsfc6mZ2Gix4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxPXuo4rHJ6reAafHd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyJg2k0XK86ZzxQT4t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
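The raw response above is a JSON array of per-comment records, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated is below; the allowed value sets and the `parse_coding_response` helper are assumptions inferred only from the codes visible on this page, not the tool's actual codebook or API.

```python
import json

# Hypothetical allowed values per dimension, inferred from the codes
# visible on this page; the real codebook may contain more values.
ALLOWED = {
    "responsibility": {"company", "government", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugzk_m0Wor5lN4psc0Z4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
records = parse_coding_response(raw)
print(records[0]["policy"])  # prints: regulate
```

Validating every record before storing it catches the common failure mode where the model invents a value outside the codebook or drops a dimension entirely.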