Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am sorry to break another doomsday scenario in this channel, which I still love, but AI is not sentient. It doesn't want to conquer the world: it just doesn't want. It's an advanced mathematical and algorithmic technique for optimising very complex functions. Both the input data and the functions to optimise are provided by the programmers. Now, if we ever find a function that expresses consciousness, then AI may become self-conscious; but there is no market for that. I want an AI car that just drives me home and doesn't get distracted thinking about the last fight it had with my microwave oven. And in the case that we ever develop a sentient AI that, for not well specified reasons, wants to kill us all, that is not as threatening as it may seem. Sentience is sure to be a very complex computation; a non-sentient AI programmed to kill the sentient AI on a laptop and loaded onto a drone would be infinitely more efficient, and would get rid of it in no time flat.

A human brain has immense computational power. Just a few years ago it was estimated to exceed the combined computational power of the whole Internet, and even if the raw computational power of the Internet were to surpass that of a single human brain, that doesn't take into consideration the slowness of data transfers across the world. AI is able to perform specific tasks better than humans because it dedicates all of its very limited resources to a small, very specialised task. Give a Chinese jet's AI something else to think about other than dodging a shot, and it'll be worse than even a toddler on an amusement-park ride.

Also, while it "appears" that AI wrote the novel, what actually happened there is that the AI computed a sequence of letters and words that were taken from texts written in contexts matching the prompt you gave it. That is, it recombined texts written by humans about AI domination, and generated a plausible output. That's all.
If you're curious about how it actually works, google "Attention Is All You Need", the 2017 paper that explains this algorithm. It's an 8-ish page description of how to put together a very simple set of neural networks -- the brilliance was precisely in simplifying the much more complex models that existed before. Once you understand how recombining texts gives the illusion of consciousness, the whole AI business is totally demystified.

Last but not least, as for the experts issuing warnings: first of all, most of them are experts in the BUSINESS of AI, that is, in how to make money out of it, rather than engineering experts. Even Musk, for all his business acumen -- I doubt he ever sat down and wrote a GPT model from scratch in an afternoon (a thing anyone can do right now). Moreover, since we're in the realm of conspiracy theories here, rule #1 is: follow the money. Who would gain from government regulation of AI, and a 6-month head start on the latest technology? I think you guessed right. Creating an artificial monopoly on AI while the technology is so simple you can build it on your gaming PC in an afternoon (yeah, there's more to it than that of course, but that's the level of "science" required to step into the field) is in the interest of all the companies and CEOs currently rallying yet another round of fear porn in order to get their way. Stay safe.
youtube AI Governance 2023-10-21T11:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxX60GAs9PcA7pukNl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwQU6AEuwl6fPFn11x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyUi73Zz6U25uEDL9x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugy6yHcJ72JyUdzLP5d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugx8QM0O8UDurrvHW9F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzQ4mY5ZmqnOYZS1pV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgwYfSUnJwisSAx6tVN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwLhJlgOT-zVNmOKQt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyKYTudoDNmPv-hBGV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxF5Yi7TAYhd9OoE1l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"})
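Note that the raw response above is a JSON array of per-comment codings, but as logged it closes with `)` instead of `]`, so a strict JSON parser rejects it. A minimal sketch of how such a response could be repaired and looked up by comment id (the ids and field names come from the log above; the repair heuristic itself is an assumption, and which id corresponds to the comment shown is not stated in the log):

```python
import json

# Abbreviated copy of the raw LLM response from the log above; note the
# stray ')' where the closing ']' of the JSON array should be.
raw = ('[{"id":"ytc_UgyUi73Zz6U25uEDL9x4AaABAg","responsibility":"unclear",'
       '"reasoning":"unclear","policy":"unclear","emotion":"unclear"}, '
       '{"id":"ytc_Ugy6yHcJ72JyUdzLP5d4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"})')

def parse_codings(text: str) -> list[dict]:
    """Parse the model output, repairing a trailing ')' typo if present."""
    text = text.strip()
    if text.startswith('[') and text.endswith(')'):
        # Assumed repair: the model closed the array with ')' instead of ']'.
        text = text[:-1] + ']'
    return json.loads(text)

# Index the codings by comment id for lookup.
codings = {c["id"]: c for c in parse_codings(raw)}

# This entry happens to carry the same all-"unclear" values shown in the
# Coding Result table above.
print(codings["ytc_UgyUi73Zz6U25uEDL9x4AaABAg"]["responsibility"])  # unclear
```

Without the repair step, `json.loads` raises `json.JSONDecodeError` on this response, which is one plausible way a batch of codings could silently fall back to "unclear".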