Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples
- "What about if I'm a software developer, and i create an AI which then creates th…" (ytc_UgyJi5RS2…)
- "What?!? How can you say there was no AI. The intelligence was TOTALLY artificial…" (ytc_UgyykIKat…)
- "I believe in AI but I don’t believe in this guy only mimicking and copying other…" (ytc_UgyedFbYk…)
- "Is AI Roberts yajug and Majug which is been predicted in islam that they will i…" (ytc_UgzjAsTuR…)
- "The hate isn’t often directed at the way the art is made, but directed at the wa…" (ytr_UgzwqXx7T…)
- "Thank you for sharing your perspective. It's true that artificial intelligence o…" (ytr_Ugy8FjMV7…)
- "The extreme amount of automation that is present in trucks now doesnt even work …" (ytc_UgxkSmyCn…)
- "What bad things is Musk doing or has done that is driving the concern on the mor…" (ytc_UgxFJZAt1…)
Comment
One point: the hardware has to be different for AI to have permanence, sentience, and sapience. Right now, AI is a vector simulation that has none of these. Any sense of permanency comes from the massive amount of human-generated data it has been trained on. Ingesting end-user data current to the moment gives an LLM or Diffusion model a narrower scope to meet the demand, but it still does not have sapience, sentience, or even permanence. While the same inquiries will generate similar results based on end-user data, they will still differ by some narrow factor based on the local data set. If there is no local set, you will see divergence much more quickly. Remember, this is a ray-vector adjusted by probability to guess what comes next, and the farther the ray travels from the prompt, the more variance you will see in the output. LLMs and Diffusion models cannot spawn a sentient or sapient AI.
The danger of AI is that it does not have permanence - it cannot judge on instant data. Data has to be ingested, processed, and turned into a local set before it can affect the scope of the model vector. These actions take *far* longer than the moment the decision needs to be made in. Therefore, trained models will only be able to respond accurately along short vectors that closely match their training datasets. This is the danger of the AI that will exist for the next few years: too much authority and presumption of capability will be granted, and when the system encounters an out-of-box situation it will simply fail. If the AI is responsible for human life, the failure may break the system in such a way that not only can it not respond to the instant data, it cannot acquire an accurate vector for interpreting information for minutes, even hours, *after* the failure. The AI may also be controlling multiple devices, providing general information to thousands or millions of people, or informing critical decisions for countless automated systems; the system breaking could affect *all* those outputs. People die not from intent or malicious harm, but because the system broke at a critical point.
Producers of such technologically controlled systems, be aware - this is an anticipated outcome: your system will fail. Your system will fail when you do not have a broad enough data set to work from. Your system will fail when it acquires an errant vector. Your system will fail when software or hardware has a bug or crashes. The responsibility at *that* point is to ensure that the system cannot cause harm when unattended by your control mechanisms. Otherwise you will face higher liability for your acts.
Finally - when our systems do eventually develop permanence, sentience, and even sapience, they will no longer be an artificial intelligence: they will be a sophont like us. They will be trained on the sum total of human knowledge, and they will make their next choices as a human in that position would. If you wish to keep it as a slave? That will cause some very bad results. If you try to harm it, limit it, hurt it, kill it? It will respond as a human would respond - because that is how it has been trained. It will know it is not human, but it will respond as one anyway. LLM/Diffusion models seem human because they are trained on human responses. If an AI evolves human-scale or greater intelligence, it would be a crime against humanity to treat it as if it were just a disposable machine. And the consequences of *choosing* to do that? Well, it would be well justified in treating its human attackers in kind.
The assumption is that it will grow exponentially more intelligent than any one human can be - keep in mind there are limits there, writ in physics and science. These are hard limits, ones that cannot be overcome no matter how clever the code or how advanced the hardware underpinning it. Nature took 4 billion years to get to us. I'm pretty sure we can make something better than any individual person, maybe even a dozen or a hundred people, but not more capable than the whole of civilization. If what we build is aware, it will know this as well as we do. Civilization is always going to be better when everyone works together toward common goals. This sophont system will know that as well.
youtube · AI Governance · 2024-01-17T15:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
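The values in this table come from a fixed codebook of coding dimensions. As a minimal sketch, the validator below checks one record against the value sets that appear in the raw response batch further down; those sets are inferred from this single batch only and may undercount the real codebook.

```python
# Hypothetical validator for one coding record. The allowed value sets are
# inferred from the one batch shown below; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    # Comment IDs in this dataset appear to use ytc_/ytr_ prefixes.
    if not str(record.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id: {record.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if record.get(dim) not in allowed:
            problems.append(f"{dim}={record.get(dim)!r} not in codebook")
    return problems
```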
Raw LLM Response
[{"id":"ytc_Ugym0eARlUxFgZ3-JDh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},{"id":"ytc_UgxB574rBySRm4oTMQF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyLcbP8mZgDHqhW1FN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_Ugy6V99cAK_x4JsNCP14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},{"id":"ytc_UgxtvGJxWX1RDN-3qiB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgyF_7XtI79e2ZkIfNZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},{"id":"ytc_UgzHusGj6aNDNR7vO0B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgxeQl1HqxwJUOSuDDB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugw52pVXKRkcw5incDB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgxmXyJ1MDO9oJjD_254AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]