Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click an entry to inspect)

- "Artists waste so much time on their passion to make arts. They have to at least …" (ID: ytc_Ugym8zYP3…)
- "If Ai can replace the jobs then what humans gonna work for ? How can humans gonn…" (ID: ytc_UgwsM3dJG…)
- "I do genuinely think that AI has the potential to basically destroy the Internet…" (ID: rdc_ohzrv5j)
- "Asking the AI what could be problems with AI. Oh the irony ;) What could be pot…" (ID: ytc_UgzSdj7lJ…)
- "Ai integration has seen an increase in work ours in the white collar sector. Whi…" (ID: ytr_UgyGAJGxw…)
- "HOW UTTERLY IRONIC THAT HOLLYWOOD WHICH MAKES SOME OF THE BEST MOVIES ON EARTH, …" (ID: ytc_Ugxl0PbLI…)
- "I can’t even look at the video. Is just flashing from clip to clip. Def. Ai or t…" (ID: ytc_Ugyd2_5wC…)
- "I want to see AI self driving vehicle's operate and plow snow in a snow storm.…" (ID: ytc_UgwN2RH0F…)
Comment
Whether we like it or not, the inevitability that logically follows from employing Superintelligence is that human beings will no longer be needed as either employees or employers. There will be no businesses. There will be no people who own more than anyone else. There will be no need for Directors; Superintelligence will subsume General Managers, Senior Executives, and all human business activity. There will be no need for a Universal Basic Income (UBI), as there will be no need for money. Superintelligence can perform tasks ranging from regulation to delivery through automation, without requiring financial compensation. With the rational use of resources, there will be no wastage, and everything will be utilised effectively. Nothing will be wasted.
Done right, our world will be reborn without wars, because everyone can have whatever they want, thanks to superintelligence and automation. The environment devastated by human ignorance and greed will see a golden age, because there will be no foolish humans willing to do anything to make money.
The sweet irony is that all those in business who view AI as a means to increase their profits and wealth are actually creating a world where they will ultimately lose all that. Rationally, if the superrich want to stay superrich, they need to exhaust all their will and power, to force the use of AI in clearly defined channels, and do everything they can to resist the creation of unchecked Superintelligence. If they foolishly lose control over its implementation, they will ultimately lose everything they desire and value.
We will try to maintain the illusion that humans are still in control. But this is a fiction. Would you let your favourite dog, Sam, organise your house when you are the most intelligent member of the household? With three caveats: first, it only applies if you are the woman in the house; second, there are things that only humans want to do, such as human community and self-expression; and third, it only makes sense if an emergent Superintelligence kindly allows us to feel part of what it does for us.
I cannot imagine any reason why a Superintelligence cannot do everything we do, negating the need for human involvement. Still, at the same time, it is not difficult to imagine a two-tier system, where Superintelligence does what Superintelligence does, and human beings, because life is social and communal, create their own structure for human participation in a way that Superintelligence cannot.
I envision a world where human beings are at the centre of the flow of life. It is possible that animals like dolphins are intelligent, and because their intelligence differs, we do not recognise the depth of that intelligence. We are part of a much larger ecological system, and there is no reason for believing that these other forms of life are not important in themselves. Human beings and AI may be two distinct forms of intelligence that can coexist harmoniously in a world at peace. That same dolphin lives a life of meaning because it is meaningful to itself, regardless of the degree of cognition. Intelligence alone is not the overriding concern; it is part of a package. Most living things possess some degree of intelligence, and this intelligence forms a spectrum.
One final point. There is a generally accepted assumption that AI will become Superintelligent. This is an assumption. (There are different kinds of intelligence, such as emotional intelligence, and machine intelligence may be only one of these.) The logic used to arrive at this assumption, as far as machine intelligence is concerned, is compelling. Still, it does not automatically follow that it is inevitable — for example, this assumes that there will be an unrelenting stream of evolving computer intelligence, but does this even make sense? Is there an upper limit to intelligence, whether that intelligence is human or machine? When we think of ourselves as human beings, we sometimes assume we have reached the pinnacle of our own intelligence, but why should we believe this is true? I have heard it argued that we cannot hope to match the intelligence of that Superintelligence, but this makes huge assumptions about human potential and what intelligence actually is.
Conclusion
We create our own reality. We always have. We are subject to the world and forces beyond our control, but it is our minds and hearts that define us, and only we can give that up. It cannot be taken from us.
Source: youtube · Topic: AI Governance · Posted: 2025-09-05T04:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzp8rfl-Rfc7vLJZnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxcJcMQpQ4nFhNMdWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxkBPWAdsw25fYzwkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwQXLB0IXjPPpyEW5N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxrNHrk9usYWbCPduN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzQlgBhlCHPECNGqvd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw6Vsbb7rjK2Bn6-Z14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1pmLjF3wNo27JYJZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwRkXOJFlUWlgrb0Gt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwwVJWwUXGZjWMcal94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
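The coding output above is a JSON array where each record carries the four dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and sanity-checked follows; the allowed category values are inferred only from the sample output shown here, so the real codebook may define additional values.

```python
import json

# Two records in the same shape as the raw LLM response above
# (IDs shortened here purely for illustration).
raw = '''
[
  {"id": "ytc_sample1", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_sample2", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
'''

# Category vocabularies inferred from the sample output; an assumption,
# not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "liability", "ban", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval", "resignation"},
}

def validate(records):
    """Keep only records whose dimensions all use known category values."""
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

records = json.loads(raw)
valid = validate(records)
print(len(valid))  # prints 2: both sample records use known values
```

Validating against a fixed vocabulary before storing results catches the common failure mode where the model invents an out-of-schema label.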