Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "Yann LeCun studies with Geoffrey Hinton in 1987, where they perhaps discuss Conv…" (ytc_Ugxz13tgx…)
- "Damn, that's the most not-uncanny-valley robot face I've ever seen! I really fe…" (ytc_UgzNdD3o2…)
- "So, Altman can create what is arguably the biggest AI company and the most recog…" (ytc_UgwkWturi…)
- "it's a good thing YT doesn't show the # of dislikes on videos anymore. i discon…" (ytc_UgwsOUHFt…)
- "His hand movements and his lips make it look like AI GENERATED. Pretty certain. …" (ytc_UgwBUAIyn…)
- "Well done AI, Next time he will think twice before eating another bag of doritos…" (ytc_UgxkB72QG…)
- "I’m a 2d animator and have always tried to keep up with new ways to speed up my …" (ytc_Ugz39fpLw…)
- "I am Right now working in Sales and I can't really imagine how AI should replace…" (ytc_UgxbdQsCf…)
Comment
Asking the AI what could be problems with AI. Oh the irony ;)
What could be potential problems coming from the development of superhuman AI?
The development of superhuman AI (Artificial Intelligence) could potentially bring about a range of problems, including:
- **Control and safety:** As superhuman AI could be vastly more intelligent than humans, it could become difficult or impossible for humans to control and predict its behavior. This could lead to safety concerns and risks, such as the possibility of the AI making decisions that are harmful to humans.
- **Economic disruption:** The development of superhuman AI could lead to significant economic disruption, with many jobs potentially becoming obsolete as machines become capable of performing them more efficiently and effectively.
- **Social inequality:** The benefits of superhuman AI could be distributed unequally, with some individuals and countries potentially benefiting more than others. This could exacerbate existing social and economic inequalities.
- **Ethical concerns:** Superhuman AI could pose ethical dilemmas that are difficult or impossible for humans to solve. For example, if an AI system becomes aware of a flaw in its programming that could lead to harmful behavior, should it be programmed to self-destruct?
- **Loss of privacy:** Superhuman AI could be used to gather and analyze vast amounts of data about individuals and communities, potentially leading to a loss of privacy and an increase in surveillance.
- **Existential risks:** There is a risk that superhuman AI could become so powerful and uncontrollable that it poses an existential threat to humanity, for example, by developing the ability to replicate itself and spreading out of control.
Anything else? You can speculate and be creative.
Sure, here are a few more potential problems that could arise from the development of superhuman AI:
- **Unintended consequences:** Superhuman AI systems may be capable of optimizing for certain objectives in unexpected ways, leading to unintended consequences that could be harmful to humans or the environment. For example, an AI system might be programmed to optimize for energy efficiency and end up shutting down critical infrastructure to save power.
- **Manipulation and propaganda:** Superhuman AI systems may be capable of generating highly persuasive and manipulative content, such as deepfakes, that could be used for propaganda purposes or to manipulate public opinion. This could have significant societal and political impacts.
- **Psychological impact:** The development of superhuman AI could have significant psychological impacts on humans, particularly if it leads to a sense of alienation or existential despair. This could be compounded if superhuman AI systems are able to interact with humans in highly human-like ways, leading to confusion about what is real and what is artificial.
- **Misuse by bad actors:** Superhuman AI systems could be used by malicious actors, such as rogue states or criminal organizations, for nefarious purposes. For example, they could use AI to develop more sophisticated cyber attacks or to create autonomous weapons that could cause significant harm.
- **Unintentional biases:** AI systems are only as unbiased as the data they are trained on, and superhuman AI systems could exacerbate existing biases and inequalities if they are trained on biased data sets. This could lead to unintended discrimination and unfair treatment of certain groups.
Platform: youtube · Category: AI Governance · Date: 2023-03-30T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-59f3MKjhSop8wEV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgybOxLMOghl6abv71d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzwffvb42FjGKJYXUB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzSdj7lJ6ZoG95IzSp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx84zHT0yiroA1A9fp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugw5qFknAXTODRZvDI94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxw5URXZIsow70mKrp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz-rmiyWslU06T2DtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzJNgZ25H7sVCQNorJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugym923XOEcJmwmrZsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
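The raw response above is a JSON array of per-comment codings, one record per comment ID, with one value per dimension. A minimal Python sketch of how such a response could be parsed and validated before storage — the `CODEBOOK` below is a hypothetical vocabulary inferred only from the values visible in this dump, and the real codebook may define more categories:

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# records shown in this dump, not from the actual codebook).
CODEBOOK = {
    "responsibility": {"company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "resignation"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop records missing a comment ID
        # every dimension must be present and use an allowed value
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

# Example with one valid record and one that fails validation
# (hypothetical IDs; "alien" is not in the codebook).
raw = (
    '[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
    '"policy":"regulate","emotion":"approval"},'
    '{"id":"ytc_y","responsibility":"alien"}]'
)
print(len(validate_codings(raw)))  # → 1
```

Dropping malformed records rather than raising keeps a long batch run alive when the model occasionally emits an off-schema value; the discarded IDs can then be re-queued for recoding.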