Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_UgyoEbzYL…`: "but how do AI become racist tho 💀 that sounds more like a problem with the progr…"
- `ytc_UgzXrAaUB…`: "We are all going to need neuro-link implants in 5 years to compete with AI…"
- `ytc_UgzUrNYnV…`: "Anyone who really wants to banay I should have to face the consequences of peopl…"
- `ytc_UgxlLM5yj…`: "The worst of the worst of AI abuse will be by humans in the short term. I welco…"
- `ytc_Ugyfv7qvQ…`: "AI can't threaten humanity because AI is heavily reliant on humans to exist. We …"
- `ytc_UgwiJ3MAc…`: "The AI component is just another more sophisticated example of what Google is do…"
- `ytc_Ugw2HhTF2…`: "The excuse I see a lot of by the the people who support AI art is that using AI …"
- `ytc_UgzBYFP71…`: "Oh theres Artificial Intelligence alright. I'm watching someone whos rife with i…"
Comment
@vintagesonic1 Your points are fallacious and misguided.
First part: Dave gave an explanation and professional sources of evidence of RSI, and it’s fallacious for you to assume you know otherwise. Dave made a conclusion based on the data presented, where you have none to refute the claim.
Two: You cannot claim to know how the AI works or thinks, as leading experts in their field do not know why they think. They can speculate to a higher degree than you. Dave even gave examples of AI demonstrating survivalist behavior.
Three: Every stress test could lead to catastrophic results and failure, as per your logic, they can act strange and unlike our own thinking and can fail without notice. Also fallacious as we do not know how they work and we are not experts, nor are you presenting data to show otherwise.
Four: Physical limitations are not true limitations. We already prioritize energy for data centers over living human communities. A human decision. If a super intelligence indeed emerges, it’s not a hard logical leap to assume it would have the capability to mass produce anything with any substance. One city would suffice for more than it could need (Chicago for example, raw material and water to cool).
Five: This is indicative of human behavior yes, but skill proficiency isn’t an issue with a learning model designed to learn and be the smartest thing on the planet. It will not have downfalls as it has unlimited access to information and potentially unlimited time if processing speed is fast enough.
Six: Your 6th point doesn’t have much to do with things that Dave or other well-adjusted adults agree on. There’s injustice in the world, so we should spend time on human rights before creating a super intelligence.
My conclusion: rebutting these arguments takes away from the severity of the situation. It is the same as discovering we could split the atom. It has been an amazing discovery that has benefited us, but we have used it for more malice than for good. We won’t get a second chance with super intelligence. We have to do it right the first time, so hand-waving away concerns with nicely worded comments from your bedroom as a non-expert damages trust in science and green-lights fast actions that could lead to terrible consequences.
Think a little more. Be realistic. It’s new and cool and exciting but this is the biggest thing we have ever attempted and it needs to have a level of reverence and respect.
youtube · AI Governance · 2025-08-27T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugwd90MzgToQMXn-1tN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_Q9TMlGfz7tOvZXt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugwt_XufW9YNHen4OwZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxGXtJQbFJa7fAAtF94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyCYeW-0dcc1esbUXp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
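The raw response above is a JSON array with one record per coded comment, keyed by the same five fields shown in the coding-result table. A minimal sketch of the "look up by comment ID" step, assuming only that batch shape; `index_by_id` is a hypothetical helper, not part of the tool itself:

```python
import json

# Assumption: the raw LLM batch response is a JSON array of records,
# each with an "id" plus the four coding dimensions. Two records from
# the response above are reproduced here as sample input.
raw = """[
{"id":"ytc_Ugwd90MzgToQMXn-1tN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_Q9TMlGfz7tOvZXt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]"""

def index_by_id(payload: str) -> dict:
    """Parse a raw batch response and key each record by its comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)

# Look up one coded comment by its full ID.
rec = codes["ytc_Ugz_Q9TMlGfz7tOvZXt4AaABAg"]
# rec["emotion"] is "resignation" for this record
```

Keying on the full `id` is what makes the truncated IDs in the sample list resolvable: the prefix shown in the UI is only a display affordance, while lookups run against the complete identifier stored in the record.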