Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Not a mystery at all. Starting with the great decoupling in the 1970s, human labor has been trending toward zero and will continue to. Think EXPONENTIALLY, not LINEARLY. Automation and robotics will necessarily replace human workers due to the current economic system. Today, AI is the WORST it will ever be going forward. It DOESN'T MATTER if AI "creates" 200 million new jobs...AI will solve and execute these new "jobs" much faster than humans will be able to adapt, learn, and fill them. The capital/employee model worked great...for quite a while. However, now companies that automate lower their costs and become more competitive. Those that keep higher-cost human labor necessarily fall behind and will go out of business. This creates a system where replacing people with machines is not just likely, but necessary for company survival. There is a way to deal with this humanely, but it would require a full revamp of the social contract and our concept of work. Even capitalists will want this change when unemployment reaches a tipping point and there’s no one left with money to buy the crap they’re selling.
I know, LLMs are leveling off, "what about the energy?," blah, blah, blah. Do not conflate LLMs with AI. LLMs are no more AI than vacuum tubes are computers or AOL dial-up is the internet. So, LLMs are leveling off. History shows technological advance is not just one sigmoid curve; it's a series of sigmoid curves, each successive one reaching new capabilities and new heights. To think that after 200,000 years this pattern is all of a sudden going to stop is absurd. I also understand new jobs will be created for humans, but it will go in this pattern: automation creates 5 new jobs this year and lays off 30; creates 5 new jobs next year and lays off 500; creates 5 new jobs the following year and lays off 1,500; etc. In the macroeconomic picture, do you not see this as an issue? I also understand there is more opportunity for entrepreneurs. Again, from the macro perspective, only a MINORITY of people have the entrepreneurial ability/spirit. It takes that, and a strong majority of people don't have it.
Reasoning AI Is the Next Big Leap. The future of AI is not just about generating answers, but about “reasoning”: breaking down complex problems step by step, more like a human brain. Breakthroughs like DeepSeek’s open-source reasoning models are already being incorporated into platforms like GPT and xAI, marking a shift toward more advanced, thoughtful AI. Think EXPONENTIALLY, not LINEARLY.
There’s a persistent and misleading narrative circulating, often perpetuated by short-term, goal-driven actors like politicians, universities, and tech company executives, that the future of work is secure. The claim goes: “You just need to learn how to work with AI,” and humans collaborating with AI will outperform either one alone. But recent research, including a landmark study from the University of Virginia, tells a different story. In diagnosing medical conditions, AI alone achieved far greater accuracy than either human doctors or doctors assisted by AI. The median diagnostic accuracy for the doctors using ChatGPT Plus was 76.3%, while the physicians using conventional approaches scored 73.7%. Sounds like “a win” at first. However, ChatGPT Plus alone achieved a median diagnostic accuracy of more than 92%. In other words, the comforting idea that humans will always be essential partners to machines is crumbling. In many fields, AI is not just a tool; it is the superior performer. This has staggering implications for the future of work, especially for the oft-repeated answer to prior technological displacement: “You just need to be retrained.” To the group advocating the "retraining solution," I pose this: you have a medical doctor who just finished 8 years of higher education and a low-paid 4-year internship/residency, who is hundreds of thousands of dollars in debt, and who was just outperformed by automation; what would you have this doctor retrained into that will offer a secure job?
Since the United States' moronic tax system derives about 70% of all tax revenue from taxing employees, automation will further exacerbate the problem. Tax needs to be based on what's going on now and in the future, not what worked 75 years ago. Oh, I know it's so evil to think, but we might need to tax the companies who are doing the automating. Even then, it's a win-win. For instance, say a company with 1,000 employees automates down to 100. The 1,000 employees paid federal taxes, including payroll tax, of about $20 million a year. Now the 100 people pay about $2 million a year. However, by automating, the company gains two things: it is no longer paying approximately $60 million in wages and its share of payroll tax, AND since nobody automates to lose money, it also has higher profits. What's so wrong about taxing that company on the $60 million it no longer has to pay, and on a fair percentage of the additional profits it now makes through the automation? Not all the additional profits, but some of them. Do that and there's higher tax revenue and more profit for the company. That's a win-win for the government and for businesses. If humanity, as a whole, weren't so stupid, we would just adjust as tech progresses so that EVERYONE benefits somewhat from each stage, and move smoothly into the post-labor economy.
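The arithmetic in the paragraph above can be sketched in a few lines. The head counts, tax totals, and wage savings are the comment's own figures; the 30% levy rate is an illustrative assumption chosen so the levy exactly covers the payroll shortfall, not a proposed real-world rate.

```python
# Figures taken directly from the comment; dollar amounts are annual.
payroll_revenue_before = 20_000_000   # federal taxes paid by 1,000 employees
payroll_revenue_after = 2_000_000     # taxes paid by the 100 who remain
wages_saved = 60_000_000              # wages the company no longer pays

# Revenue the government loses when 900 jobs are automated away.
shortfall = payroll_revenue_before - payroll_revenue_after  # $18M

# Hypothetical "automation levy" on the saved wages; 30% is an
# illustrative rate, picked here so the levy covers the shortfall.
levy_rate = 0.30
automation_levy = wages_saved * levy_rate   # $18M back to the government

# The company still keeps most of what it saved: the "win-win" claim.
company_net_saving = wages_saved - automation_levy  # $42M retained
```

Under these numbers the government is made whole while the company still pockets $42M a year, which is the comment's win-win argument in miniature; any share of the *additional profits* would be taxed on top of this.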
To understand this shift, consider the historical role of the horse and oxen. They were once essential to the economy as sources of power for agriculture, transport, and industry. Technologies like the horse collar and the steel plow dramatically increased their output, making them even more valuable. But then came the steam engine and, later, the internal combustion engine. These machines didn’t just help the horse; they outperformed it entirely. Trains, tractors, and trucks could do the job ten, even a hundred times faster and more powerfully. And from that moment forward, the horse was no longer critical to economic output. It was, in essence, unemployed.
This is what AI and robotics are doing to human labor. They are not simply replacing the repetitive or the physical; they are increasingly outperforming humans in cognition, strategy, design, and communication. And just like the horse, our value in the economic system will diminish, not from lack of talent or effort, but because machines will do the job better. That’s why we need to start thinking not in terms of retraining, but about restructuring society itself for a world where human labor is no longer essential to production. When the very essence of what makes an entity useful to the current economy is superseded (be that strength, endurance, dexterity, analysis, creativity, empathy, intelligence, or persuasion, among many other qualities), that entity becomes irrelevant to economic output.
Potential solution: Decentralized AI Ownership, a Bridge Toward Post-Scarcity. While Universal Basic Income (UBI) may serve as an initial economic stopgap, bridging today’s economy with tomorrow’s abundance, consider an even more equitable, longer-term solution: decentralizing the ownership of artificial intelligence itself.
Rather than concentrating the power of AI in the hands of a few corporations or governments, envision a model where AI entities are open-source and collectively owned. This could take the form of assigning each person a digital "share" of an AI system, not unlike a stakeholder’s share in a company. As the AI produces value, whether through intellectual labor, creative output, or autonomous economic participation, those shares would generate dividends distributed directly to individuals.
This model would ensure that as AI becomes the dominant productive force in society, the wealth it creates is shared universally, not monopolized. It aligns with the principle that the fruits of public knowledge made possible by centuries of human collaboration should benefit everyone, not just a few early adopters or private entities. In practice, this could look like:
- AI systems running Decentralized Autonomous Organizations (DAOs) that distribute profits algorithmically.
- Smart contracts ensuring transparent, equitable income flows based on collective stake.
- Global “AI cooperatives” where users vote on how AI is used, improved, and deployed.
- Universal basic compute.
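The share-and-dividend mechanism the list above describes can be sketched in a few lines of Python. The class name, the one-share-per-person default, and the payout rule are all illustrative assumptions; a real DAO or smart contract would run this logic on-chain, but the pro-rata arithmetic is the same.

```python
from dataclasses import dataclass, field


@dataclass
class AICooperative:
    """Toy model of the collectively owned AI described above: each
    member holds shares, and AI-generated profit is paid out pro rata.
    Purely illustrative; not a real DAO or smart-contract interface."""
    shares: dict = field(default_factory=dict)
    balances: dict = field(default_factory=dict)

    def join(self, member, n_shares=1):
        # Assign this person their digital "share" of the AI system.
        self.shares[member] = self.shares.get(member, 0) + n_shares
        self.balances.setdefault(member, 0.0)

    def distribute(self, profit):
        # Algorithmic payout, analogous to a smart contract: each
        # member receives profit * (their shares / total shares).
        total = sum(self.shares.values())
        for member, n in self.shares.items():
            self.balances[member] += profit * n / total


coop = AICooperative()
coop.join("alice")
coop.join("bob")
coop.join("carol", n_shares=2)   # larger stake, larger dividend
coop.distribute(1_000.0)
# alice and bob each receive 250.0; carol, with twice the stake, 500.0
```

The equal-share default mirrors the "each person gets a share" idea; whether stakes should ever be unequal (as with carol here) is exactly the kind of question the proposed AI cooperatives would vote on.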
If AI becomes the primary driver of economic output, then equitable access to its benefits isn’t just fair, it’s necessary to avoid deepening inequality and instability. It provides a scalable, technologically aligned extension to UBI, ensuring people not only have income, but true economic participation in the age of intelligent machines.
Ultimately, decentralized AI ownership may be the last transition phase before full post-scarcity, when advanced technologies like universal nanofactories or matter compilers render even economic structures obsolete. Until then, however, models like this offer a way to reclaim agency in a world where production no longer requires human hands, and where wealth can, for the first time in history, truly be universal.
youtube · AI Jobs · 2026-02-25T04:1… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgygXaiDoeex19MOX_54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy8mnhOd54n03Lu9zN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"confusion"},
{"id":"ytc_UgyQVJxX-iVSPvAzv594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9Ua6vQ_nPyyjYzVd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkRhWBlAp1C20jQ114AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx3RU3zGcOhNlaQLIN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxiNmUJRHt0fnSPMiR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmWL_Hk_ixABpaqYx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzBD72wdfPGST1Zwfd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyO7e1iqKCYvO-mJTl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
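The coded dimensions in the table above match one entry of this batch (`ytc_UgxiNmUJRHt0fnSPMiR4AaABAg`: responsibility none, reasoning consequentialist, policy none, emotion fear). A minimal sketch of how a lookup-by-id over such a response might work, abridged to that single entry; this parsing approach is an assumption for illustration, not the tool's actual code.

```python
import json

# One entry copied verbatim from the batch response above (abridged).
raw = """[
  {"id": "ytc_UgxiNmUJRHt0fnSPMiR4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"}
]"""

# Index the batch by comment id so any coded comment can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

coded = records["ytc_UgxiNmUJRHt0fnSPMiR4AaABAg"]
dimensions = {k: v for k, v in coded.items() if k != "id"}
# `dimensions` now holds the four values shown in the Coding Result table.
```

Indexing by `id` rather than scanning the list keeps lookups O(1), which matters when a batch codes many comments at once.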