Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Racism is ignorance!" AIs are litterally trained with such an amount of human … (ytc_UgzQuq7Mt…)
- The fear of AI is due to humans assuming AGI will still have human like thoughts… (ytc_UgzdjrbkR…)
- AI is already creative. Just give it scope and ask it to write a poem. Pretty … (ytc_Ugxq9CRqf…)
- Flawed technology is everywhere impacting everyone whether it’s face recognition… (ytc_UgwyMAHb9…)
- If you want toneducate yourself more on the topic of AI, listen to Ed Zitrion. H… (ytc_UgyczWR0e…)
- @redbird8888 so did you, for our life time will be the golden age of building ou… (ytr_UgxW8au-F…)
- The term "bias" has different, but related meanings in statistics and machine le… (ytr_Ugw8-jz4m…)
- Fun fact; Did you know that a Google’s GPT called Gemini insulted a 29-year old … (ytc_Ugy3Re5LL…)
Comment
I'll stick my neck out and say this: Everyone arguing in favor of the workers is either outright wrong or is upset with the wrong people.
Deleting jobs through automation is not a bad thing, regardless of how those losing those jobs feel about it. How many people do you see decrying the traffic light? How many yearn for the days of standing in the hot sun or rain for 8 hours a day while hoping the next car through the intersection doesn't just drive right into you? Nobody, because they know, in the back of their minds, that that is a shitty job that nobody wants to do, and now those who would have had to do so are free to do other, likely more fulfilling work.
But you can bet more than a few of the traffic cops doing those jobs back in the day weren't terribly pleased about having to find new work. This is just more of the same (and it's not unique to trucking). Automation, as it has always done, is merely increasing the efficiency of human labor. In this case, the goal is simply to move goods from A to B. Now, rather than requiring hundreds or thousands of drivers, this can be done with the oversight of just a handful of people keeping the hardware and software working. The labor resources previously tied up in trucking have been freed up to do something else that needs to be done. Preventing this from happening would be society actively working against itself. Put shortly, all of this is just a more wordy way of pointing out: "Make-work economies don't work."
More efficient use of labor is, objectively, a good thing, when taken at face value. However, there's a major caveat that's already been pointed out by plenty of people: Automation (not just in trucking, but in general) tends to funnel wealth to those who already have it. Those who can afford the upfront cost can undercut the competition, get more business, and consolidate their industry. This is fine, when it makes goods and services less expensive, but we already know that isn't what tends to happen with industries toying with monopoly. More typically, prices change little (or even get worse) and those at the top simply make more profit. Never even mind what happens to the displaced workers. Helping an entire industry's worth of workers find new jobs is basically unheard of.
This is just going to keep happening until it reaches a tipping point at who-knows-when. Automation, especially automation powered by capable AI (never even mind the upheaval that would/will be caused by a true general AI), WILL eventually replace virtually every job, and quite probably at a rate too fast for society to deal with. The only way to deal with this will be for everyone to collectively realize, agree, and actually ACT on the idea that an economy is a _byproduct_, not an end goal. You can't have people or companies with horrendous levels of wealth that don't actually do anything except play financial games. Nor those companies that do provide useful goods and services, yet play those same, manipulative games in order to maximize profit in ways that don't have anything to do with what they produce. They tie up and extract way too much capital as it is, never even mind what they'd be capable of if they needed only a tiny fraction of the human effort they currently do.
I hate seeing videos like this. I love seeing new, powerful, emergent technology that has the capability to be incredibly useful. And then that's always hit with the immediate come-down of "oh, yeah, great, the wealth disparity problem is going to get even worse". The only way to head off what increasingly looks like societal disaster, long term, is for everyone to have both a comprehensive understanding of the organizations they do business with and (I cannot believe this is probably a bigger issue than the former) have the will to NOT GIVE MONEY TO COMPANIES THAT DO SHIT THEY SHOULDN'T DO IN EXCHANGE FOR SHIT THEY WANT BUT DON'T NEED.
Is that possible? Widespread hardship *is* one of the things (possibly the only thing) that can instigate significant change in a culture. Could it get the average individual to be willing to learn about and be responsible for the consequences of their actions outside of their immediate, personal impact? When such things are immaterial, possibly even not well-defined? I don't know. But if I had to guess:
lol
lmao
youtube · AI Jobs · 2025-05-28T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy3uh64BoODBc8dFip4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyKkn_4liRAjvGfald4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxMFBxq5k9oR6GEtdl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzX0PGbxp16t-ECqQx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzrCQCqc1DilbFL9q14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz4UV1eLERz2jilD914AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzAKhMC0NPBAmf6_Xt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJMqCHVm1WOdW7oGh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxLt1cRUOqtowMV0UF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzVybH9mEd7UehAbdZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]
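The raw response is a JSON array of per-comment codes, one object per comment ID. A minimal sketch, not the tool's actual implementation, of how such a response could be parsed and indexed to support the "look up by comment ID" view; the two records below are copied verbatim from the response above, truncated to two entries for brevity:

```python
import json

# Raw model output in the format shown above: a JSON array where each
# object carries a comment ID plus the four coded dimensions.
raw = '''[
{"id": "ytc_Ugy3uh64BoODBc8dFip4AaABAg", "responsibility": "none",
 "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
{"id": "ytc_UgyKkn_4liRAjvGfald4AaABAg", "responsibility": "company",
 "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

# Index the array by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve one comment's codes by its ID.
record = codes["ytc_Ugy3uh64BoODBc8dFip4AaABAg"]
print(record["emotion"])  # indifference
```

If the model ever emits a malformed array (e.g. a stray `)` in place of the closing `]`), `json.loads` raises `json.JSONDecodeError`, which is a reasonable point to flag the response for manual inspection.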