Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The only way self driving vehicles will work is if every vehicle on the road is …" (ytc_UgxtQY-2R…)
- "It's so obvious how Trump-Nazi-coded Diary of a CEO is. "And the issue with thes…" (ytc_UgxCQpTtI…)
- "I understand your concerns about AI's advancements. It's important to remember t…" (ytr_Ugw3Q0YXN…)
- "12:30 “I appreciate your patience” I’m more impressed by chat gpt’s patience he…" (ytc_UgxYZq-Nn…)
- "It's like the chatbot listed the google search results for 'tell me a secret' al…" (ytc_UgyooFoav…)
- "is there really an expectation for self driving cars on secondary roads? Even i…" (rdc_d1kmdd1)
- "Ultimate "fuck you" move, literally teaching people how to mimic your art style …" (ytc_UgwlwCIiG…)
- "Fear AI as you fear your own child, for you are the one who guides its actions, …" (ytc_UgxQJDWZc…)
Comment
The trick will be the same as the last dozen times a new technology threatened jobs.
Combine harvesters, the cotton gin, all sorts of developments have had people up in arms trying to prevent the technology from existing, and in many cases (like the Luddites) violently smashing up the factories and equipment to stop the thing from happening.
That never ever works.
The trick isn't to try and prevent a technology/development with massive potential from existing or developing; it's that we need to finally make sure that the massive increase in productivity and profits which something provides is used to improve the lives of all citizens (and, at least in many cases, especially those for whom these developments mean unemployment or other job-related disasters).
If a company is going to obliterate a ton of jobs and gain a massive increase in profits and productivity, some portion of that needs to be distributed to those in need, especially those who are immediately impacted by said technology.
The idea that people could have prevented industrialization from happening is - in hindsight - truly preposterous. Increasing "AI" (because I think when people use the word "AI", about 99 times out of 100 it's not anything like real artificial intelligence... like how the new chatbot Microsoft made me use, which they said used "AI", was in no way better than a chatbot support program from a decade ago) and eventually actual AI is impossible to prevent, and in many ways perhaps not even something we would WANT to prevent in certain sectors.
It's not about hamstringing new technologies; it's about finally making sure that when the ultra-wealthy find a way to greatly increase profits in a manner which harms citizens and workers, some portion of that increased profit doesn't just go into the bank accounts of the ultra-wealthy, but to those in need. Not all of it, there should still be a motivation to improve productivity and progress, but things need to be more fair and balanced.
It might be universal basic income (which has the benefit of being ultra simple, and therefore not subject to a gigantic drain from the paperwork and administration required to process things), or it might be something else, but right now a dozen or two people (depending on whose numbers you take, and in what year) own more wealth than 4.5 BILLION HUMANS.
The future cannot be anything like what we would want it to be while a few dozen humans own more than half the humans on this planet.
As always, thank you Bernie. Can't even begin to imagine what this world would be like if you'd been elected...
youtube · AI Jobs · 2025-10-09T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz8oDCp3o_ZeeUNpol4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxzr4FTrgmbXLGKdKJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxEPiF5mzbrPho8EFF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQRtYqnc3iOl_q-ah4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwt3mTFVUySCpx8sh94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxvcDXgC7-xUx_NCAt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzM392l4VwKRTDApAp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxVR3lafyXR1cpSMJN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwVQvCcpKBtZE0cx8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxtvpZ4Y-EC17H61HR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```