Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- “I don’t know what artists you see, but I see plenty of artists shit on ai art. M…” (ytr_Ugxa6WcbW…)
- “The thing is, the ground rule of keeping AI locked away from the rest of the wor…” (ytc_UgxNcb1WA…)
- “Hm however people are forgetting people like him at openai have never ever solve…” (ytc_UgyoZ_xy_…)
- “We're doing a disservice to the AI by starting with this post, Alex? We need to …” (ytc_Ugxok1dQ6…)
- “If most labor in the future will be far cheaper with AI and robots than humans, …” (ytc_UgwVcMMTU…)
- “So I’m not sure about if I caught what was happening in this video but I think I…” (ytc_UgyXLx8hJ…)
- “Something I hate about ai art is how they eyes always look big and lifeless, the…” (ytc_UgxiwGdS3…)
- “good tip. If an ai bro ever posts a meme image with "Stop using AI it's bad, QUI…” (ytc_UgyAz_NW9…)
Comment
I recently did a bunch of suspension work on my car. With the actual service manual in front of me, I wanted to see how Gemini would handle this. While it actually saved me a ton of money by suggesting I get the parts from a known website instead of a brick-and-mortar store, the rest was extremely concerning.
While it was generally helpful when I ran into snags getting stuff apart or together, a lot of the technical details were just completely wrong. One crucial thing is torque specifications. These are critical on suspension components. Too tight and you might break things. Too loose and things might come off... and break things. The problem is, it won't tell you "I don't know." It will, with 100% confidence, tell you how much you need.

In one particular instance, the nuts that hold the strut onto the body of the car, it was insisting on a value that was half of what the manual said. Oh, and by the way, this number was completely different from a number it gave me much earlier; because I'd gotten the wrong parts, I had to circle back to this later. I wrote down everything it said, and days later the torque value for the same part was somehow magically different. It was literally telling me that if I used the old number, I would break it. So I uploaded a sheet from the manual for my exact model, and suddenly it's apologetic: "I stand corrected. That is the official value from Mazda. You were right."

With all the things wrong with OpenAI these days, I tried the same thing. It actually had the wherewithal to say, "These are critical components; I'm unable to provide exact numbers. You need to consult a manual." I was blown away, because for the first time ever, an AI actually admitted it didn't know and wouldn't give me something.
These were both using paid models. If it's this bad at literally just finding and referencing published information, how on Earth did we ever think it would replace people?
On one hand, technology needs to be used in order to progress; people need to play a video game or use software to find and report bugs. But at this stage, we still don't have reliable autonomous cars, and the same goes for what's being shipped here. The difference is that these things are effectively in an indefinite paid beta (or maybe even alpha), yet the industry has already decided that this technology is settled.
The first scheduled revenue passenger flight occurred in 1914, a full 11 years after the first successful flight of an airplane. ChatGPT was released in 2022, and within two years we've decided to fire everyone and replace them with something that has not been tested.
youtube · AI Jobs · 2026-03-30T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
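
If you want to work with these codings in your own scripts, here is a minimal sketch of the record shape in Python. The label sets are inferred only from the values visible on this page (the pipeline's real codebook may include more categories), and the `CodedComment` name is illustrative, not part of the tool.

```python
from dataclasses import dataclass

# Allowed values inferred from the coding table and the raw response below;
# the actual codebook may define more categories than appear here.
RESPONSIBILITY = {"ai_itself", "company", "none"}
REASONING = {"consequentialist", "deontological", "mixed"}
POLICY = {"none", "liability", "regulate"}
EMOTION = {"fear", "approval", "outrage", "mixed", "indifference", "disapproval"}


@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject labels outside the observed sets; useful because the coder
        # is itself an LLM and can emit malformed or off-codebook output.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"unknown responsibility: {self.responsibility}")
        if self.reasoning not in REASONING:
            raise ValueError(f"unknown reasoning: {self.reasoning}")
        if self.policy not in POLICY:
            raise ValueError(f"unknown policy: {self.policy}")
        if self.emotion not in EMOTION:
            raise ValueError(f"unknown emotion: {self.emotion}")
```

Validating against explicit label sets catches drifting or hallucinated categories early, before they pollute downstream counts.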
Raw LLM Response
```json
[
  {"id":"ytc_UgwxHUMafu1HzOHiLZh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwGX-YYIylpLnV_Uml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxiWNuQTMU56U_ZLa94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzj02fpYrJDMvcntMh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxOhFen-YeFTJbp1YR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNIKO_3sgMVy-l-U94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"disapproval"},
  {"id":"ytc_UgwSAZPbtqz9VOJO8d14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwQ8En8N_uo8lpBlBp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwDdig-ISqFAJFESsl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz-pQ126xm-9O7IcZR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
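
The response is a plain JSON array with one object per comment, which is what makes the lookup-by-ID feature at the top of the page possible. Below is a minimal sketch of that lookup, assuming the raw response text is in hand; `raw_response` and `index_by_id` are hypothetical names, not the tool's actual API.

```python
import json

# Stand-in for the raw LLM response shown above,
# truncated to a single record for brevity.
raw_response = '''
[
  {"id": "ytc_UgwxHUMafu1HzOHiLZh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
'''


def index_by_id(raw: str) -> dict[str, dict]:
    """Parse a batch response and key each coding by its comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}


codings = index_by_id(raw_response)
print(codings["ytc_UgwxHUMafu1HzOHiLZh4AaABAg"]["emotion"])  # -> fear
```

In a real pipeline you would also want to verify that every comment ID you sent came back exactly once, since batch prompts sometimes drop or duplicate items.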