Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Coding is, perhaps, one of the last things the current conceptual basis of AI should be doing. The current AI paradigm is extremely effective at taking a very, very large dataset and condensing its content down to a series of relationships it can use to respond to prompts.
That makes it very useful for something like "tell me what your training data says about..." or "what is said about...?"
The further you get from this group of tasks, the less capable AI becomes.
I do electronics as a trade and hobby. One of the things that comes with creating a board is routing. You have to do layout and routing that not only makes point-to-point connections correctly, but also factors in concerns like ground plane/return path propagation, data bus lengths, capacitance and inductance created by the arrangement, noise/crosstalk, etc. There is a lot of back and forth that has to be done, as a change to one part of the layout affects decisions made in other parts of the layout.
These are tasks which current AI models just collapse at - where the tools exist to allow them to try. Coding is very similar. Decisions made in one routine can have drastic consequences for the structure of other routines. The many different factors which can be optimized around are going to confound the current models.
The current models can do some crazy things and are "smarter" than most seem to give them credit for. That said, they are almost completely incapable of carrying out an instruction like "count the number of 'b's in the above paragraph," because they are unable to adjust what their frame of a 'token' is - and since we have no earthly idea what a token is in practice, AIs are very deceptive in just how limited they are.
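The token-counting limitation above can be sketched concretely. The toy vocabulary below is hypothetical (real subword tokenizers like BPE produce different splits), but it shows why a character-level question is trivial for ordinary code yet awkward for a model that only sees token IDs:

```python
# Toy illustration with an assumed, hand-made tokenization; real subword
# tokenizers (e.g. BPE) differ, but the principle is the same.
text = "absorb the bubble"

# Character-level counting is trivial for ordinary code:
char_count = text.count("b")  # counts every literal 'b' in the string

# A hypothetical subword tokenizer might split the same text like this:
toy_tokens = ["abs", "orb", " the", " bub", "ble"]
assert "".join(toy_tokens) == text  # the split still covers the full text

# The model receives token IDs, not characters, so the 'b' count is never
# directly present in its input representation - it has to be inferred.
print(char_count)  # → 5
```

The point is not that counting is hard, but that the quantity being asked about lives below the granularity of the model's input units.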
They can do some absolutely crazy and abstract things, coming up with working solutions to problems they were not trained on... But turning prompts into effective engineering code remains dubious.
Where AI could potentially be useful in coding is if you dispensed with higher level languages and instead trained them on assembly instructions for an architecture and then asked them to construct routines optimized around x, y, or z. They might make for some absolutely wicked compilers and decompilers when working under very specific constraints and very atomic sets of training data.
But... then read the voluminous output of an AI using assembly voodoo - output that would likely include self-modifying code - to verify it did, in fact, optimize appropriately as opposed to going on an existential rant... You may as well just use the compiler or your own assembly routines for the architecture.
Now... if the fundamental AI paradigm changes and we get algorithms whose computations can more easily adjust their framing and perform reverse-logic tasks... then that has a lot more potential, but it is also an infinitely more dangerous AI to hand controls to.
I do generally agree that if AI fell off a cliff, we would probably be better for it - but I see this less as a threat from AI directly and more as a threat brought on by human errors. It's not the AI being an AI that will harm me, necessarily. It is the human who thinks the AI is unquestionably correct or superior to their own reasoning abilities who will cause me and society the most harm.
Two of my long time friends are English teachers at the high school and college levels. Institutions have been completely unprepared to deal with AI, and students are turning in papers written with AI left and right. Some teachers even encourage them to use it. The most destructive impact of this is how many of them are basically illiterate. It's not even a matter of having the AI think for them - they are letting the AI speak for them, and they are not developing their own narrative voice/prose. It's worse than the mid-level manager who can barely navigate a keyboard and writes emails in barely restrained factory floor speak. It's a complete inability to develop and communicate their own ideas. Spelling may be all over the place, and a lot of not-words may be hastily stitched together by the guy who is clearly not used to the professional ends of writing - but you know who wrote it, and they have an idea they are trying to get onto the table.
Kids asking GPT to write a review for them and trying to change a few lines around here and there to not have the same review as everyone else don't have ideas to struggle with spelling out in the first place.
That's the most concerning thing about AI - how people will use it to weaken their own skills/abilities, and how they will defer to it without understanding its limitations.
Source: youtube · 2025-03-12T19:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwyDCMH30kGqHFdJRF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyw39MhL7MblnXhoph4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwBQ4CCTJxaIPIGd7B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2fxktcWSc3zGfvpF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8URsZJD1xOiKplxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgypeNZ8b-cvL2Bp3DN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyV4YjUP4E-Ky7TYkt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwn-Wn2cjP3ZeUj5Bd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwztBAjbt3luKApKU94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwTOdj0llFDtxHvMsJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
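A response in this shape can be checked before use. The sketch below is a minimal validator; the allowed value sets are inferred only from the sample above (the real codebook for this project may include additional codes), and the `raw` string is a one-record excerpt for illustration:

```python
import json

# Allowed codes per dimension - ASSUMED, inferred from the sample response;
# the project's actual codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "approval", "outrage", "resignation", "mixed"},
}

# One-record excerpt of the raw model output, for demonstration.
raw = ('[{"id":"ytc_UgwyDCMH30kGqHFdJRF4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')

def validate(records):
    """Return (id, field, bad_value) tuples for out-of-vocabulary codes."""
    problems = []
    for rec in records:
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                problems.append((rec.get("id"), field, rec.get(field)))
    return problems

records = json.loads(raw)
print(validate(records))  # → [] when every field uses an allowed code
```

Running this before storing coded results catches malformed or off-vocabulary model output early instead of letting it silently enter the dataset.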