Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI coding currently is just glorified autocomplete. It reads your code for similar patterns and guesses at what a pattern should be given the context of your question. It often misses novel parameter names or event names that don't yet exist in your code, and instead guesses them from common patterns on the web. For example, if your web component throws an event called "someValueChanged", but you have another component that consumes other web components and catches "valueChanged", it will create a consuming component with a "valueChanged" listener. Since it's a probabilistic model, the more common the "valueChanged" pattern is in your current codebase, the more inclined it will be to assume it's always "valueChanged". Idk if that makes sense. It does this with everything. Another example: if you add a dependency with a local path in your package.json, it will guess the path; for example, it often writes "../<library name>" even though that's just not what the directory is called locally. So you have to review every line of code, because this happens a lot. You're basically writing the code anyway, since de-AI-ing your code is a huge pain: it's very spaghetti and verbose.
youtube AI Jobs 2026-01-28T21:0…
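The event-name mismatch the comment describes can be reproduced in a few lines. This is a hypothetical sketch, not code from the commenter's project: a bare EventTarget stands in for the real web component, and both event names ("someValueChanged" and "valueChanged") are the ones the comment uses.

```javascript
// A producer component that really dispatches "someValueChanged".
// (EventTarget and Event are available as globals in modern Node and browsers.)
const producer = new EventTarget();

let handled = false;

// What an AI assistant might generate: it pattern-matches on the more common
// "valueChanged" name seen elsewhere in the codebase...
producer.addEventListener("valueChanged", () => { handled = true; });

// ...so the event the component actually throws never reaches the listener.
producer.dispatchEvent(new Event("someValueChanged"));
console.log(handled); // false — the listener name does not match

// The fix is mechanical but has to be caught in review: listen for the
// event name the producer really dispatches.
producer.addEventListener("someValueChanged", () => { handled = true; });
producer.dispatchEvent(new Event("someValueChanged"));
console.log(handled); // true
```

Nothing errors when the names disagree; the listener just silently never fires, which is why this class of bug survives until manual review.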
Coding Result
Dimension: Value
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxFRZoTv9S4WMNC0qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwf3YI1gQ5M9-RrMR94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxU7Lmh8a6Cr51-67R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz2OqO_EBC_Wv-1dgl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxUgOZnQgVMm3zJog54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFew5jk3OQBdiwo1J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwQH6jFycSJ5B_y3l14AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzz43cUpyI-1vpYWgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1mPQInigOxtKyetN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzwZUuRhFDCyVXid5x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]