Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The journeyman or master electrician doesn't solve the problems caused by the ap…" (`ytr_UgyJHfcYY…`)
- "yeh that's a nice sentiment and all, and I agree, but it's also irrelevent. Beca…" (`ytc_UgxDdMfcE…`)
- "Who's going to buy all these robotic and AI goods and services if nobody has job…" (`ytc_UgyJyknBN…`)
- "you're right. But it's like saying a car can't race. Technically you're right bu…" (`ytr_Ugzw4LG7x…`)
- "AI if not properly regulated will most always be misused by bad individuals that…" (`ytc_UgzsAlS5q…`)
- "Man i wish ai was still like this.... if i asked ai nowadays it would 100% give …" (`ytc_UgzUmRGXf…`)
- "yes its propaganda but actually read the journal article, its free - they explai…" (`rdc_f9ea5cv`)
- "All these guys are warning us after they built the AI. They knew this might happ…" (`ytc_Ugwmdo2f3…`)
Comment
80% accuracy isn't good enough. If the LLM SWE is set free on a large codebase, that 80% accuracy rate wrecks everything.
Not to mention, LLM-generated code is often just badly written and implemented. It will patch issues versus solving the issues, most times.
Yes, it is possible to increase speed of output with good planning and design. But for it to work, the model must be babysat.
Also, models often completely ignore instructions, so automation at any kind of scale is extremely risky.
LLMs are cool and have their niche uses, but as a replacer of competent humans...no way.
Platform: youtube · Video: AI Jobs · Posted: 2026-02-26T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx09uuVgn2i12SbWI14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw0i1-J_M992mamd2x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxAE4a1r2vSCMTW8YB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwxsKasPmsaodW8Cf14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGOVqQ5fiaHosfn494AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw_bs2RBnYwu4RvtOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxhhqMR-qsCp5u8DNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJ3ikTo36tdVl15Sx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQGvrMNYckuSTq_Id4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzmVAkRR_b9w9oStyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
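The inspection flow above (parse the raw model output, then pull the coding for a given comment ID) can be sketched in Python. This is a minimal illustration, not the tool's actual implementation; the IDs and field values are copied from the raw response shown, trimmed to two records for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (records copied verbatim from the response above, trimmed to two).
raw = '''[
{"id":"ytc_UgxJ3ikTo36tdVl15Sx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQGvrMNYckuSTq_Id4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Index the codings by comment ID so lookup is a dict access.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown above.
coding = codings["ytc_UgxJ3ikTo36tdVl15Sx4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → developer outrage
```

Indexing by ID once up front keeps repeated lookups cheap even when the response array covers a large batch of comments.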