Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
0:00 🤨 Author is skeptical of AI hype but aims to objectively test coding tools in 2026.
0:19 📈 Notices respected developers like Linus Torvalds finding AI tools useful.
0:32 🎯 Goal is to honestly evaluate if AI tools make a meaningful difference over time.
0:50 💡 Believes understanding software concepts is now more important than ever.
1:03 🔍 Shares learnings after giving AI coding (OpenAI Codex agent) a fair chance.
1:13 ⚙️ Starts test with a greenfield project (BJJ gym management app) using Spring Boot & React Native.
1:41 😅 Admits many projects fail due to time spent on basics like auth; hopes AI can help.
2:29 ❓ Agent asks clarifying questions about security (JWT) and database (Postgres).
2:48 ⏳ Compares waiting for AI generation to waiting for Java compilation as a junior dev.
3:08 ✅ Backend scaffolding is good, follows standard Spring Boot patterns (Controllers, Services, Repositories).
3:40 ⚠️ Notes AI doesn't truly understand code; it's a black box guessing word combinations.
4:24 ❌ Test fails: Agent creates a custom status endpoint instead of suggesting Spring Actuator.
5:02 🔄 Rejects AI's overcomplicated solutions for adding a deployment timestamp.
5:41 ⚠️ Main takeaway: Using AI without understanding the code creates maintenance debt.
6:08 📱 React Native scaffolding is worse: a single 1000-line file with duplicated code.
7:22 🔧 Spends much time refactoring and explicitly prompting for reusable components and UI fixes.
7:50 🏁 After ~20 hours, has a usable app, concluding that bad software is better than no software.
8:22 💭 A duct-taped app shipped fast beats perfect architecture that takes months.
8:41 👨‍💻 Developers aren't going anywhere; knowledge is more crucial than ever.
8:59 💸 Questions financial scalability: current cost is low, but future pricing could change the calculus.
youtube AI Jobs 2026-01-25T13:0…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | indifference
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzjuolDWDfeCeLcifl4AaABAg", "responsibility":"user",      "reasoning":"virtue",           "policy":"none",      "emotion":"approval"},
  {"id":"ytc_UgyyY5y-N_QFaQgvloZ4AaABAg", "responsibility":"none",      "reasoning":"consequentialist", "policy":"none",      "emotion":"indifference"},
  {"id":"ytc_UgyTYvhCheaAkdP_Dx14AaABAg", "responsibility":"none",      "reasoning":"consequentialist", "policy":"none",      "emotion":"indifference"},
  {"id":"ytc_Ugy5ubYiisFHT7J4dhp4AaABAg", "responsibility":"ai_itself", "reasoning":"deontological",    "policy":"liability", "emotion":"outrage"},
  {"id":"ytc_Ugwr7Dqewl-aPKP0FfV4AaABAg", "responsibility":"none",      "reasoning":"consequentialist", "policy":"none",      "emotion":"resignation"},
  {"id":"ytc_Ugx-CfyipgtvRlr8mHJ4AaABAg", "responsibility":"none",      "reasoning":"virtue",           "policy":"none",      "emotion":"approval"},
  {"id":"ytc_UgyMQ-TzNLOMPfbXSAd4AaABAg", "responsibility":"company",   "reasoning":"consequentialist", "policy":"regulate",  "emotion":"indifference"},
  {"id":"ytc_Ugz0xuMPJpAiqCTjTQ54AaABAg", "responsibility":"user",      "reasoning":"virtue",           "policy":"none",      "emotion":"approval"},
  {"id":"ytc_Ugw_6B9Lt5jSxVnbT0p4AaABAg", "responsibility":"none",      "reasoning":"consequentialist", "policy":"none",      "emotion":"resignation"},
  {"id":"ytc_UgwAmozXCx0fOArs1bl4AaABAg", "responsibility":"none",      "reasoning":"virtue",           "policy":"none",      "emotion":"approval"}
]
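A raw response like the one above can be parsed and validated before it is trusted as coded data. The sketch below is a minimal example, not part of the original tool: the `SCHEMA` value sets are an assumption inferred only from the labels visible in this export, and `parse_codes` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension (assumed from the values seen in this export;
# the real codebook may define more categories).
SCHEMA = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "outrage", "resignation"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    and reject any record whose dimension value is outside the schema."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: invalid {dim!r} value {rec.get(dim)!r}")
    return records

# Example with one record (hypothetical id):
raw = '[{"id":"ytc_example","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}]'
codes = parse_codes(raw)
print(codes[0]["emotion"])  # → approval
```

Validating against a closed value set catches the common failure mode where the model invents a label outside the codebook, so bad records fail loudly instead of silently entering the coded dataset.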