Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Unfortunately us milenials too! We will work with Ai: I already made code with i…" (ytc_UgxQBYZj7…)
- "@jicalzad Point is, by current data even current FSD is safer than average driv…" (ytr_Ugzwx2fJq…)
- "So it's fine to (and what should we do if): 1. Use an AI Synth where I use it a…" (ytc_UgzQr5_fL…)
- "How funny is that we invented automation tools to make our life more slacking, y…" (ytc_UgwkQe-kl…)
- "I may sound very utopic but truth is until & unless female body stops becoming t…" (ytc_UgzakRMod…)
- "The thing is, AI has to become intelligent, but also needs to figure out how to…" (ytc_UgxF4bXUc…)
- "fact, due to the elements and the major lack of a comfy life most women looked o…" (ytc_UgxZBhvCz…)
- "At first i thought the ai still generate the kids because they are not a real hu…" (ytc_Ugw67tQsG…)
Comment
1. Builder ai is not an example of an AI lie. It is an example of an Indian lie. I worked with them. Basically this is the way they work and exist: they create a story, promise god knows what, and behind the scenes there are dozens of underpaid Indians with huge pressure and abuse put on them. Also no qualifications.
2. AI is used incorrectly most of the time. Additionally, it performs chaotically. Sometimes it does great and works wonders. Another time a simple task will make it go bonkers and it will destroy your code and everything you were working on together before.
And that is with Claude. I can't imagine what happens with other LLMs. So if someone is senior and really knows what he is doing, it might be better to just do it yourself.
I myself would never be able to do what I do without AI. But I also spend a lot of time revisiting modules, API endpoints, and the frontend, optimising them and making them more robust and safe. It takes a lot of time, but for me it is the only way. If someone is an expert, he can do it correctly the first time.
youtube · AI Jobs · 2026-02-04T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxIr8x1or4wyEj49Rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzbWBrNxSqyk-Y1fqh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxEEt0y_E2xD5VAuQJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhUairUyeNsMEm0xR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5bBMSHjjYTTvm0_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwAW4iF1J9ttU76rQt4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxwQ8hHZnW2oKRstb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzPY8lSNcC9HHtgXqx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0FCydsu0BmmFSdu54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8iU6MGqts646CNlR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
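A raw response like the one above can be parsed and indexed to support the "look up by comment ID" view. The sketch below is a minimal, hypothetical implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself, the sample records are copied from the response, and the `index_codings` helper is an assumption, not part of the actual tool.

```python
import json

# Two records copied verbatim from the raw response above (sample only).
RAW = '''[
{"id":"ytc_UgxIr8x1or4wyEj49Rt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx5bBMSHjjYTTvm0_t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

# The four coding dimensions plus the comment ID; inferred from the JSON shown.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse the model output and index records by comment ID,
    rejecting records that are missing any expected field."""
    by_id = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing fields {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(RAW)
print(codings["ytc_Ugx5bBMSHjjYTTvm0_t4AaABAg"]["emotion"])  # outrage
```

Indexing by ID makes the per-comment lookup a single dictionary access, and the field check surfaces malformed model output before it reaches the coding table.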