Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Imagine if there was an automated way to drive vehicles on a fixed route that ca…
ytc_UgwgV4Xnj…
I think the Bible was written by humans and AI was created by Humans so neither …
ytc_UgxddEk-3…
I see only one solution to the deepfake problem: make deepfake porn of the polit…
ytc_UgxMKzkzs…
It's no different if u have a smudge tool or not it's still ur hard working art …
ytc_UgyV5F7P9…
Humans are so stupid. They help build things that will make them useless lol how…
ytc_Ugy_D0RaH…
@MichaelFlynn0 I know.
The problem is I've just come out of years of postgradua…
ytr_UgzPmOhaX…
the ai bro voice is killing me.... Appreciate the humor to help with such a SAD …
ytc_UgyqWwq2C…
like honestly who tf is this guy? why give him any credibility at all if he does…
ytc_UgwbxB-00…
Comment
There are fairly easy fixes that can be made to make a sustainable project if you utilize the rules/memory function:
1. Tell it the basic structure of the app
2. Tell it the architecture style you want it to follow
3. Tell it that every code change needs a corresponding test with 100% coverage. This step allows you to make much more confident refactors because it prevents accidental deletions.
4. Every time it does something weird, undo the changes and tell it to store to its rules to never do that thing again. After some time, you start to forget you had to tell it not to do things and then you can start moving those rules from project to project.
5. Whenever you and the ai both get stuck going down rabbit holes that finally land on a solution, tell it to store what it learned so it skips the rabbit holes in the future.
I can’t tell if any of the commenters are looking for a solution or just wanting to believe AI is not as good as it is, but I thought I’d add this for anyone looking for a solution.
youtube
AI Jobs
2026-01-19T19:3…
♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
  {"id":"ytc_Ugyu06ySTxitcSgmJS94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwveSlcyP4HVPPIZqh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzBy7HWZxd7nVUF1xN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBJNJ4j95F_4YY1Xl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwpBRuULCP1DPmgjQt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzIh8QgxDm1TBRrD3N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyCu4_WSNNzeIZ4HnJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyV1N5OLaGqJCd3XbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwP9rbjjP9UjEOm0_d4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5ePGK3L5PhLY1y_d4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]