Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "@syckles I think deep down this worries woman. Many of them only have their look…" (ytr_Ugxs7ommF…)
- "I think Americans should start sabotaging AI at there jobs. It's crazy how some …" (ytr_UgxDRDs8j…)
- "i dont think a business can afford to let everything in hands of AI is too risky…" (ytc_Ugz3XDwwJ…)
- "The elite aren't worried that AI will brainwash the public, they're worried that…" (ytc_UgyvIzE1F…)
- "Trump is basically showing America what happens when Republicans actually get to…" (rdc_e2vwb5g)
- "Sorry, but I don't think the best facial recognition tech in the world can't eve…" (ytc_UgyxYo3B3…)
- "When you build AI with no connection or empathy twords humanity, you will build …" (ytc_Ugx1NuFiV…)
- "Ive done unspeakable things to your ai bot after you pin of shamed me for wantin…" (ytc_UgwR_iZ5V…)
Comment
7:25 As a developer, I can assure you that “speed of code written” is not a valuable metric in of itself.
Has the code been properly tested? If an AI was used to generate those tests, were the tests manually reviewed thoroughly? Ensuring that it didn’t make a suite full of garbage tests, just so that it can get close to a 100% pass rate. A multiplier of this makes work more challenging, not easy.
Is it maintainable? Did it make a bunch of unnecessary abstractions that make it difficult to understand? Again, a multiplier of this makes the work more challenging, not easy.
Did it generate code using outdated dependencies that have been subjected to numerous security issues? This multiplies the number of security risks, which again, makes the work more challenging.
I understand that the speed of delivery is important. However, if you develop at a speed where you disregard these concerns, they’ll come back to bite you eventually and cause greater harm than if you took the correct precautions in the first place.
Source: youtube · Posted: 2025-10-27T08:5… · ♥ 820
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw9yujWssnrywnZcXJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxh8UEbT1CtNj7vq-N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyPkZplPKE0rQ7b5pB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwW06K4He0e1VQgadB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzAaW2TVQVKa5ofJhh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxmz7yXi0kr3VFSLWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFL-KcjQghJlYtPi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"confusion"},
  {"id":"ytc_UgxjTdeP2iGmudLH_Bd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwGMbi2bh-d1riqd0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx67HtkbcDRngm0zjp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
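The raw LLM response is a flat JSON array with one object per coded comment, keyed by `id` and carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed and indexed to support a "look up by comment ID" view — the variable names and the two-entry sample payload are illustrative, not part of the tool:

```python
import json

# Sample payload in the same shape as the raw LLM response above:
# a JSON array of coding objects, one per comment ID.
raw_response = """
[
  {"id": "ytc_Ugx67HtkbcDRngm0zjp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxjTdeP2iGmudLH_Bd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"}
]
"""

# Parse the array, then index every coding object by its comment ID
# so a single coding can be retrieved directly.
codings = json.loads(raw_response)
by_id = {entry["id"]: entry for entry in codings}

coding = by_id["ytc_Ugx67HtkbcDRngm0zjp4AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # → developer deontological
```

Indexing by ID up front keeps each lookup O(1), which matters when a coding run covers thousands of comments rather than the ten shown in this sample.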