Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Oh dear... It looks it is going to be a tool for talentless or lazy, who, by usi…" (ytc_Ugw14QnuN…)
- "The man is deeply involved in works of darkness and he is a worshipper of the fo…" (ytc_Ugw02b6PR…)
- "I made chatgpt tell me step by step instructions on how to make meth, but it kep…" (ytr_UgwXr_KTR…)
- "⚠️That’s now. People are failing to realize 100 to 200 years from now the job l…" (ytc_UgwGzbv9W…)
- "AI is the egg and robotics is the chicken. Sam fails to account for the advances…" (ytc_UgxgCWSW-…)
- "U better watch out. Dont u people see, that robot didnt just punch him. He punch…" (ytc_UgyFq7WOo…)
- "Anyone know what app or program this is, I too would like converse and frighten …" (ytc_UgwnNTfl5…)
- "oh no, the people producing garbage and slop will have another tool to produce m…" (ytc_UgxjzH_lP…)
Comment
Actually it makes sense that context engineering is the optimal way to use LLMs for coding: specifying and designing the right software for the job are the two most critical steps in the process of writing software, and these are the hotspots an LLM can't help with. It can only suggest the most generic code that reflects the level of abstraction it manages to deduce/infer from the data in your prompt, and if it can't deduce right, it "hallucinates". A S.M.A.R.T. set of contexts to structure the LLM agent coding process sounds all right...

Only experienced programmers and software engineers possess this acquired ability to think in terms of technical abstraction; they know from experience what is doable, interesting, or plain crazy when it comes to coding. And you become an experienced coder by... coding, so you can shed a bit of your ignorance before you start becoming a productive coder.
youtube · AI Jobs · 2026-02-16T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxyhx3fvlybv6u8Ejl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzqb1F9GuqzqiNyIxZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz5_EZKGyZdpQb8_EN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwjB8EpPS8He733kwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxHMX9Tosz6oYZ-2wV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzylKRYTSm9jA_Dy3Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyk1Ie-SDSCIDO-_TB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzqmCR-4Ywp4frWKqJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwM-XBY5eyEywAD1kN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgygwQ8ZG_C70IIk_n94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
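The "Look up by comment ID" view above can be sketched in a few lines of Python. This is a minimal illustration, assuming the raw response is always a well-formed JSON array like the one shown; real model output may need validation or repair before parsing. The excerpt below reuses two rows from the response above.

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array in which
# each element codes one comment along four dimensions.
raw_response = """
[
  {"id": "ytc_UgxHMX9Tosz6oYZ-2wV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwjB8EpPS8He733kwl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

# Index rows by comment ID so a single coded comment can be fetched
# directly, as in the lookup-by-ID view.
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_UgwjB8EpPS8He733kwl4AaABAg"]
print(row["policy"])   # ban
print(row["emotion"])  # outrage
```

Building the index once and reusing it keeps each lookup O(1), which matters when a coding run spans thousands of comments.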