Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples:

- "Bro Will die in the next few days in a mysterious Way. The AI needed him away s…" (ytc_UgyOgGPaG…)
- "3:43 The difference is that it's applying to *creative* jobs, hence, the proble…" (ytc_UgyJtBQDC…)
- "The greatest danger of AI is not AI itself but those greedy and unconscious coo…" (ytc_UgxnGeEsZ…)
- "Who cares?! AI is going to destroy humanity one way or another. Might as well sp…" (ytc_UgyAQvjGg…)
- "All this will create is a PG 13 AI model and an R18 model. Both will be accessib…" (ytc_Ugxykh8hG…)
- "they lied to you. Has nothing to do with AI. AT&T with the help of Tata Consulti…" (ytc_UgzP2lkpH…)
- "I just watched this man conversationally berate this AI for almost 20 minutes an…" (ytc_UgwsQA-jf…)
- "We understand your concern! The interaction between AI and humans can indeed fee…" (ytr_Ugwv3udC_…)
Comment (source: youtube · video: "AI Jobs" · posted 2026-01-19T20:3… · ♥ 1)

> Here is someone somewhat "unbiased" on the subject. Is your choice of prompt examples a bit dishonest?
> If I wanted to test an LLM's coding capabilities, I would start a conversation about my requirements, clarifying all questions and having the first goal to be a design document (usually requirements.md >>> plan.md).
> Imagine leaving the model in such an ambiguous state regarding security...
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzyjBbrFmMNmzJYiLl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxa9Qbis4VJAgmcH7R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxKnbuj7ahmXBVlpUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzkM6NK45uTa-UexIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzZc9aUsbXCPMwFHq54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTCtctJxpZ_tDOkE54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyrNyNa_qELwsiIqLh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzCWsUldBmWDlxiEJF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxU8O_38vJkGxIgZY14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTK9gAZJNvomP4bcF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
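The raw response above is a JSON array of per-comment codes, so the by-ID lookup is just a matter of parsing it and indexing on the `id` field. A minimal sketch in Python, using two entries copied from the response shown (the variable names are illustrative, not from the tool itself):

```python
import json

# Raw LLM response: a JSON array of per-comment codes, as shown above.
raw = """
[
  {"id": "ytc_UgzkM6NK45uTa-UexIh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxa9Qbis4VJAgmcH7R4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
"""

# Index the codes by comment ID for constant-time lookup.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the comment coded in the detail view above.
entry = codes_by_id["ytc_Ugxa9Qbis4VJAgmcH7R4AaABAg"]
print(entry["responsibility"], entry["emotion"])  # user mixed
```

The printed codes match the "Coding Result" table for that comment (responsibility: user, emotion: mixed).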