Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is someone somewhat "unbiased" on the subject. Is your choice of prompt examples a bit dishonest? If I wanted to test an LLM's coding capabilities, I would start a conversation about my requirements, clarifying all questions and having the first goal to be a design document (usually requirements.md >>> plan.md). Imagine leaving the model in such an ambiguous state regarding security...
youtube AI Jobs 2026-01-19T20:3… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzyjBbrFmMNmzJYiLl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxa9Qbis4VJAgmcH7R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxKnbuj7ahmXBVlpUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzkM6NK45uTa-UexIh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzZc9aUsbXCPMwFHq54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwTCtctJxpZ_tDOkE54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyrNyNa_qELwsiIqLh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzCWsUldBmWDlxiEJF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxU8O_38vJkGxIgZY14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTK9gAZJNvomP4bcF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
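The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch (standard-library Python only; the variable and function names are illustrative, not part of any tool shown here) of how such a response can be parsed to recover the coding for one comment — here the id whose row matches the Coding Result table above:

```python
import json

# Abbreviated raw LLM response, in the same shape as the full array above.
raw_response = '''[
  {"id": "ytc_Ugxa9Qbis4VJAgmcH7R4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyrNyNa_qELwsiIqLh4AaABAg",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "liability", "emotion": "outrage"}
]'''

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw coding response and index it by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_Ugxa9Qbis4VJAgmcH7R4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# prints: user deontological none mixed
```

Indexing by id makes it easy to join the model's codings back onto the original comments, and a `KeyError` on lookup flags any comment the model silently dropped from its response.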