Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It has been established that AI will give wrong information and AI devs simply call it hallucinating and not a bug. It’s also been established in 2024 that code generated by AI is flawed and needs to be reviewed by a person first. So yeah this is not a surprise to me at all.
youtube AI Jobs 2026-02-06T00:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugx6YGvSW-kJPE073-Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxgWBHm4MgGnhsSNWV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-Ivia_oVh9daKqYZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwufL0DvV18pvopv1t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmK9Cy5AxmBpoNmAp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgwYUivfjpNzUo3lO1t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQMCU3R0wfEDznfZB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxAdylIraue7XujKxV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxx9dMfPGHOSBfDoTx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw7DFFuBoAUg4EzSMN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]