Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hallucinations can be minimised during the training phase. They occur because older training methods trained AI only on what the correct response looks like. The AI would then abstract this information into a general behaviour that could produce a correct response in a wider range of cases at inference time (during actual real-life runtime, out in the field). What AI researchers today realise is that you also need to teach the AI what a negative response looks like (since it is not always just the inverse of a positive response). Otherwise, just like humans, it will simply guess, which is what hallucinations are.

I said this method reduces hallucinations because, even with the above, just like humans, there will be cases where it does not know the answer or there is no clear right or wrong response. In these new or modern training methods, the AI's response will not be to guess but to admit that it does not know the answer or cannot decide what the optimal response should be. It is then left to the human to decide, or to give the AI the option to guess.

So humans cannot, and should never, hand over critical decision-making where the consequences of failure are great and where accountability is required. It seems very obvious, but it is that simple. Humans are accountable to others because, at the base level, we have empathy and guilt that weigh on us when we fuck up. AI has no guilt and zero empathy and will slit your throat as easily as washing your dishes.
YouTube · AI Responsibility · 2025-10-09T10:2…
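The abstention behaviour the comment describes can be made concrete with a small decision rule. The sketch below is illustrative only, not taken from the video or any named training method: assuming a grading scheme where a correct answer scores +1, a wrong answer scores -penalty, and "I don't know" scores 0, a score-maximising model should abstain whenever its confidence is too low. The function names and the default penalty of 1.0 are assumptions for illustration.

    # Illustrative sketch (assumed scoring scheme, not from the source):
    # reward correct answers (+1), penalise wrong ones (-penalty), and
    # score abstention as 0. A model maximising expected score should
    # then abstain at low confidence -- the "admit it does not know"
    # behaviour the comment describes.

    def expected_score(confidence: float, penalty: float = 1.0) -> float:
        """Expected score of answering with the given confidence."""
        return confidence * 1.0 - (1.0 - confidence) * penalty

    def should_answer(confidence: float, penalty: float = 1.0) -> bool:
        """Answer only if guessing beats the abstention score of 0."""
        return expected_score(confidence, penalty) > 0.0

    if __name__ == "__main__":
        for c in (0.3, 0.5, 0.7, 0.9):
            action = "answer" if should_answer(c) else "abstain"
            print(f"confidence={c:.1f}  expected={expected_score(c):+.1f}  -> {action}")

With penalty w, guessing beats abstaining only when confidence exceeds w / (1 + w), so the harsher the penalty for a wrong answer, the more the model is pushed toward admitting uncertainty rather than guessing.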
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz1ulVObLo5Xhbz3SR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzzUcdq3a1terGtASZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxJfTYuhvOkkLnAy-J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyghzBSdDeI4iCJP0d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxYSgiBhIt6kTdBJqh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwRmMShOC72uyDiA8l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzd-jEKd3XziUP2spJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugxh4LeGFnacbZ87qIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwoZoTxamBlWTTOMY94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyukMN_cEaSERNxIyl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"} ]