Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Hallucinations can be minimised during the training phase. They occurred because older training methods trained the AI only on what the correct response is. The AI would then abstract this information to get a general behaviour that could provide a correct response in a wider range of cases at inference time (during actual real-life runtime, out in the field).
What AI researchers today realise is that you also need to teach the AI what a negative response looks like (since it is not always just the inverse of a positive response).
Otherwise, just like humans, it will simply guess, which is what hallucinations are.
I said this method reduces hallucinations because, even with the above, just like humans, there will be cases where it does not know, or where there is no clear right or wrong response.
In these modern training methods, the AI's response will not be to guess but to admit that it does not know the answer or cannot decide what the optimal response should be.
It will then be left to the human to decide, or to give the AI the option to guess.
So humans can not, and should never, hand over critical decision-making where the consequences of failure are great and where accountability is required. It seems very obvious, but it is that simple.
Humans are accountable to others because at the base level we have empathy and guilt that weigh on us when we fuck up.
AI has no guilt and zero empathy and will slit your throat as easily as washing your dishes.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Responsibility |
| Posted | 2025-10-09T10:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz1ulVObLo5Xhbz3SR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzzUcdq3a1terGtASZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxJfTYuhvOkkLnAy-J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyghzBSdDeI4iCJP0d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYSgiBhIt6kTdBJqh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwRmMShOC72uyDiA8l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzd-jEKd3XziUP2spJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxh4LeGFnacbZ87qIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwoZoTxamBlWTTOMY94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyukMN_cEaSERNxIyl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
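As a sketch of how the "look up by comment ID" view above could work, the raw LLM response is a JSON array of per-comment codings, so a lookup just means parsing it and indexing the rows by `id`. The snippet below is a minimal illustration, not the tool's actual implementation; the field names come from the response shown above, while the function name and the two sample rows are assumptions for the example:

```python
import json

# Example raw LLM response: a JSON array of per-comment codings,
# shaped like the response shown above (two rows copied from it).
raw_response = """
[
  {"id": "ytc_UgxJfTYuhvOkkLnAy-J4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz1ulVObLo5Xhbz3SR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the coding rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

# Look up the coding for one comment by its ID.
codings = index_codings(raw_response)
coding = codings["ytc_UgxJfTYuhvOkkLnAy-J4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → developer indifference
```

In practice the raw response would be read from wherever each batch is stored rather than inlined, but the ID-keyed dictionary is all the lookup view needs.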