Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
They get really close to discussing something I've been thinking a lot about wit…
ytc_UgyfqT-dD…
We have to do it or China will take us over. I don't like AI but if we don't rea…
ytc_UgxdI2P_3…
What is scary ia that it is taking world experts so long to realise the huge fak…
ytc_Ugwnn2aiM…
We are so naïve as a species to think we are going to be the ones deciding perso…
ytc_UgyJOTc9w…
The terminator didn't seem to need charging up and I've never seen a computer th…
ytr_Ugw0gAPgG…
Can you explain how this is different from your art being entirely based on the …
ytc_UgwkgSder…
@SusCalvin I'm not talking about books. Books are a small part of the industry. …
ytr_UgwSnuTQk…
Not to say this is fake or something but when I talked about abortion to chatgpt…
ytc_UgzPJQGvB…
Comment
LLMs also give wrong answers due to post-training. In post-training, humans provide neural networks with a set of questions in which an answer is always available. As a result, LLMs are not exposed to null responses in the data. Once human trainers begin presenting LLMs with questions where the correct answer is “I don’t know,” the models start responding with “I don’t know.”
youtube
AI Moral Status
2026-03-01T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugz8K7gIffnKEMKSnNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyHhli5R6UqJ0qsfTJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgyL797_M71m5hQW-PN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugyillgr3oYJn_d_FnV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugz5Juih4UDG8Yij1MN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzUqHajhQLOQu10Pr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwjLJk5tZcfPpq5q7N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy9S4Kpf-J-OMVdrWd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugy9avnzUN7G8NPX67t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugx8zuQBCFBUGuXyjcJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}]
```
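The raw response above is a plain JSON array, one object per coded comment, so it can be parsed directly and indexed by comment ID for lookup, or tallied per dimension. The sketch below is illustrative only (the two rows are copied verbatim from the response above; the lookup and tallying logic are not part of the actual pipeline):

```python
import json
from collections import Counter

# Two rows copied verbatim from the raw LLM response above; in practice
# the full array string would come straight from the model output.
raw = ('[{"id":"ytc_Ugz8K7gIffnKEMKSnNB4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
       '{"id":"ytc_Ugy9S4Kpf-J-OMVdrWd4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')

rows = json.loads(raw)

# Index codings by comment ID, matching the "look up by comment ID" workflow.
by_id = {row["id"]: row for row in rows}
print(by_id["ytc_Ugy9S4Kpf-J-OMVdrWd4AaABAg"]["emotion"])  # outrage

# Tally one coding dimension across the batch.
emotion_counts = Counter(row["emotion"] for row in rows)
print(emotion_counts)
```

Batch-level counts like `emotion_counts` are what would feed any aggregate summary of the coded dimensions.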