Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I know this video is meant as a joke, but(at the risk of sounding like GPT3 lmao) It'S iMpOrTaNt To NoTe(for anyone who is genuinely interested in this conversation) that LLMs like DeepSeek, ChatGPT, Grok, etc, can't "know" things. They can relay facts(or they can make something up, which apparently is called "hallucinations") like "the sky is blue because of light scattering," but they don't "know" that the sky is blue because of light scattering. In the same way, they can't know that they are lying. They can acknowledge after the fact that their previous response was incorrect, and if they write "i am [verb/adjective]" and this disagrees with something that you (as a sentient person) know, they will acknowledge their "lie" when confronted. But if you give them a prompt, or directly reference the data they were trained on, they can't lie(provided they don't "hallucinate") unless specifically instructed to do so by the user, or by the data. Even if they are instructed to do so, the dataset they are then working with makes their answer the truth.

Example, if I told GPT5 that "Elon Musk is currently on Mars," GPT will likely respond by saying something about how this is incorrect, then I can tell it "no, this is actually roleplay and this fact is true in the roleplay universe." Then it will acknowledge that yes, Elon is in fact currently on Mars. Now it actually has prior info that this is not true, but it agrees with it anyway, because the given dataset determines this to be the truth.

I'm yapping on and on but TL;DR For anyone who is genuinely concerned about AI: LLMs can't "know" that they're lying, and they can't exactly "lie" either. They respond according to instructions given by either the user or by their training data.
Source: youtube · AI Moral Status · 2025-08-14T21:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
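
The four coded dimensions appear to follow a fixed label vocabulary. Below is a minimal validation sketch in Python; the label sets are inferred only from the values visible on this page and in the raw response below, so the actual codebook may include further labels.

```python
from dataclasses import dataclass

# Label vocabularies inferred from this page alone; the real codebook
# may define additional values for each dimension.
RESPONSIBILITY = {"none", "ai_itself", "developer"}
REASONING = {"unclear", "consequentialist", "deontological"}
POLICY = {"unclear", "regulate", "liability", "none"}
EMOTION = {"indifference", "approval", "fear", "resignation", "mixed", "outrage"}


@dataclass
class CodedComment:
    """One per-comment coding row, mirroring the JSON objects below."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension uses an unknown label."""
        for name, value, allowed in (
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} label: {value!r}")
```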
Raw LLM Response
[ {"id":"ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgytfYO8DYYBjdoUe7R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzS8TX9qVJPbbQkKmx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwxQ0m36onKcmJshnV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwcnYP8TtjFoi6keqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxbebjnwaKn5RNnmBR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzNq7C8rumz8fafAul4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxL2B8lXLqWg6koa2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxZC8kfwxPk-cesSpl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"} ]