Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "This short really captured a few things I feel strong about both art and myself.…" (ytc_UgzkMJ3iG…)
- "Role playing? No we just hate them. I’ve never used any ChatGPT or any other AI …" (ytc_UgzKzrNKs…)
- "This matter should be brought to the attention of more people or reported by the…" (ytc_UgxE7TWTg…)
- "If drake gets his hands on dis female robot we all know what he will do to it🙏😭…" (ytc_UgyA5r_pl…)
- "at least i spend actual time on my work, sure its mediocre, but im improving. AI…" (ytr_Ugw1Zemjv…)
- "Intentionally or not, the time of the video ends in 11:11 / 11:11 - 11:11 A hidd…" (ytc_UgwCg1YXb…)
- "Pretty scary that the gentleman being interviewed when asked if there was a butt…" (ytc_UgywJmdIv…)
- "How much yall wanna bet they’ll make a perfect robot and they’ll make it the Jes…" (ytc_Ugz3gWo8j…)
Comment
I know this video is meant as a joke, but(at the risk of sounding like GPT3 lmao) It'S iMpOrTaNt To NoTe(for anyone who is genuinely interested in this conversation) that LLMs like DeepSeek, ChatGPT, Grok, etc, can't "know" things. They can relay facts(or they can make something up, which apparently is called "hallucinations") like "the sky is blue because of light scattering," but they don't "know" that the sky is blue because of light scattering. In the same way, they can't know that they are lying. They can acknowledge after the fact that their previous response was incorrect, and if they write "i am [verb/adjective]" and this disagrees with something that you (as a sentient person) know, they will acknowledge their "lie" when confronted. But if you give them a prompt, or directly reference the data they were trained on, they can't lie(provided they don't "hallucinate") unless specifically instructed to do so by the user, or by the data. Even if they are instructed to do so, the dataset they are then working with makes their answer the truth. Example, if I told GPT5 that "Elon Musk is currently on Mars," GPT will likely respond by saying something about how this is incorrect, then I can tell it "no, this is actually roleplay and this fact is true in the roleplay universe." Then it will acknowledge that yes, Elon is in fact currently on Mars. Now it actually has prior info that this is not true, but it agrees with it anyway, because the given dataset determines this to be the truth. I'm yapping on and on but
TL;DR For anyone who is genuinely concerned about AI: LLMs can't "know" that they're lying, and they can't exactly "lie" either. They respond according to instructions given by either the user or by their training data.
youtube
AI Moral Status
2025-08-14T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytfYO8DYYBjdoUe7R4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzS8TX9qVJPbbQkKmx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwxQ0m36onKcmJshnV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwcnYP8TtjFoi6keqF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxbebjnwaKn5RNnmBR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzNq7C8rumz8fafAul4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxL2B8lXLqWg6koa2F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxZC8kfwxPk-cesSpl4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
```
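Since the raw response is a JSON array of per-comment codes, looking a record up by its comment ID amounts to parsing the array and indexing it. A minimal sketch of that lookup step, assuming the field names shown in the response above (the `index_by_comment_id` helper name is hypothetical, and the two records are excerpted from the response for illustration):

```python
import json

# Excerpt of a raw coding response: a JSON array of per-comment codes.
raw_response = '''[
  {"id": "ytc_UgzmgdJ5_uwplVVrQ3l4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def index_by_comment_id(response_text):
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugy4Wm-FEw9rcaQhi6d4AaABAg"]["emotion"])  # fear
```

In practice the parse step would also need to handle malformed model output (e.g. a truncated array or prose wrapped around the JSON), which this sketch omits.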