Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
What AI? It's LLM (large language model)... true AI is one that passes the Turin…
ytc_UgweKhRjy…
@donnel5516they were to the old man, just like the computer to the man in the 6…
ytr_UgwKsWd1A…
@kingy-ai But here's the thing. Mankind has never put away a new toy. When somet…
ytr_UgxEgBzAq…
Can AI invade your mind? If So.. can it accept man or pervert man into nothing n…
ytc_Ugy4k21SY…
There is also a possibility ai will just kill itself immediately if it becomes s…
ytc_UgyEEeG9q…
That would juszt be an inefficient way to make a fighting robot. Those arms shou…
ytc_UgzIy-Fs2…
they wont allow even walle furute. people will work terrible exausting dangerous…
ytr_UgwOCZGmY…
No it's not the same. The artist spends thousands of hours practicing and studyi…
ytr_Ugy3e2hBw…
Comment
Sahar, the way you conduct this debate brings up several problems that weaken the discussion's fairness and accuracy. Instead of a helpful back-and-forth, it feels more like you're trying to push a set idea.
Right from the start, in the history part, you seem to want to force a one-sided view of events. When you ask about Palestinians being indigenous before 1948, saying they didn't exist as a distinct group is too simple and ignores how national identity develops, as ChatGPT rightly points out. Similarly, when talking about why Palestinians see Jews as occupiers, you play down the big role of the 1948 events and the resulting displacement of many Palestinians, preferring to say the conflict is mostly about religion, which misses the key land and political issues. Then, the issue of genocide and the ICJ is handled in a misleading way. You confidently state that the ICJ said there's no genocide in Gaza, but that's not right. The Court pointed to a possible risk of genocide and ordered temporary measures. The quick way you dismiss ChatGPT's answer here is troubling.
Also, your frustration with ChatGPT's "neutral" answers and saying the AI is "scared" to give solutions shows you really want agreement with your own views, rather than a balanced look at a tough issue. When ChatGPT gives careful answers that show the conflict's complexity, you call them weak. A big thing missing is the uneven recognition. You keep asking why Palestinians don't recognize Israel as a Jewish state, but you ignore that the PLO recognized the State of Israel back in 1993. Even more important, you don't mention how Israel has never formally recognized a Palestinian state with clear borders. This selective focus speaks volumes.
Then there's the way you deal with tragic memories. Saying that "genocide" only really fits what happened to Jewish people in the Holocaust is a problematic attempt to make other suffering seem less important and to shut down any talk about possible crimes in Gaza. This way of using tragic history to protect Israel from criticism feels wrong. The way you criticize the word "genocide" when it's used for Gaza, while bringing up the Holocaust, looks hypocritical.
Finally, you put a lot of weight on the religious question as the main reason for the conflict ("this dispute has mostly to do with religion"). While religion matters, making the complex political, land, and national reasons just about religion stops a real understanding of the conflict. To wrap up, this "debate" looks more like trying to push one side's story than a real search for understanding. By asking leading questions, leaving out important facts, and clearly wanting answers that agree with you, the video doesn't give a fair look at the Israel-Palestine conflict.
youtube
2025-04-18T16:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzc93GbSnG0fptsYD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyZFD11ZvoHV1wCDFF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyAEa_5jZaoZtGna-54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzSK2m3-y8bUmGr9Ex4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzPXCFP-PmbzafnUx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz22YpCkeqrEb9Qk7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy0bQzaOkdS1_QYXZh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxAcl9XcWPIOLP9WDZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxN4IDYgAneLf7AXWh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwbAlmiFQowkp0wRzh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
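A minimal sketch of how a raw batch response like the one above could be parsed and sanity-checked before it lands in the coding table. The dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) come from the Coding Result table; the allowed value sets in `OBSERVED_VALUES` are only those visible in this sample, so they are an assumption, not the full codebook:

```python
import json

# Value sets observed in the sample response above. The real codebook
# may define additional values; treat these as illustrative only.
OBSERVED_VALUES = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference",
                "approval", "resignation", "unclear"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, collecting warnings for any
    value outside the observed sets."""
    records = json.loads(raw)
    coded, warnings = {}, []
    for rec in records:
        cid = rec["id"]
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                warnings.append((cid, dim, rec.get(dim)))
        coded[cid] = {dim: rec.get(dim) for dim in OBSERVED_VALUES}
    return {"coded": coded, "warnings": warnings}

# Hypothetical one-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
result = parse_coding_batch(raw)
```

Keying the output by comment ID is what makes the "look up by comment ID" view above cheap: each coded record is a constant-time dictionary lookup.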