Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples

| Comment preview | Comment ID |
|---|---|
| It's 2026.1.27 and ai really helps us write every codes, what sam said would be … | ytc_UgyQQ3djz… |
| History is ugly but we don't have to be. We have this thing called progress wher… | ytc_UgzG_qiyV… |
| If Ai kills a person what is it’s punishment? Nothing! What if Ai decides to use… | ytc_Ugy47QXYl… |
| Haha, fair enough! See you in 27 days—unless you change your mind sooner. Take c… | ytc_Ugyv7C65e… |
| Well, yes.. just like how jewelry is valuable based on its provenance. Humans li… | ytr_UgyY1EbiL… |
| AI will make a better and cheaper YouTube video as the “content creators” are si… | ytr_UgwX_D8zB… |
| Come on over to the federal government because it'll take them 50 years to do co… | ytc_UgzewfBAy… |
| I will ask the "father of ai" how he can have a clear concious" if he agrees its… | ytc_UgxfoSj70… |
Comment
No, they are not thinking. If an LLM was truly thinking we would be witnessing the Intelligence Explosion, where an LLM would be able to create another, more advanced AI, which would create an even more advanced A.I. And extremely soon every single possible mystery in math and physics and the entire universe would be solved. This clearly has not happened in FOUR years of having LLMs. So, NO, they do NOT think.
youtube · AI Moral Status · 2026-03-12T18:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
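
The table above is one record in a fixed coding schema. As a rough sketch of that schema (field names mirror the table, example values are taken from the raw response below; the class itself is an illustrative assumption, not the project's actual definition):

```python
from dataclasses import dataclass

# Illustrative sketch of the coding schema implied by the table above.
# Field names mirror the table; the example value sets are inferred from the
# raw responses below and may not be exhaustive.
@dataclass
class CommentCoding:
    comment_id: str
    responsibility: str  # e.g. "developer", "company", "user", "ai_itself", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "ban", "liability", "industry_self", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "resignation", "approval", "indifference"
    coded_at: str        # ISO-8601 timestamp
```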
Raw LLM Response
```json
[
  {"id":"ytc_Ugyp_WEQk4EQLsy69rB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykPeDiPliCkKTeKjR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-hT_Sht2YEMasglh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxTBq6PAwifQBd-X7p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxOq2NvQ8QPoKhuXw14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyDJJ26yFhuNGwdb7p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2eHcAOUOhOlDnb4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyz2ZcL7Rh1bLI5za94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy8qX7eXqBtj4uuvTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxljXyF8_sAcYkv6kV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
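
Because the raw response is a JSON array keyed by comment ID, looking up a single comment's coding and re-rendering the dimension table is straightforward. A minimal sketch, assuming the response parses as valid JSON; the function names and the one-record sample payload are illustrative only, not the project's actual code:

```python
import json

# Sample payload: the first record from the raw response above, used here
# only to make the sketch self-contained.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugyp_WEQk4EQLsy69rB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and map each comment ID to its coded record."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

def format_coding(record: dict) -> str:
    """Render one record as a Dimension/Value table like the one shown above."""
    rows = ["| Dimension | Value |", "|---|---|"]
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        rows.append(f"| {dimension.capitalize()} | {record.get(dimension, 'unclear')} |")
    return "\n".join(rows)

codings = index_by_comment_id(RAW_RESPONSE)
print(format_coding(codings["ytc_Ugyp_WEQk4EQLsy69rB4AaABAg"]))
```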