Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytr_UgyhD2Ar8… — "Right about what? It was clearly stated that the info the ai used was biased mea…"
- ytc_UgwEVTinA… — "AI is *_not_* a silver bullet. Don't entrust your eyes to an AI doctor. Above al…"
- ytc_UgyB44mLV… — "How do you deal with war crimes committed by autonomous drones? The owning count…"
- ytc_UgyZtFPTb… — "Band ai completely / If ai can't value life or better the life of the user. / Consci…"
- ytc_UgwgGf3GA… — "You can do RIGHT NOW with chatgpt, copilot Gemini or deepseek, it's not some obs…"
- ytc_UgxpQvDzy… — "This why doctors constantly study. So, I am not surprised because ChatGPT sits …"
- ytc_UgzUNyLDn… — "What Elon is saying about Larry Page is alarming. Unfortunately, I believe the t…"
- ytr_UgxoNa13g… — "@angelicloli9381 It doesn't. AI generates completely new images. As Sam explaine…"
Comment
I really like Hank, but this episode wasn't it. A whole lot of nonsense was taken at face value even though I MOSTLY agree with the overall premise that actual super human AI would be very bad for the species, LLMs ain't it, but it sure does sell books (and get investors investing) to say the LLM is plotting to prevent it's own shutdown rather than the obvious of the LLM regurgitating one of the millions of AI stories it ingested about trying to prevent it's shutdown.
LLMs are stochastic parrots, not even "reasoning" has stood the test of time and has been pretty conclusively proven through white papers to be bullshit.
youtube · AI Moral Status · 2026-01-16T15:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxoBKYHDOlWXw2ukhl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8PsmMoMXCDsLLgpN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnM7NPE-qPsMeK-fZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzQehEonz8RsHB83Fp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyeki81sRIbPn6AJZh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYNq3S3jmPlxF6O0V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4-BKGrUI2xFRkQj14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzWvNBxMQ_SS3fX_-94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlT3GJxYFK42tj9fF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlmMjeDWzJMao02OZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]