Raw LLM Responses
Inspect the exact model output for any coded comment: look one up directly by its comment ID, or pick one of the random samples below. (A minimal lookup sketch in code follows the sample list.)
- Hey, at least it looked like ChatGPT had finally learned how to use a vacuum cle… (ytc_UgyF0nmzn…)
- I have tested the limits of the AI camera :D & gladly never got caught. Come ove… (ytc_UgwuxL8EY…)
- 0:30 I think the boy had some pre-existing problems. I play character ai almost … (ytc_UgyzH7QWd…)
- There is a reason that the saying "Safety code is written in blood" and we will … (ytc_Ugwfg6xTN…)
- Automated manufacturing. Any foreign company that builds a plant in the US will… (ytc_UgzVh9Rpa…)
- I don't believe this has any significance. Without LLMs they would have develope… (ytc_UgyXpusOM…)
- I keep saying this — why aren’t people organizing? What are we all waiting for, … (rdc_o4hao8v)
- Wow it’s almost like people don’t say Ai will replace artists, just the artists … (ytr_Ugz6Bd5h9…)
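To replicate the lookup outside the UI, a minimal sketch is below. It assumes the coded records are stored one JSON object per line in a file named `coded_comments.jsonl`; both the path and the storage layout are assumptions, not a description of the actual backend.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for comment_id, or None if it is absent.

    Assumes one JSON object per line, each carrying an "id" field
    (a hypothetical store layout, not the tool's actual backend).
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if not line.strip():
                continue  # tolerate blank lines in the JSONL file
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Full IDs are required; the truncated IDs in the sample list above
# will not match. This ID is taken from the raw response shown below.
print(lookup_comment("ytc_Ugz0siumGK2Szqinj4x4AaABAg"))
```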
Comment
> What a load of fear dressed up as caution.
> There is no straight-line graph that shows AGI will kill humanity.
> There’s no modelling, there’s no scientific argument to be made.
> P(doomers) are always making up numbers.
> Is there a possibility the AI kills everyone? Yes — but we could also die from an asteroid hit tomorrow.
> We don’t have models for the probability of any of these existential threats.
> But there is only one existential threat that has anything resembling an upside.
> Any claim that it will most definitely kill us is wild speculation, and is no better than the wildly optimistic Utopians.
> Neither have any real methodology to support their predictions — they’re about as reliable as Madame Zelda and her tea-reading booth.
youtube · AI Governance · 2025-12-04T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
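The coding result is a flat record over four closed-vocabulary dimensions plus a timestamp. A minimal sketch of that record type follows; the value sets are restricted to the labels actually visible on this page, so treat them as an illustrative assumption rather than the authoritative codebook.

```python
from dataclasses import dataclass

# Value sets list only the labels visible on this page;
# the real codebook may define additional categories.
RESPONSIBILITY = {"none", "ai_itself", "company"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "liability"}
EMOTION = {"fear", "resignation", "mixed", "outrage", "indifference", "approval"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

    def is_valid(self) -> bool:
        """Check each dimension against its known value set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```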
Raw LLM Response
[
{"id":"ytc_UgwfuJldpu13N5yIjgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwQtPiShjExt_Mm1Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzd-yLBfi9WMHa4g0p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwnLFQDQY47429ZVih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyENebu1tHpusFjJtd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0siumGK2Szqinj4x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDEYCalO3RoZEuB_J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdMVHYcOIlKj399Gl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsQj-ugSeNf558p_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw-wjLHTVGqJ0RSS-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
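The raw response is a single JSON array covering the whole batch, one object per comment. A defensive parsing sketch, assuming only the shape shown above: model output can drop fields or emit non-objects, so records are filtered rather than trusted.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw batch response, keeping only well-formed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    # Keep only dict entries that carry every required field.
    return [r for r in records
            if isinstance(r, dict) and REQUIRED_KEYS <= r.keys()]
```

Value-level checks (for example, that `emotion` is one of the known labels) could reuse the sets from the schema sketch above.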