Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "..WHAT THE HECK..THIS IS BYOND *WOWW* ! Can be helpfull, or danger..Any Way its …" (ytc_Ugwf-qTtv…)
- "Slightly off topic but has anyone seen the tiktoks that ask you what was your da…" (ytc_Ugy4Nvtas…)
- "Imagine.... There is an industry that is over 100 years old. And you want a rob…" (ytc_UgzAkBrmK…)
- "@gagewalker770if a robot can do it ,you are gonna be fired too New world order…" (ytr_Ugwd7qjdn…)
- "its weird because just last year chatgpt achieved much higher scores on bar exam…" (ytr_UgxCoVeGl…)
- "I can understand why some might find interactions with AI a bit unsettling! The …" (ytr_UgwnNZcOe…)
- "This video was made three weeks ago and it’s already outdated. Have you seen the…" (ytc_Ugz-Lknqi…)
- "Ignorant and to much time and technology humans,destroying human life so people …" (ytc_Ugz8HvUdX…)
Comment
Control Ai will fail. Capitalism, techgnosis, teleology, and chains of suspicion are a volatile cocktail of motivations and why this research will continue despite our best efforts to halt or slow it down.
The volatility stems from how these motivations interact and amplify each other. Capitalism’s competitive pressure pushes for speed for profit. Techgnosis provides the lofty, often unhinged, spiritual and apocalyptic visions of salvation. Teleology frames the quest as inevitable. That the progression from simple tools to complex machines and, eventually, to ASI is a natural evolution. Finally chains of suspicion (from Liu’s 3body problem) ensure a relentless, paranoid race dynamic based on “if I don’t do it, someone else will”, that tolerates no braking. The combination creates a self-reinforcing, chaotic dynamic that makes slowing or stopping ASI research all but impossible.
So even if we were a rational species, which we demonstrably are not, we won’t be able to stop what’s coming. Buckle up my friends.
youtube · AI Governance · 2025-10-23T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxBB3Xqmff18XUSgkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw0Ev9XTwmXmnS0KzR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxJyWuFstWjp9vxZ7x4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz7YCXFAtUQR2Ayiph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwUQT32-k_p3Z01Fl54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```