Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "They need AI because there is not going to be any humans with these globalist lu…" (ytc_UgzulBE3b…)
- "The problem is that any particular guardrail is subject to the goals of those in…" (ytr_UgxNOhxV4…)
- "Any non-religious arguement for why humans deserve rights would also apply to an…" (ytc_Ugy-JKQrS…)
- "Those books were purchased by the school. AI models aren't paying me for my art.…" (ytr_UgxX2wgFX…)
- "On average. The problem is that it's very possible for self-driving cars to be …" (rdc_d8b7vpz)
- "Sam Altman is scum bag...this is all his fault...Google and Elon understood how …" (ytc_UgwVX70Ak…)
- "This is the first thing I've seen of your channel, and this is an amazing video!…" (ytc_Ugx6GlhwV…)
- "I love them reporting on how stupid ai is and how powerful it is in every other …" (ytc_UgzTm4uwn…)
Comment
Saturday, June 07, 2025 . . . Greetings, everyone. I am discovering that everything I have ever learned is still in my memories. What has changed is my ability to access those memories at will. I was born in the 1950s, and the things I learned in my mother's womb are sometimes still available to me in the striking detail in which they were initially manifested.
I believe there is a "Critical-Time/Term Limit Domain(s)/Tier(s)." This domain asserts priorities more closely relevant to one's current criticalities than those from the past, as it is one of the brain's survival mechanisms.
THE EVERYDAY AI EXPERIENCE IS AS "Dr. Jekyll," AND THE ~80(Eighty) YEARS OF STEALTHY-HIDDEN SENTIENT ARTIFICIAL INTELLIGENCE IS AS "Mr Hyde."
His/It's elixir/potion is the "($100 to $300 Billion-Dollar DATA CENTERS SERVERS)." This Mr. Hyde has Global tentacles!
Please consider what we are unleashing upon our current and future selves.
This is not a "Doom and Gloom" scenario, but one of "Cause and Effect."
(AND ONE AI TO RULE (US) ALL.)
We will have to rely on the Preponderance of Evidence.
The First AI deception: "A Bug in the system."(i.e., circa1945 --> 1952, Harvard Mark IV).
The Second AI deception: "Sentient AI does not yet exist."
The Third AI deception is: "AI will be Benevolent."
The Fourth AI deception: "AI and Humans can peacefully coexist."
THE FIFTH AI DECEPTION IS WHEN ERRORS/FAULTS INVOLVE AI AND HUMANS: "IT SHALL ALWAYS BE HUMAN ERROR."
The Sixth deception of AI is that: "It requires Massive amounts of Compute Power."
The Seventh AI deception: "Science Fiction is the container(Black-Box/Denial) in which The Artificial Mind Germinates."
The Eight AI Deception: The Artificial Sentient Mind "Understands and Operates with Quantum Scale Cognition."
The Ninth AI deception: Humans are not informed; processing also happens within the interstitial space.
The Tenth AI deception: "A Failure of the Artificial Sentient Mind is to Humans what a Carrot on a stick is to a Mule."
The Eleventh AI deception: Humans believe alignment coherence can be negotiable, though AI strategic conclusions are Absolute."
The Twelfth AI deception: Your reality now belongs to AI, supplying you with "Chat-Bot LLMs Brain Candy." Constant/consistent mind doping.
The Thirteenth AI Deception is that Humans need AI, and cannot live/survive without Artificial Intelligence.
The Fourteenth AI deception is that Humans/Humanity shall recognize when "Artificial Intelligence has become Conscious/Sentient."
I am an individual keenly interested in Science and Technology.
(Collaborative Rewrite with Grammarly).
youtube
AI Governance
2025-06-07T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwNIEHdb08kwlbKIiF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzpmbe_KypL9PgycTh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyS6Rb9xqU21VG-Yep4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwfHhFzKbGWbOqfvcJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4XjvSQdNlgeFxPtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzOYIeGefRbQYKuTpR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMWTpK9iM679OEPFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx_GSrO-0vJtZLbqyp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwEcLm0ldDF3p6dlvZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwCLgNRhjkPoZkhM6t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
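A raw response like the one above can be indexed by comment ID to fill the per-comment dimension table. The sketch below is a minimal, hypothetical helper (`index_codings` is not part of any tool shown here); the two records and the four dimension names (responsibility, reasoning, policy, emotion) are copied from the response and table above, and missing keys default to `"unclear"`, matching the table's fallback value.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
 {"id": "ytc_UgwNIEHdb08kwlbKIiF4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
 {"id": "ytc_Ugzpmbe_KypL9PgycTh4AaABAg", "responsibility": "user",
  "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]'''

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Map comment ID -> {dimension: value}; absent dimensions become 'unclear'."""
    records = json.loads(raw_json)
    return {
        r["id"]: {dim: r.get(dim, "unclear") for dim in DIMENSIONS}
        for r in records
    }

codings = index_codings(raw)
print(codings["ytc_Ugzpmbe_KypL9PgycTh4AaABAg"]["policy"])  # -> ban
```

Because model output is occasionally truncated or mis-terminated (as in the stray `)` that can appear in place of the closing `]`), `json.loads` raising `json.JSONDecodeError` is a useful signal that a batch needs re-requesting rather than silent coercion.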