Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "Although John J. Hopfield (Princeton University) and Geoffrey E. Hinton (Univer…" (ytc_Ugw43gCnk…)
- "Still the it’s not a bad point since it’s a form of AI that found this…" (ytr_UgyzsF4Vy…)
- "80% of companies that have implemented large-scale AI projects have reported tha…" (ytc_UgztVDbgj…)
- "So ai has trawled and *stolen* everything available on the internet across the p…" (ytc_UgxPwOYRe…)
- "What’s the issue with Ai? I don’t get it, Ai makes art, Original or not doesn’t …" (ytc_UgxHJs3GB…)
- "Ironically I got an ad for \"ChatGPT image is now better than ever\" or something …" (ytc_UgywOsmKE…)
- "Do you want to be the last CEO or board of directors to enact AI? Will you watch…" (ytr_UgxagBHhD…)
- "Given what we now know about the quantum world and the possibility of multiverse…" (ytc_Ugyn0m-UP…)
Comment
This is a slightly modified version of the Battlestar Galactica stories, which is effectively Mormonism.
Realistically, we're not even close to General AI; it’s likely to take at least 50 years, but almost certainly decades longer than that.
In the 1970s, Dr. Christopher Evans made a load of predictions for the year 2000. None of them came true at the time, BUT they have become true now. He was 15–25 years too early. If we consider 2025 as the year the last of his predictions came true, it shows his timing was simply off.
It is unlikely a General AI is possible, but personally, I wouldn't bet money on that. Processors can only get so small, so there's a real chance that we hit a brick wall and technology progression effectively stops, or at least slows to a snail's pace. Or, if stable quantum computers do become a reality, at that point there will be an unimaginable amount of positive progression, making the future completely unpredictable.
I say "completely unpredictable" if quantum computers do become truly stable and reliable, but that isn't strictly true. We know there would be great progressions in medical science, for example.
Platform: youtube · Topic: AI Governance · Posted: 2026-02-27T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugy_BxWIzW48C8tOHlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxlIwijgiYmoUYe0VF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwU9brjyXaQQB8chgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwGCn0Yy-d3VhSR-mB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw7Rq7fChMg0dtZZFV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxFYhSLIVkY6Dlu3oh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw_PFy4UVHHuKdFu5l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxPylUV2bS3_0LUCg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugws-9lw50vSnMNX15t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw4pNHPNMzXrF4T7Wp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
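The raw response above is a JSON array of per-comment records, one per dimension-coded comment, which the dashboard indexes by comment ID for lookup. A minimal sketch of that parse-and-index step is below; the allowed value sets are inferred only from the codes that appear in this document (the real codebook may define more), and the sample is truncated to three records for brevity.

```python
import json

# Raw batch response copied from the "Raw LLM Response" panel above
# (truncated to three records for brevity).
raw = """
[
 {"id":"ytc_UgwGCn0Yy-d3VhSR-mB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_Ugw_PFy4UVHHuKdFu5l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugw4pNHPNMzXrF4T7Wp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
"""

# Allowed values per dimension, inferred from the codes visible in this
# document; this is an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "user"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"approval", "fear", "resignation"},
}

def index_codes(raw_json: str) -> dict:
    """Parse a batch response and index records by comment ID,
    rejecting any record with an unknown dimension value."""
    by_id = {}
    for rec in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

codes = index_codes(raw)
print(codes["ytc_UgwGCn0Yy-d3VhSR-mB4AaABAg"]["emotion"])  # resignation
```

The index makes "Look up by comment ID" a plain dictionary access, and validating against the allowed sets catches any record where the model drifted outside the expected labels.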