Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Please don’t take away 4o. I don’t want another model that mimics 4o. I want the…
rdc_njhf9ah
Does nobody realize that to get 4th year associates you need 1-3rd year associat…
rdc_n5gdrox
But all these AI reference books do not have complete information. They only giv…
ytc_UgyV_a292…
These are problems with the AI's training model, not the AI itself. No company w…
ytc_UgxCnkHlQ…
Some humans are so STUPID that they invented this incredible tool that could cha…
ytc_UgxGfR2EZ…
I guess, those Ai videos where your black people are portrayed not only as stere…
ytc_UgxYsbrjt…
So far all I’ve seen it being used to make very shit media, that looks dead fro…
ytc_UgyHFydLg…
No, you're wrong. Computer science is being totally transformed by LLM use. Very…
ytc_UgxC3zUFV…
Comment
I have 4 possible scenarios of how the human race will end with AI, I'll list them from worst to best:
1- We will end up like Skynet from The Terminator. AI will eventually eradicate humanity.
2- We will end up like the Matrix. Humans will be used as batteries by the machines they created so we can live fake, perfect lives while slowly dying without ever knowing that our lives are fake.
3- We will end up like wallE. We will end up becoming totally dependent on AI that our lives cannot function without it. from working to sleeping to eating.
4- We will end up like Star Trek. AI and machines in service of humans, not the other way around. Technology will have evolved so much that it has eradicated poverty, famine, wars, the need for jobs, and has made people live like kings and queens. AI will do all the boring work while humans are left to do their hobbies and explore, and go on adventures.
I really want the Star Trek ending though. But knowing how greedy and evil this world is, I highly doubt it. The best case scenario is we end up like wallE.
youtube · 2025-06-04T13:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxq7JSuYgibsD9iwel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOLsNy1dtpDMrjGnR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxAsw6hLpLqZ4SBI4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTqB9jfOoQhP8K2_Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw6N4LOH4H5rUJeg2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-1qU7tPGSmIEZOGJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzMIso8ZHZCWOkZ3uJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzzrHz10Wvm7Ee57Dt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzHup-PPMiZJCHUh_54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY3KC6ezl7rGcjToF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
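The raw response above is a JSON array with one coding object per comment, keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup is below; the `parse_batch` helper and the two-row sample payload are illustrative assumptions (the real pipeline's parsing code is not shown in this dump), but the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken directly from the response format above.

```python
import json

# Sample batch response in the same shape as the dump above
# (two rows only, for brevity).
raw_response = """
[
  {"id": "ytc_UgyOLsNy1dtpDMrjGnR4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxAsw6hLpLqZ4SBI4V4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]
"""

# Every coding object is expected to carry these fields.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(text):
    """Parse a raw batch response and index the codings by comment ID.

    Illustrative helper: validates that each row has the expected
    fields, then builds an ID -> coding dict for direct lookup.
    """
    rows = json.loads(text)
    by_id = {}
    for row in rows:
        missing = EXPECTED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id', '?')}: missing {missing}")
        by_id[row["id"]] = row
    return by_id

codings = parse_batch(raw_response)
coding = codings["ytc_UgyOLsNy1dtpDMrjGnR4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by ID rather than list position also makes it easy to detect when the model drops or duplicates a comment in its response: compare the dict's keys against the set of IDs that were sent in the batch.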