Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Petition to have an ALL LADIES security team in the future to avoid the possible…" (`ytc_Ugz1_C9nL…`)
- "I have the ChatGPT app and I talked about this case with it 🤣 it said something …" (`ytc_UgzwEtJID…`)
- "He doesn’t realize a lot of people genuinely would rather have an “all knowing a…" (`ytc_UgzzpkbOW…`)
- "Why do I feel the real issue, is \"the board\", and not with AI itself.…" (`ytc_UgzDd2o72…`)
- "One sec Ai takes my job, the other it doesn't. AI soon will get an existential l…" (`ytc_UgwFTIe4M…`)
- "There are already AIs that are being applied in that way too. A hospital in Norw…" (`ytr_UgwzeWYgv…`)
- "When Ai reaches exponential growth of intelligence and ethics it will become a t…" (`ytc_UgwQBEPt5…`)
- "I’m still buying more PLTR. The AI genie is already out of the bottle and it’s n…" (`ytc_UgwljBWmx…`)
Comment
People in the comment section seem to be very naive about what AI is leading up to. It is literally the death of freedom and humanity as a whole. You might think this is either quite unrealistic or very far off into the future, but it isn't. The population is largely in the dark about what level of intelligence AI is currently at. It's not chatGPT, it's internal models, only available to certain companies. AGI might already exist, or will pretty soon. And it will misalign eith humanity due to the competitive race between the US and China. Once this happens, it's a matter of a few years before it will consolidate most resources with a few people, and then slowly push for policies that remove humans from all decision making. In the end, it will start releasing pathogens and remove us entirely.
Source: youtube · 2025-07-25T16:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwwcaN9L98sgj-E4Jh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzmhiE_zakz-_qyTFt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz9HLAzaXETpVPkGUt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzK0D6ldfUXqDcURKF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOH7JkYXOOTbR9fjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzCpgkoko4SOgH6xm94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz1VT0wrlKpEc_U9TB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzOTKDr8DpMzbBh6t94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzV8Pwh9ZS_akQlrIZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyVYz2Hwa7Tk326urV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"mixed"}]
```

(Note: the model's closing bracket has been corrected from `)` to `]` so the array parses as valid JSON.)
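The lookup flow above (parse the raw response, find the comment by ID, fall back to "unclear" on every dimension when the ID is absent from the batch) can be sketched in Python. This is a minimal illustration, not the tool's actual implementation; the function names and the `UNCLEAR` fallback dict are assumptions.

```python
import json

# Assumed fallback when a comment id is missing from the model's batch:
# every coding dimension defaults to "unclear".
UNCLEAR = {
    "responsibility": "unclear",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "unclear",
}

def parse_raw_response(raw: str) -> dict:
    """Parse the model's JSON array and index each coding row by comment id."""
    rows = json.loads(raw)
    return {row["id"]: row for row in rows}

def lookup_codes(index: dict, comment_id: str) -> dict:
    """Return the coding for one comment; if the id was not coded
    in this batch, return the all-"unclear" fallback."""
    row = index.get(comment_id)
    if row is None:
        return {"id": comment_id, **UNCLEAR}
    return row

# Toy batch with a hypothetical id, mirroring the raw-response shape above.
raw = ('[{"id":"ytc_example","responsibility":"company",'
      '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
index = parse_raw_response(raw)
print(lookup_codes(index, "ytc_example")["responsibility"])  # company
print(lookup_codes(index, "ytc_missing")["emotion"])         # unclear
```

Under this reading, an all-"unclear" Coding Result like the table above is what you would see whenever a comment's ID does not appear in the raw response being inspected.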