Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Does anyone know who the “who” is that is supposed to figure out a way to keep c…" — `ytc_UgzNE3b93…`
- "nah if you had sam's skills someone will use your work without your consent as w…" — `ytr_Ugw4IO0if…`
- "Doing some nerdy tech related things doesn’t preclude you from being anti techno…" — `ytr_Ugx07SSz0…`
- "its a good pitch, but all i hear is were going to focus less on studying and tea…" — `ytc_Ugzy_HU-6…`
- "The thing is, the IA can do so much things, there is so many tools that LLMs and…" — `ytc_UgzKP_uko…`
- "Real Americans can't allow again Americans government to rape and abuse native A…" — `ytc_UgyrLg6RS…`
- "When the AI replaces most jobs with robots there's not gonna be 8 billion useles…" — `ytc_UgxgTVigd…`
- "good grief. get back to me when these ai agents can get addicted to drugs. why d…" — `ytc_Ugzqm2PFY…`
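The IDs in the sample list are truncated for display, but a record can still be resolved by prefix match. A minimal sketch, assuming coded records are held in memory as a list of dicts with an `id` field (a hypothetical layout, not necessarily the tool's internal storage):

```python
def lookup_by_id(records: list[dict], id_prefix: str) -> list[dict]:
    """Return every coded record whose comment ID starts with the
    (possibly truncated) prefix shown in the sample list."""
    return [r for r in records if r.get("id", "").startswith(id_prefix)]
```

For example, `lookup_by_id(records, "ytc_UgyQtFmW")` would match the first record in the raw LLM response shown at the bottom of this page.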
Comment
I understand a scientist's excitement about their work. But they often dream of unrealistic things, based on some ideal scenario, that lives only in their heads.
The current reality of AI is, that for the most part - those are just delusional (more often than not) bots, that can solve only limited tasks.
To keep it short - humans have features, given by God that no machine or technology will be able to replicate ever.
So, I'm not worried about AI. But I am worried about the Idiocracy, that AI will probably push the world to. People will start to rely on not very smart AI bots and will become dumb, lazy and maybe even more evil. AI from such a perspective looks like a powerful push of the human race to the entropy (in many ways) and that will introduce a lot of chaos into the world.
youtube · Cross-Cultural · 2025-09-30T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyQtFmWreaMBi9vhU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz-DxTjvM0W06nx_LJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzC75vws_MrpvHLO-Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxuMRAJbwGFpUF7d-t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxrNeWkopmIO3xTOLp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvJ1hEMeyc1_O-kP94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxrqa4h7sHlLPTRJyJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJDI4gLfKuoWYFzlt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwvaaMVX5EO7FkpDf54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwjyzai-PENCI4TJAJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
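A raw response like the one above is only usable if every record carries an in-schema label on all four dimensions. A minimal validation sketch, with allowed values inferred from the labels visible in this sample (the tool's full code book may define more; this is an assumption, not its actual schema):

```python
import json

# Allowed values per dimension, inferred from this page's sample output
# (assumed; the real code book may be larger).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself", "government"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "ban", "regulate", "industry_self", "liability"},
    "emotion": {"resignation", "indifference", "approval", "outrage",
                "mixed", "fear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and return one problem entry for every
    dimension value that falls outside the known schema."""
    problems = []
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                problems.append({"id": rec.get("id"),
                                 "dimension": dim,
                                 "value": rec.get(dim)})
    return problems
```

Running this over the response above would return an empty list; a record with, say, a misspelled emotion label would surface as a problem entry keyed by its comment ID.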