Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "What positive use of generative AI? It can't identify cancer cells cause those a…" (ytr_UgyV7flln…)
- "it's all a test, we're the AI, we're the ones being tested to see if we're good…" (ytc_Ugxuc_VPu…)
- "I think the problem we live in this depressed era are those damn phones. This is…" (ytc_UgxvNBArO…)
- "What about lawyers? I disagree.. my friend is general council at a corporate. Th…" (ytc_UgxZ7scld…)
- "The difference is that companies like Ebay and Yahoo were already profitable dur…" (rdc_nc1kwhp)
- "Heyo! Legally blind traditional artist here! AI "art" is a dang parasite, and my…" (ytc_UgwdhZI5y…)
- "No risk for the construction trades. As they say around here, the wheel turns 😅…" (ytr_Ugyf3CcO9…)
- "Hold on, are the shots of the guy narrating this video not AI generated? Is it j…" (ytc_UgyBtUZ7_…)
Comment
What a joke!! People who have been working with AI on real problems are getting more and more and more disillusioned with it (LLMs. Machine learning still is favorably viewed). This is to the point where some have just stopped trying to use it, perhaps occasionally looking at new models using their real world cases, but then coming away even more frustrated. This will not change. AI is trained on noisy, stochastic data sets, that are for the most part not well curated, suffer severe recency bias, and are incomplete. For many real-world problems, AI is a priori not useful, as there is not enough data to train it or the problems require contemporary and sometimes ongoing experiments, that are costly, but in any sense take real people and real time to do the work. So the only doom and gloom we’re going to get will be from naive people using AI to create a mess.
youtube · Cross-Cultural · 2025-09-27T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyAowKP6cP4gz2IVoF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyl9Tj_WkeAJ_KZstV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwifFFdxvnKJfg4TUN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxBkiBsLwbwNq5u1D94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxWtVaEXgdqAJcUuod4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2CeF0s2twebsxoJJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgylCm-tTnbLhWtTWOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7mivhUrrxxglCrmd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxvrC1lEkz1KOvHfvl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy9rSzZ6_AipzcrgfZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
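The raw response above is a JSON array of per-comment coding records, one object per comment ID. A minimal sketch of how such output could be parsed and indexed to support the "look up by comment ID" view (the field names and the two sample records are taken from the response shown; everything else is illustrative):

```python
import json

# Raw model output, shaped like the JSON array above: one record per
# coded comment, keyed by the comment's platform ID.
raw = """[
  {"id": "ytc_UgyAowKP6cP4gz2IVoF4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy2CeF0s2twebsxoJJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index records by comment ID so any coded comment can be fetched directly.
by_id = {rec["id"]: rec for rec in records}

code = by_id["ytc_Ugy2CeF0s2twebsxoJJ4AaABAg"]
print(code["emotion"])  # outrage
```

Building the dict once makes each subsequent ID lookup O(1), which matters if the inspection view is queried repeatedly against a large batch of coded comments.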