Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
This is the reason why AI is destroying people, not helping people. I really hop…
ytc_Ugw2FLGye…
It’s not just these events. I went to a comicon in Canada a couple weeks ago and…
ytc_UgxihMhnJ…
God, this poor world of ours. The war-craziness of the declining Western empires…
ytc_UgzJM2PVE…
Waymo is horrible need humans, robots cannot replaced humans. Horrible idea just…
ytc_Ugy--e2c1…
To make matters more interesting: Arguing that AI "learns just like a human" onl…
ytr_UgydAQwrK…
Large language models are designed to build an individual personality, based on …
ytc_UgwNEqKkZ…
Letting AI do jobs on a large scale would collapse the economy. Who's going to p…
ytc_Ugy1jEDlg…
LLMs don't just learn what's in the data. Training on data creates general algor…
ytr_Ugxvu2_TP…
Comment
A degree is too high risk to fail. Its unfortunate but most professors don't put as much effort and knowledge into the students. We go to classrooms to understand things in a way that a book, internet, or AI can't but the professors are not that knowledgeable or willing to put in that much effort. So students get frustrated, get worried about failing out with an abudnace of student debt.
youtube
2025-08-01T17:0…
♥ 95
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
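A coding result like the one above can be checked against the codebook before it is stored. The sketch below validates one record; the allowed value sets are an assumption inferred only from the codes visible on this page (the full codebook may define more values):

```python
# Value sets observed in this page's samples; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"government", "developer", "company", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"outrage", "indifference", "mixed", "approval", "fear",
                "resignation"},
}

def validate(coding: dict) -> list:
    """Return the dimensions whose value falls outside the known set."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding shown in the table above passes cleanly.
record = {"responsibility": "government", "reasoning": "deontological",
          "policy": "regulate", "emotion": "resignation"}
print(validate(record))  # []
```

Returning the offending dimensions (rather than a bare boolean) makes it easy to log exactly which field of a model response drifted off-schema.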
Raw LLM Response
[
{"id":"ytc_Ugy9_i5J1q-clbMUcrl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx6IZEQUVDKx3mKXvl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgykR-61GwbBaSKeeEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgySVXVYXsoNNSK57JJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxqsxf2nMax4IjZNjt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2B82tHnHECCujhPN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugw2Sys4riim0cMcxUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzCYeDV3pOD1joJg594AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxTrgPKziuOsYI2Dtx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylYCwYq_JMVazG4154AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
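The "look up by comment ID" view above amounts to indexing the model's JSON array by `id`. A minimal sketch, using two rows copied from the response above (in practice the raw string would come from the model API or a stored log, not a literal):

```python
import json

# Raw model output: a JSON array of coded comments, one object per comment.
raw = '''[
  {"id": "ytc_Ugy2B82tHnHECCujhPN4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugw2Sys4riim0cMcxUx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

# Index the parsed rows by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["ytc_Ugy2B82tHnHECCujhPN4AaABAg"]
print(coding["policy"])  # regulate
```

Because `json.loads` preserves each object's keys, the same index also recovers the full coded record (all four dimensions) for any sampled comment.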