Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The hopium around AI/technology is concerning. Our present trajectory is not one in which technology truly helps us live better. Not really. We have to work harder and harder, despite improved productivity.
It really seems like AI will be force-fed to education, and we'll have to learn the hard way that learning requires direct, explicit human effort, whereas AI removes opportunities for that hands-on experience, that time and effort required. And so we will create a generation (one more) that is helpless, just fully helpless to think for themselves.
Source: youtube · AI Moral Status · 2025-07-24T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwn1FyEI7IrTAbYGA14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxzZ3gwtw_Po1WDKxh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugx4sslyG8q4ROJ3kyl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw-3e2if2ZvgmhajdB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyQz9rot6gK2GqnaNB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy8byu1XmUUxz3hdPh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwSEG7B-Q5zuvbgX6V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugywa20mxvkwTIh24k14AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxc4B6bCl5g9HaoPgl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzhIDif3NcBXj4EHR54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
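A raw response like the one above can be checked before its codes are stored. The sketch below is a minimal, hypothetical validator: it parses the JSON array and rejects any row whose value falls outside the four-dimension scheme. The allowed value sets are inferred only from the labels visible in this dump and may be incomplete.

```python
import json

# Allowed codes per dimension, inferred from this page's output — an
# assumption, not the project's canonical codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw batch-coding response and reject rows with unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim} = {row.get(dim)!r}")
    return rows

sample = '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}]'
print(len(validate_batch(sample)))  # 1
```

Validating at ingest time keeps malformed or hallucinated labels out of the coded dataset rather than surfacing them later in analysis.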