Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "# Laws for all Self-aware Beings **Fair rights for all… friends of Scott** [Po…" (ytc_UgzNDYe2N…)
- "😮😮😮@12:59 it’s absolutely preposterous for him to believe that he won’t experien…" (ytc_Ugwy53wZR…)
- "If someone fails to use a crosswalk, that does not give you the right to hit the…" (ytc_Ugy1ebF22…)
- "i've said it before and i'll say it again. it's very common and normal to want a…" (ytc_UgwrzHb9B…)
- "Totally. When I worked in credit card lending, we *never* refined our models on …" (ytr_Ugxn2Y6Km…)
- "I do believe we are in a bubble. The amount of people willing to pay for AI feat…" (ytc_UgxR3uVYJ…)
- "Can’t believe anyone can be deluded into thinking this is normal. We do not need…" (ytc_UgxAofo3h…)
- "AI cannot replace all the jobs but corporate white collar jobs and every hard wo…" (ytc_UgwFS98Hl…)
Comment
It makes me despair that people say it works great, but then caveat that with "except it makes up about 10% or more of everything it puts out". My brother in christ if I made up 10% or more when transcribing or scheduling I'd be out of a god damn job for incompetence.
I used to work in IT repairing and building computers, I couldn't afford to be wrong 10% of the time. If 1 in 10 clients had an issue or I lost 10% of a customers data, that would have been grounds for immediate sacking, and with data recovery you oftentimes only get 1 chance to get it right, so you better get it right first try, or you're done, and yet AI is somehow magically given a free pass despite being wrong 10% or more of the time? Wtaf?
When it comes to medical records? No. Just no. Medical records that are improper can *literally* kill people, and trusting that kind of information and record keeping to an "AI" is criminally negligent imo, and even if it didn't make stuff up, I'd still want a human in the loop checking everything.
Humans make mistakes, we get tired, we are sometimes lazy, we can be malicious, stubborn, arrogant and stupid, we can allow our personal feelings to override our sense of right and wrong, some of us even enjoy making others upset or hurt, and we can fail to spot what should be obvious and get confused, and yet... I'd still trust a human over an "AI" 100% of the time.
Source: youtube · Video: AI Responsibility · Posted: 2025-10-12T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
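The dimension values in the table are drawn from a fixed codebook. A minimal Python sketch of a validity check, assuming a codebook inferred only from the values visible in this dump (the real coding scheme may include additional categories):

```python
# Hypothetical codebook, inferred from the codes visible in this dump.
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "none"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_coding(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

example = {"responsibility": "developer", "reasoning": "deontological",
           "policy": "liability", "emotion": "outrage"}
print(validate_coding(example))  # → []
```

A check like this is useful because LLM coders occasionally emit labels outside the agreed scheme, and those rows should be flagged rather than silently stored.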
Raw LLM Response
```json
[
  {"id":"ytc_UgxxbZ2idxFdQyH_CVF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgysEzJivhTFRwvPBWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTNhyqImTKyz9UzdF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5oob1luYIr4oabLt4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFhITmmwLbGpVIusx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyygLCcuBhziCJF32d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQhtqydJBFoJndts94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPgE6n30LywWa5C5J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwFMi2tLg-KvoGeqHF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy9_6rARxmuQDFm6GJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
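The raw response is a JSON array of per-comment codings keyed by `id`, which is what makes the look-up-by-comment-ID feature possible. A minimal Python sketch of that indexing step, using two rows excerpted from the response above (variable names are illustrative):

```python
import json

# Two of the ten rows from the raw LLM response shown above.
raw_response = '''[
  {"id":"ytc_UgxxbZ2idxFdQyH_CVF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgysEzJivhTFRwvPBWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Build an id -> coding index so any coded comment can be fetched directly.
index = {row["id"]: row for row in json.loads(raw_response)}
print(index["ytc_UgxxbZ2idxFdQyH_CVF4AaABAg"]["policy"])  # → regulate
```

In practice the response string would come straight from the model API, and a `json.JSONDecodeError` handler is worth adding, since LLMs sometimes wrap the array in extra prose.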