Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It makes me despair that people say it works great, but then caveat that with "except it makes up about 10% or more of everything it puts out". My brother in christ if I made up 10% or more when transcribing or scheduling I'd be out of a god damn job for incompetence. I used to work in IT repairing and building computers, I couldn't afford to be wrong 10% of the time. If 1 in 10 clients had an issue or I lost 10% of a customers data, that would have been grounds for immediate sacking, and with data recovery you oftentimes only get 1 chance to get it right, so you better get it right first try, or you're done, and yet AI is somehow magically given a free pass despite being wrong 10% or more of the time? Wtaf? When it comes to medical records? No. Just no. Medical records that are improper can *literally* kill people, and trusting that kind of information and record keeping to an "AI" is criminally negligent imo, and even if it didn't make stuff up, I'd still want a human in the loop checking everything. Humans make mistakes, we get tired, we are sometimes lazy, we can be malicious, stubborn, arrogant and stupid, we can allow our personal feelings to override our sense of right and wrong, some of us even enjoy making others upset or hurt, and we can fail to spot what should be obvious and get confused, and yet... I'd still trust a human over an "AI" 100% of the time.
Source: youtube · AI Responsibility · 2025-10-12T16:2… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxxbZ2idxFdQyH_CVF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgysEzJivhTFRwvPBWZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwTNhyqImTKyz9UzdF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx5oob1luYIr4oabLt4AaABAg", "responsibility": "none", "reasoning": "none", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyFhITmmwLbGpVIusx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyygLCcuBhziCJF32d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwQhtqydJBFoJndts94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxPgE6n30LywWa5C5J4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwFMi2tLg-KvoGeqHF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy9_6rARxmuQDFm6GJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
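The raw response is a JSON array of per-comment codes, and the coding-result table for this comment is just the array entry whose `id` matches. A minimal sketch of that lookup, assuming the batch structure shown above (the `code_for` helper name is illustrative, not part of the tool):

```python
import json

# One entry from the raw LLM batch response shown above.
RAW = ('[{"id":"ytc_UgyFhITmmwLbGpVIusx4AaABAg",'
      '"responsibility":"developer","reasoning":"deontological",'
      '"policy":"liability","emotion":"outrage"}]')

def code_for(raw: str, comment_id: str) -> dict:
    """Return the coding record for one comment id (KeyError if absent)."""
    records = {r["id"]: r for r in json.loads(raw)}
    return records[comment_id]

row = code_for(RAW, "ytc_UgyFhITmmwLbGpVIusx4AaABAg")
print(row["responsibility"], row["emotion"])  # developer outrage
```

Note that the model returns codes for the whole batch in one array, so the inspector must pick out the matching `id` rather than assume positional order.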