Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:09:14 As someone who’s had to call a warranty company several times about the same issue, and had a completely different experience with each different human agent…I can see the value of an AI agent so long as they’re programmed to always be fair and just. I’ve had human agents express empathy and do exactly what they’re supposed to do to help, and I’ve had some who clearly don’t care and are irritated at the world and just hang up and close my case without resolving. There’s no way for their boss to really monitor that quality control without spending hours digging into each case and listening to call recordings. It’s actually wasted a lot of their company time having a couple of lazy agents, because I’m now at five separate agents working on the same issue - it just needed one to actually see it through. Not to mention a huge waste of my time. But they’ve had five people now do the work that should’ve been done by one.
youtube · AI Governance · 2025-06-16T12:1… · ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
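
For context, the values in this table can be checked against the label sets that actually occur in the Raw LLM Response below. Here is a minimal validation sketch in Python; the ALLOWED sets list only values observed in this one batch (the project's real codebook may permit more), and invalid_dimensions is an illustrative name, not part of the pipeline.

    # Label sets observed in the Raw LLM Response below; the actual codebook
    # may define additional values (an assumption, not stated in the source).
    ALLOWED = {
        "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
        "reasoning": {"deontological", "consequentialist", "contractualist", "virtue", "mixed", "unclear"},
        "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
        "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
    }

    def invalid_dimensions(record):
        """Return the coding dimensions whose value is not an allowed label."""
        return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

    # The coding result shown above passes:
    assert invalid_dimensions({"responsibility": "developer", "reasoning": "deontological",
                               "policy": "industry_self", "emotion": "approval"}) == []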
Raw LLM Response
[ {"id":"ytc_UgybwkUlLjNpqGHwCDN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgyK86HVCnfEp2YpsRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwRFOZQ8KEQpf1QQAV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgwW7Yd339WGmxUzp5Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw9KbxUxbs29lCHsw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxWYEf4uCGRc2uDYGR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwLAGWBiGFois9PJLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzvVe01CWiXNQ3S-ed4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgynndGtTRlHcJfrBp94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyWl78BZDKiSIboABl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]