Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have an Amazon Echo Show 8, which is running on Alexa Plus. If this is Amazon's version of state-of-the-art AI, then I guess we're all in trouble. I realize it's still in beta, but it's been in beta for a very, very long time. One of the most annoying things that it does, other than constantly putting ads on its screen, is that it makes some pithy comment when I want to add an item to the shopping list. The situation goes something like this: "Computer, add cocoa to the shopping list." Echo: "Coco has been added to your shopping list! That's going to be perfect for a nice cold winter day like today!" "Computer, stop making comments after you add items to my shopping list. It's annoying." Echo: "Absolutely, no problem! From now on, I'll keep it short, straightforward and to the point!" And then it does the exact same thing over and over again, adding unwelcome silly commentary after every item is added to the shopping list. If Amazon can't get the basics right, what is the point of the device?
youtube 2026-02-11T12:5…
Coding Result
Dimension: Value
Responsibility: company
Reasoning: consequentialist
Policy: liability
Emotion: mixed
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugx0D0BqJIoJtPN-bnR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzc-5WPzZ2MsuWxCwx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy66fS1HNyCAt1z7IJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzNDYe2N9T01_JR3nx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxW3CtlfcG03TqcNjl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxPB46nHqsrKhz9Exp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwz7iNk2pAlvbEqOH94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyDvYV5JmdWL8oLdJZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwWZqpHvcU6aCVM3zN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyDyfj2iqMlQclHBDd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
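The raw response above is a JSON array of per-comment codes, and the "Coding Result" table corresponds to the entry whose id matches this comment. A minimal sketch of how such a response could be parsed and indexed by id is below; the allowed label sets are inferred only from values that appear in this output (the project's actual codebook may define more), and the `parse_codes` helper name is hypothetical.

```python
import json

# Label sets inferred from the values seen in this raw response (an assumption;
# the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"outrage", "mixed", "approval", "indifference", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index entries by comment id,
    rejecting any value outside the known label sets."""
    coded = {}
    for entry in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry.get('id')}: unexpected {dim} value {entry.get(dim)!r}")
        coded[entry["id"]] = entry
    return coded

# Usage: look up the code assigned to one comment.
raw = '[{"id":"ytc_Ugzc-5WPzZ2MsuWxCwx4AaABAg","responsibility":"company",' \
      '"reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]'
codes = parse_codes(raw)
print(codes["ytc_Ugzc-5WPzZ2MsuWxCwx4AaABAg"]["policy"])  # liability
```

Validating against fixed label sets at parse time catches the common failure mode where the model emits an off-codebook label, rather than letting it flow silently into downstream tallies.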