Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Can you spot a fatal flaw in Tesla's Autopilot?" Yes, its entire existence is a fatal flaw. It is not a self driving feature like Tesla wants people to believe. It is a driver's assist feature, but people are trusting it far too much and are getting in severe accident with it active. I suggest watching the documentary "Elon Musk's crash course" for better details on the topic about the faults in Autopilot, or listening to Matt Farah of TheSmokingTire discuss, with experts, the flaws behind Autopilot. As an example to put it into perspective, the system often recognizes the moon as a stop sign. As Tesla likes to claim, they pull a lot of data from vehicles to help develop Autopilot better. They're not pulling any data, this is a lie. If the system were to be fully autonomous, it would need to learn and recognize all of the moon phases from every single angle on the planet to know that that is actually the moon. It cannot do this and will continue to make this error. This is something that has been recorded as an instance in how faulty Autopilot is as a system.
Source: youtube · AI Harm Incident · 2022-09-06T22:1… · ♥ 4
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          outrage

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyMOGAAN8V6nxQQ4294AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzJ1SyL7JZGWi33v8h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyjtSCqJwLQNJT7F0d4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-3OO32WhlcM5RciB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQHyDEYLxTB8wsPu54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzBM0aN5bfvyXIVN-R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxMNJ9ZvDYnUko0GwV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxtWxTd7RHqFllf96J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyHsdsdChx3vT6A7PB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxucJlw7VD46-KlDvJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
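The coded dimensions shown above are recovered by parsing this JSON array and matching on the comment id. A minimal sketch of that lookup (the helper name `coding_for` and the `"unclear"` fallback are illustrative assumptions, not part of the actual pipeline):

```python
import json

# A trimmed copy of the raw model output: a JSON array of coded records,
# one object per comment, keyed by the comment's id.
raw = (
    '[{"id":"ytc_UgyMOGAAN8V6nxQQ4294AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},'
    '{"id":"ytc_Ugz-3OO32WhlcM5RciB4AaABAg","responsibility":"user",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(records, comment_id):
    """Return the coded dimension/value pairs for one comment id, or None."""
    for rec in records:
        if rec.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return None

records = json.loads(raw)
print(coding_for(records, "ytc_UgyMOGAAN8V6nxQQ4294AaABAg"))
# → {'responsibility': 'company', 'reasoning': 'consequentialist',
#    'policy': 'regulate', 'emotion': 'outrage'}
```

Matching on the `id` field rather than array position keeps the lookup robust when the model returns records out of order or drops a comment entirely.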