# Raw LLM Responses

Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.

## Random samples

- "I mean, it's certainly annoying, but the world's eyes are on this product at the…" (`rdc_jg7sysl`)
- "@theotv5522 I suppose thats all well and good. But some people don't get motiva…" (`ytr_UgzdYQmtL…`)
- "Generative AI is not creative, those who are creative are the authors of all the…" (`ytc_Ugwj50uoP…`)
- "AI will take jobs away, increase fossil fuel production to provide 24/7 energy o…" (`ytc_UgzCRhRfj…`)
- "Ai is a bit different than when Cars replace buggies. Its purpose is to remove t…" (`ytc_Ugycfk2lF…`)
- "Let your automated driver drive you through riviera coast mountain roads in Fr…" (`ytc_Ugy0Etb1G…`)
- "I don't take issue with letting it speak and converse with humans as if it's a r…" (`ytc_UgwMXteBU…`)
- "I guess, the ai has taken over long time ago. We re not realizing it. We think w…" (`ytc_Ugy_nOb-f…`)
## Comment

> +BosonCollider
> Yes, they do need to be perfect to be better than humans. That's the real question the video is asking. If self driving cars can't be better than people, I won't bother with them. The video was actually talking about how a self driving car would improvise during a crisis situation and the criteria it would use in order to make those decisions. I'm not too comfortable trusting flawed programming or flawed crisis management decisions. No thanks! I still trust "flawed" people more. I still believe that the majority of people are good hearted and don't want to intentionally hurt others. Call me crazy, but I have more faith in people than devices that follow cold programming. I guess we'll all just have to wait and see what happens with the safety records of these things once they're made. If they prove to be statistically safer than human drivers, I'm open to it. I'm not closed minded to real and factual progress. We'll just have to see.

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Harm Incident |
| Posted | 2015-12-10T06:0… |
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
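
For downstream analysis it helps to make this coding schema explicit. Below is a minimal Python sketch of one possible record type; the class and constant names are illustrative, and the allowed value sets include only the codes observed on this page (the project's actual codebook may define more categories).

```python
from dataclasses import dataclass

# Allowed values per dimension, as observed on this page; the real
# codebook may define additional categories.
RESPONSIBILITY = {"ai_itself", "company", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "unclear"}
POLICY = {"none", "regulate", "unclear"}
EMOTION = {"fear", "approval", "mixed", "resignation", "indifference"}

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, e.g. responsibility='ai_itself', emotion='fear'."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any value outside the observed codebook.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unknown code {value!r} for {self.comment_id}")
```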
## Raw LLM Response
[
{"id":"ytr_Ugi-ra97OFAYf3gCoAEC.8A2x-6Y9iR39_jA1MigRw-","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggcjG7wPcXM-ngCoAEC.87ksLSYwmAW87lRqc_5nOt","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgjbGUooE19fn3gCoAEC.87ae9OwYcWP87aeSIvS21k","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgibjtNUDEehjngCoAEC.87_AnhDBK0Q87_DW-CiC2P","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Uggm5BdzwhyWVngCoAEC.87ZJkl4btdC87ZRqdCDAIY","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zv0jNj-Ag","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zx3NNhY6U","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87ZxquYYaiQ","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggP-iFt14eaaHgCoAEC.87YkvCWMel-87Zi3ixQABR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UghURWjOQRHtGHgCoAEC.87XLJSTRT9v87clu7Fezdn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
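
A raw response like the one above is a JSON array with one object per coded comment, so turning it into a lookup table is a small parsing step. Here is a minimal sketch, assuming the response is valid JSON and that `id` plus the four dimensions are the only required keys (the function name is illustrative, not part of the project's actual code):

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_response(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index its records by comment ID."""
    records = json.loads(raw)
    by_id: dict[str, dict[str, str]] = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
        by_id[rec["id"]] = rec
    return by_id

# Example: recover the coding shown in the table above.
raw = '''[{"id": "ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zx3NNhY6U",
           "responsibility": "ai_itself", "reasoning": "deontological",
           "policy": "none", "emotion": "fear"}]'''
print(index_response(raw)["ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zx3NNhY6U"])
```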