Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I think that we want Ai to become people like. People have more of the same valu…" (ytc_UgyMar1rO…)
- "It's Philip K. Dicks Minority Report horrifyingly brought to life - only replaci…" (ytc_Ugxb23v3s…)
- "John is right. Self-driving cars won't be perfect but they will be considerably …" (ytc_UgiB44HQO…)
- "Low bow to you, Dr. Mustafa Suleyman. And yes, they are a new species. Thank you…" (ytc_UgwnQC6or…)
- "If someone saw my ai chats they would think I'm Ben of the week or something…" (ytc_UgwmVdAfo…)
- "None of this matters, only the goal of its creator matters. It’s been being don…" (ytc_UgyTe1Gf_…)
- "If ai is so crash hot then I and myself wants to know if ai will be paying taxes…" (ytc_Ugzi96Zf-…)
- "It's simple the ai will get rid of religious by creating a culture. Then the cul…" (ytc_UgzBjw0zu…)
Comment
very flattery view for uber, now how about they release the LIDAR and RADAR data (this vehicle was equipped with both) and infrared footage.
I imagine that footage will depict a software behavior that simply got confused because uber is a sloppy company that asked for forgiveness instead of permission.
Uber knows that it is finished if any other company gets self driving taxis before they do, all uber has right now is stolen technology (see google lawsuit), and drivers (the one thing self driving cars will get rid of). All uber cares about is money and if they have to kill a few people for their research then so be it.
Human may have had high beams on too on a road like that.
youtube · AI Harm Incident · 2018-03-22T16:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"ytc_UgwFKh2khiKhfjV6xo14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhGZgAp3fG-PdruSZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwlJMt37GV_y98Iwk54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyixYXJWm_rDcrBg6N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6tdOFz4UasR9ov_Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
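The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and validated before use, assuming the label vocabularies are limited to the values visible in this sample (the real codebook may define more categories, and `parse_coding_response` is a hypothetical helper, not part of the tool shown here):

```python
import json

# Label sets observed in the sample response above; an assumption,
# not a documented schema.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    silently dropping records that are malformed or use unknown labels."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        dims = {k: record.get(k) for k in ALLOWED}
        if cid and all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[cid] = dims
    return coded

raw = """[
 {"id": "ytc_Ugx6tdOFz4UasR9ov_Z4AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""
coded = parse_coding_response(raw)
# coded["ytc_Ugx6tdOFz4UasR9ov_Z4AaABAg"]["emotion"] is "outrage"
```

Validating against a closed label set like this catches the most common failure mode of LLM coders, namely inventing a category that is not in the codebook, before the record reaches the results table.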