Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "0:50 - AI might not have "personal beliefs", but AI's been inculcated with the b…" (`ytc_UgzAAa-tt…`)
- "Nevermind that AI is about to disrupt the entire labor market. AI IS RACIST SEXI…" (`ytc_UgyFu97ha…`)
- "I definitely know now that AI isn't as smart as i thought it would be. it actual…" (`ytc_UgxP3qyw7…`)
- "🤔 >divine intelligence (God) > artificial intelligence> natural intelligence( na…" (`ytc_Ugyq9Qzyj…`)
- "That's a bad thing? Do you drive 80mph in a 60 while it's raining? We spend hour…" (`ytr_UgwPAOOcX…`)
- "Thank you for sharing your perspective. If you're interested in exploring AI and…" (`ytr_UgzlAsoBT…`)
- "Who said him to go between the task the robot is programmed to complete as effi…" (`ytc_UgxRACIWL…`)
- "@fire_silicon7803still doesn’t really disprove my overall point, ethical Ai used…" (`ytr_UgyZAyMvY…`)
Comment
The reality is that a real self-driving car's algorithm is not going to have a specific condition for being boxed in on all sides by specific permutations of vehicles. It probably won't even figure out what kinds of vehicles are out there and it certainly won't try to figure out which bikers have helmets. There will be a series of top level, general case decision trees from which specific responses are emergent. The code will basically just tell your car to take the path which takes you away from all known obstacles as fast as possible. In a thought experiment like this, it's going to end up being something like your car will swerve in whatever direction there is slightly more room, while braking to try to dodge the falling boxes.
Ethicists seem to like to think about technology in an abstract, perfect sense. They're trying to figure out how to program an omniscient car AI to respond to contrived scenarios while actual accidents are going to overwhelmingly result from software bugs and hardware failures. If a car AI is good enough to quickly and reliably figure out the complete casualty result of every possible action it can take, then it is definitely good enough to just avoid trailing a giant ass cargo truck at less than stopping distance.
Platform: YouTube
Incident: AI Harm Incident
Posted: 2015-12-14T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgjItq0wivzFzHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugi3UjQWwYBga3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugil7mqZ96nRsXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgizhDQN0tfbqXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjdML6iup9kxHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiDITa8mouAQXgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Uggk2g1O4hSYuXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjdS9_U-Ytg-3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgjIfcNAortGP3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghasmfeHrS-OHgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
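The raw response above is a JSON array of per-comment codings, one record per comment ID, with the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be parsed and sanity-checked before use — note the allowed value sets below are inferred only from the records shown here, not from any published codebook:

```python
import json

# Value sets inferred from this sample; the actual codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "liability", "regulate", "industry_self"},
    "emotion": {"indifference", "mixed", "approval", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown values."""
    records = json.loads(raw)
    for rec in records:
        # IDs in this dump are prefixed ytc_ (comment) or ytr_ (reply).
        if not rec["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {rec['id']!r}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec[dim]!r}")
    return records

raw = '''[
  {"id":"ytc_UgjItq0wivzFzHgCoAEC","responsibility":"ai_itself",
   "reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''
print(len(validate_codings(raw)))  # → 1
```

Validating before ingestion matters here because an LLM coder can emit labels outside the schema; failing loudly per record keeps one bad coding from silently skewing the tallies.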