Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Bruh, we've all said some sus shit with A.I, you can't even deny it, it's just a…" (`ytc_Ugx13MSZs…`)
- "I earned my Bachelor’s degree in automated manufacturing all of the way back in …" (`ytc_UgzWc6KB8…`)
- "When she was asked about communism vs. Capitalism she fielded the question. I th…" (`ytc_Ugzond7E2…`)
- "That is not how it works. There is no stealing, and no slapping together other a…" (`ytr_UgxCkXpWa…`)
- "@killer4stardust The question is indeed about AI in general, though. I quote: "…" (translated from French) (`ytr_UgyA4zMqo…`)
- "I don't think AI should be used as a chatbot with everyday people. I understan…" (`ytc_UgyZhjmwR…`)
- "@MagiciteHeart The anti-ai crowd is very loud, there's a lot of comments steerin…" (`ytr_UgyqcxHdh…`)
- "Some guy earlier this year was sentenced to prison for using AI to produce nudes…" (`rdc_k20brfw`)
Comment
There are two possible scenarios for a solution:
1) In a self-driving car city, there are no humans involved, meaning there is no human error (there wouldn't be motorcycles). All the cars will be connected to a network through which they can communicate with each other. That way, an instantaneous response can be coordinated to minimize harm to all humans nearby, for example with the car on the left manoeuvring as well, to create a controlled accident with less trauma. The remaining cars will have to stop or manoeuvre too, all at the same time.
2) In the case that the whole system is not connected to the network, the person affected should be the owner of the car himself, even if his life is at risk, because the hazard came his way and that doesn't mean he should put the lives of others at risk too (it's only logical). However, there could be better cars with more advanced technology, so that when an accident is about to happen, the computer can detect it immediately and, unlike a human, deploy a responsive mechanism to protect the human no matter what happens to the machine: ejecting him before the impact, transforming the car into a giant airbag, or, even if the impact still occurs, enclosing the human in some sort of protected space inside the car before the accident happens.
All these things are just a matter of research and of trying to find the best solution, instead of being scared of self-driving cars and avoiding them for "ethical reasons".
youtube · AI Harm Incident · 2017-06-08T15:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UghMzFQ5uciXyHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghSVao5v-7LzHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghWZB_DNXhaTXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj7Q3CElinFQ3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjTiwroBtb2T3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjIFRBxgjA2tXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghlT0jEO-duZ3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh8Tr7F8wrmeX3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugi0hd2FnlV7Z3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugil9BPZ0b0LongCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
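A raw response like the one above is a JSON array of per-comment codes. The sketch below shows one way such a response could be parsed and lightly validated before the codes are stored; the dimension names come from the Coding Result table, but the exact sets of allowed values and the `parse_codings` helper are assumptions for illustration, not the pipeline's actual implementation.

```python
import json

# Allowed values per coding dimension. The names match the Coding Result
# table above; the value sets are an assumption for illustration.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "platform"},
    "reasoning": {"none", "consequentialist", "deontological"},
    "policy": {"none", "ban", "liability", "regulation"},
    "emotion": {"approval", "indifference", "outrage", "resignation", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    into a dict keyed by comment ID, dropping malformed entries."""
    codings = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            continue  # skip entries with no comment ID
        # Keep only entries whose values all fall in the allowed sets.
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            codings[cid] = {dim: entry[dim] for dim in ALLOWED}
    return codings

# Two entries taken verbatim from the raw response shown above.
raw = '''[
 {"id":"ytc_UghMzFQ5uciXyHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugj7Q3CElinFQ3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''
codings = parse_codings(raw)
print(codings["ytc_Ugj7Q3CElinFQ3gCoAEC"]["emotion"])  # outrage
```

Keying the result by comment ID is what makes the "look up by comment ID" view above cheap: each lookup is a single dict access rather than a scan of the raw response.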