Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below. (A minimal lookup sketch follows the sample list.)

Random samples
- Is it possible that AI can recognise good in humanity? In any one? I've never u… (ytc_UgzgFuuwn…)
- He has no idea about reality. Current AI requires that much energy that is not a… (ytc_Ugw60J7yO…)
- No. ChatGPT is programmed and trained to "simulate normal human conversation," a… (ytr_Ugx5sSvm4…)
- My friggin MSI MAG 275QF monitor has AI, i love the monitor but i didnt buy it f… (ytc_UgxcUN5XK…)
- I am a 3D modeler and specialize in architecture where precision is a matter of … (ytc_UgyrIO3OM…)
- Omg the world is going to be totally swamped with AI written trash books isn't i… (rdc_myjhxec)
- Technically they did not lie. We just assumed wrong. AI in India means, "All I… (ytc_UgxBh3ZCj…)
- What if a country made the choice to mandate analog tech? I suppose in whatever … (ytc_Ugx4z23MJ…)
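To make the lookup-by-ID workflow concrete, here is a minimal Python sketch. The file name `coded_comments.jsonl` and the one-record-per-line layout are assumptions for illustration, not the project's actual storage; the field names follow the raw LLM responses shown further down.

```python
import json


def load_coded_comments(path="coded_comments.jsonl"):
    """Load coded comments keyed by comment ID.

    Assumes one JSON object per line with the fields seen in the raw
    LLM responses on this page: id, responsibility, reasoning, policy,
    emotion. The file name is a placeholder, not the real layout.
    """
    coded = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            coded[record["id"]] = record
    return coded


def lookup(coded, comment_id):
    """Return the coding record for one comment ID, or None if absent."""
    return coded.get(comment_id)


# Example: inspect the coding for the self-driving-car comment shown below.
# coded = load_coded_comments()
# print(lookup(coded, "ytc_UgzyoQHfkvKymBmesal4AaABAg"))
```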
Comment
Sorry, but these thoughts are all useless. A self-driving car will never get into this situation in the first place.
It will just leave enough space between the truck and itself, so it can just stop before crashing into the lost cargo. No one needs to be rammed, and no decision has to be made who may be in less danger or what may be less harm to somebody.
It is the same with all these hypothetical situations. The AI will just foresee it and will have enough space/will slow down early enough to just stop the car without anyone getting hurt at all.
AI will not be able to stop all accidents, but the number of accidents will go down extremely and we will have way less injured or death. But instead of saving lives, we think about super hypothetical situations and decisions the AI will never have to make anyway. That holds us back and people die in car accidents daily, that were avoidable with self-driving cars.
youtube · AI Harm Incident · 2024-02-16T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
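A coding result like the one above can be represented as a small record type. This is only a sketch; the example values in the comments are labels visible on this page, not the full codebook.

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One coded comment, assuming the dimensions shown in the table above."""
    comment_id: str       # e.g. "ytc_UgzyoQHfkvKymBmesal4AaABAg"
    responsibility: str   # e.g. "none", "developer", "company", "user"
    reasoning: str        # e.g. "consequentialist", "deontological", "mixed"
    policy: str           # e.g. "none", "regulate", "liability"
    emotion: str          # e.g. "indifference", "outrage", "approval"
    coded_at: str         # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59"
```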
Raw LLM Response
```json
[{"id":"ytc_UgzyoQHfkvKymBmesal4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxjDv4Z1CBjO3WJHB94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTShSLnJ9cwL-Lbw54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw2KRypsW4jIcRnBj14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw5i-H8AgNVVhk3L5Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLQD8OSQSPK_gIU7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxcoaaNEw2BPDxcR8B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugy1OKJlc-JkmZ4FchN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzCOQgM8jWOa4MaGkB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxcp38aM6btPES_LwF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]
```
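The raw response is a JSON array with one object per coded comment, and each object maps directly onto the dimensions in the Coding Result table above. A minimal parsing and sanity-check sketch, assuming only the five keys visible here, might look like this:

```python
import json

# Keys observed in the raw responses on this page; the real schema may
# include more fields.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array) into coding records,
    dropping any entry that is missing an expected key."""
    records = json.loads(raw)
    return [rec for rec in records if EXPECTED_KEYS.issubset(rec)]


# Usage: parsed = parse_raw_response(raw_text); each surviving record then
# corresponds to one row of the per-comment coding table shown above.
```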