Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "humans are proving AI right...humans treat each other like crap...good people ar…" (ytc_UgzZ2kVbi…)
- "ARE THESE PEOPLE DUM IN THIS WORLD DUM PEOPLE ROBOTICS ARE GOING TAKE OVER THE W…" (ytc_Ugw5znBAv…)
- "Is there any peer reviewed objective research statics to prove, AI systems inher…" (ytc_UgylmH1__…)
- "This is a universal thought? I thought I was just a bit round the bend thinking …" (ytc_Ugz-PIfIV…)
- "Already foreseen 2 years ago: I'm always polite when asking something to ChatGpt…" (ytc_Ugz1NPAca…)
- "I was working for a big semicon company that in the late 2000's had a big effort…" (rdc_gt8ki8n)
- "The problem is no t the AI itself.. Is the use is gonna be done of it. It will b…" (ytc_Ugwgjd-vA…)
- "We're a way off from any serious existential threat. The thing we should be afra…" (ytc_UgzLIJ9x4…)
Comment
The first problem is expecting a human driver to take over in time. Automation that can seem like automatic driving at a minimum promotes inattention, since it is doing the majority of the work. People will be much slower to act when there is a need, and the realization of that need will likely come too late. There are similar problems caused by aircraft automation: in some ways it helps pilots, but they can develop dependency, and they receive specific training on a regular basis to deal with these issues. Drivers do not get such training.
At worst, such automation enables total distraction, like paying attention to one's phone.
Disengaging is also the completely wrong response at such a late moment. If the automation is going to take over, it needs to try to avoid the hazard.
I also note that the car was staying in the lane next to the vehicles on the side of the road. By law in most places one has to move over a lane. That was perhaps the early warning for the driver to take over, but it came too late. The system is no good if it doesn't follow traffic basics like that.
Personally, automatic driving seems more of a pipe dream until roads are specifically constructed for automatic driving and only automated vehicles are allowed.
Source: youtube · Video: AI Harm Incident · Posted: 2025-02-09T00:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyAFQBeSTNdDFCNLCd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzFJUGkjhTnVZaO2cd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz8Kq0kkDafXWeuKAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxyJw0lBpwlB8t0uM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6BdvFC32STL9goAp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVwdsHIRRfPHIeXlh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwisFB6Iwt5bwYPyDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFsr2csxlbD3r8UiN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgypHeJg7YS7YKruSSd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxh50gLsl7n_pjy-594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
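The raw response is a JSON array with one code object per comment, keyed by comment ID across the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step might parse the response and index it like this; the two entries below are copied from the response above, and the dimension vocabulary is inferred from the values observed there, not from any published codebook:

```python
import json

# Abbreviated raw LLM response: two entries taken from the full array above.
raw_response = """
[
  {"id": "ytc_UgyAFQBeSTNdDFCNLCd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugx6BdvFC32STL9goAp4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Allowed values per dimension, inferred from the observed codes (assumption,
# not an official schema): used only as a sanity check on the parsed output.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "unclear", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "approval"},
}

# Index codes by comment ID so a single comment's coding can be inspected.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Flag any code value outside the inferred vocabulary.
for row in codes_by_id.values():
    for dim, allowed in ALLOWED.items():
        if row[dim] not in allowed:
            print(f"unexpected {dim}={row[dim]!r} for {row['id']}")

code = codes_by_id["ytc_Ugx6BdvFC32STL9goAp4AaABAg"]
print(code["responsibility"], code["emotion"])  # → distributed fear
```

The second lookup reproduces the "Coding Result" table above: the comment coded distributed / consequentialist / regulate / fear.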