Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- rdc_lz80mbj: "Or what if we just labeled them as AI so we could filter it out? I know we proba…"
- ytc_Ugx9yweF6…: "There's a lot of problems with this video and makes a lot of huge assumptions AI…"
- ytc_UgxZOfM7b…: "Neural bypass VR + universal income will turn many of us into VR addicts. Dopami…"
- ytc_Ugy-NDgtC…: "People are already getting exponentially lazier and dumber. If AI starts doing e…"
- ytr_Ugx-gdDCa…: "That sounds exactly like what a cult guru says to skeptics. Not that I'm implyin…"
- ytr_UgzMBmgQT…: "@keion_arknights i am more familiar with the NAP now (which frankly zulu could'v…"
- ytr_Ugz6_y2Jh…: "I've seen the term 'director' floating around, which I think is a better fit. Be…"
- ytc_UgweMzysq…: "What would be the real purpose of this? Can anyone tell me? Why would anyone wan…"
Comment
In the box dilemma, if every vehicle is self-driving it's not a single vehicle that is gonna react at the same time. Driverless vehicles' reactions have to be in a "hive" state, as they are connected by a network to the grid. As soon as your car notices the danger, every car around reacts with it to avoid that danger. The way the dilemma was presented it's as if yours is the only driverless vehicle. Also, driverless vehicles might have a reaction time hundreds of times better than a human and, even if alone, if the processor is fast enough and the AI smart enough, might avoid damage that, to a human, would be impossible to avoid.
We are trying to judge the dilemma by a human's point of view and a human's capabilities. The truck itself might speed up together with the cars in front of it, so the boxes get pulled with it while your car slows down, creating a bigger distance between you and the boxes; while that happens, every vehicle to your left and right before you will probably brake while every one after you speeds up, creating a gap where you can move into. All of that happening at the SAME TIME. It's not "minimizing the damage" that should be sought after, it's nullifying it. If there is a choice to be made in a situation, it's still not good enough and needs to develop more.
Source: youtube
Topic: AI Harm Incident
Posted: 2015-12-11T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgiNpg6zN3dcEHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghvTvR7vS3DkngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgjbGUooE19fn3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggsqGMr9EidiXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggKVhVX1FqATXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugjo9a6Pw85wEHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgieJyVJNtEyqngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugh-9Rr-OAATf3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghZ3KaDpI3WZXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghvD0lAXFuW8HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
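A raw response like the one above can be turned into per-comment codes with a small parser. This is a minimal sketch, not the project's actual pipeline: `parse_codes` and the `ALLOWED` value sets are illustrative, and the sets include only the labels visible in this dump (the full codebook likely has more).

```python
import json

# Allowed values per coding dimension. Assumption: only the labels that
# appear in this dump are listed; the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of records) into
    {comment_id: {dimension: value}}, dropping any record whose
    value for some dimension is missing or not in ALLOWED."""
    out = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[rec["id"]] = codes
    return out

raw = ('[{"id":"ytc_X","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)["ytc_X"]["emotion"])  # prints: indifference
```

Validating against an explicit value set is what makes the "Coded at" results trustworthy: a hallucinated label (e.g. `"emotion":"angryish"`) is silently dropped rather than stored as a new category.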