Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated, with comment IDs):

- ytc_Ugx-0uxc3…: "FSD is a scam. The technology doesn't work and it puts yourself and everyone els…"
- rdc_cjtf3ib: "Finally, a way I can waste my life on reddit doing nothing AND contribute to soc…"
- ytr_Ugywa9Np5…: "I've been writing steadily for around 17 years now (largely as a hobby - I'm not…"
- rdc_czlh1ok: "The article says that about half of the jobs being cut here are in downstream re…"
- ytc_UgzGdLpaK…: "This exact video inspired me to make my own language. It'll be used in conjuncti…"
- ytr_Ugy8nCjyC…: "@CentreMetre If you give the a model 2 different data sets you will end up with …"
- ytc_UgzyB9I8G…: "This was a dark episode. I grew up in the 80s and 90s when the Internet was eith…"
- rdc_oi3nrit: "Nor should it. The police should be made of people, not sanctimonious boyscouts.…"
Comment
"The autopilot turns off one second before impact, who is the manslaughter charge going to stick to"
If I placed a landmine, did I kill the person who stepped on it?
The major difference between a human and an AI is that humans (usually) have reasoning. AI can "reason" to a certain extent but in the end it cannot solve a situation it has never experienced before.
A human who sees two low lights at a distance coming towards them much too quickly and in an odd way will understand that their eyes are being fooled by appearances and will spend extra time figuring out what those lights are.
And a human will also see that those lights are passing real objects near to them that they themselves pass mere seconds later.
And a human will understand those subtle differences that make the vehicle in front of them NOT A CAR, and will then be able to reason that it's something that needs to be re-evaluated.
Humans also have exceptional spatial reasoning. Where an AI that gets some number crunching wrong gets to try a billion more times, a human who fails at spatial reasoning usually ends up dead.
For instance, an AI would happily drive off the edge of a fallen bridge because it cannot SEE a problem ahead. A human will not, because they can SEE the LACK of anything ahead, including a bearing surface to ride on, and will reason that the car would fall from the bridge.
Humans are well suited to connect unlikely dots. If it looks like a car, walks like a duck, quacks like a duck and flies away like a duck then it's most likely a duck!
An AI with radar will see that whatever is ahead is closing in fast, but looks like a car. It doesn't perceive that it quacks, walks and flies away.
On the flip side, though, humans are also well known to get things wrong and to overestimate their abilities. An AI is well suited to calculating the required stopping distance to within one foot.
Humans, meanwhile, think they can come to a stop in time no matter how fast they are closing on something, which usually ends with a sudden, forceful stop.
Source: youtube | AI Harm Incident | 2022-09-30T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyD01XuCc3TMxZvTrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEE3H8wEeY7SJ6C654AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzAa2OAHEIeT6tBx_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwmXleybRBST2NUUDx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgztsG6Eu386q_0W8qp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyNaqX4kEjDzeQAomB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzvRSZ5UMTp5W4I5qR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzf5TmnZOvSJgMtxGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzp0vvrjwNT-jYIzHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyd94zeSEsiHzQrWaJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```
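The "look up by comment ID" workflow can be sketched in a few lines: parse the raw LLM response as JSON and index the entries by their `id` field. This is a minimal illustration, not the tool's actual implementation; `lookup_coding` is a hypothetical helper, and the raw response below is truncated to two of the ten entries shown above.

```python
import json

# Raw LLM response, truncated to two entries from the full output above.
raw_response = '''
[
  {"id": "ytc_UgyD01XuCc3TMxZvTrl4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyNaqX4kEjDzeQAomB4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "liability", "emotion": "mixed"}
]
'''

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one comment ID,
    or None if the model did not emit an entry for that ID."""
    codings = {entry["id"]: entry for entry in json.loads(raw)}
    return codings.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgyNaqX4kEjDzeQAomB4AaABAg")
print(coding["responsibility"], coding["policy"])  # distributed liability
```

Indexing into a dict keyed by `id` also makes it easy to spot comments the model silently dropped: any ID from the input batch that is missing from `codings` had no coding returned.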