Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "i always find this argument a bit stupid. it is like comparing a nation firing n…" (ytc_UggesFpy1…)
- "im not sure what the point of filling half the video with interviews wiht Ai bro…" (ytc_UgwGoYosh…)
- "Is your intuition that AI labs shouldn't be burned down under any circumstances,…" (ytr_Ugzgm57jB…)
- "@bettyarts5267 it doesn't "use references", a machine isn't the same as a huma…" (ytr_UgyaLYKSm…)
- "I actually don't agree with you on number four. If a robot is sentient it is une…" (ytr_UggvblKpw…)
- "They are relying on machine learning while dismissing some human inputs. I say t…" (ytc_UgzL-4-Lu…)
- "Please don’t use AI my fellow humans ✌🏽♥️. The natural limited resources 💧💡it ta…" (ytc_UgzQTGauk…)
- ""There was also an increased use of AI-generated TV news anchors, a tactic that …" (rdc_ky9m4uv)
Comment
Then the only obvious solution would be to construct highways in which only self-driving cars operate. This way, if something falls off the back of a self-driving truck in front of you, your car signals in a split second to the car on its left that it is going left RIGHT NOW. That car catches the signal and decides whether to slow down to avoid a collision (which in turn will cause the cars behind it to automatically slow down), or perhaps safely swerve off the road to the side to allow the car to enter that lane.
I just think it's illogical for a massive number of self-driving cars to drive among a massive number of human-driven cars. The effectiveness of the self-driving cars would be undermined, because their programming is logical, while human instinct and split-second reaction are not, and computers can't always take that into account.
So a separate highway for autonomous cars would be great, since every car is in perfect sync with every car within a 100 foot radius.
Source: youtube | Topic: AI Harm Incident | Posted: 2015-12-11T22:3… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgjY5ZbRHpZbl3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugg6vkyHWXADQngCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgiE3qm0bdtqengCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggVGd5tRkaKZHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggsLujeKwbCNngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ughj07npbLjXPngCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UggP4ePx319A6ngCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugj-9FzhtV_B43gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghfJlHACEBRgHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggx47tC_oo6mXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
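The raw response above is a flat JSON array with one record per comment ID, each carrying the four coded dimensions from the table. A minimal sketch of how such a batch might be parsed and validated before storage (the allowed-value sets below are inferred from this one sample, not from a documented codebook, so a real validator would load them from the coding scheme):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The full codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "indifference", "approval",
                "outrage", "mixed", "fear"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of records) and check
    that every record has an id plus one allowed value per dimension."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration:
raw = ('[{"id":"ytc_X","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
print(parse_coded_batch(raw)[0]["policy"])  # → regulate
```

Validating before ingest matters here because a model can emit a plausible-looking label outside the codebook; rejecting the whole batch (rather than silently dropping a record) keeps the comment IDs and codes aligned.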