Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The only way for it to work, AI would have to available for everyone to either o…" — ytc_Ugwe0rgH9…
- "Disappointing cannot begin to describe this interview let alone this video. I'm …" — ytc_UgxhG8lqQ…
- "If women start doing deepfake gay videos of them, will it help them understand t…" — ytc_UgzJ0nYq_…
- "“Confronted with the sheer volume of social media, and unwilling to trust home g…" — rdc_idda2ec
- "Keep doing your art you have one thing AI will never have and that's genuine art…" — ytc_UgzRLD7W-…
- "I'm in no way an expert, but it seems to me that the reason humans will still be…" — ytc_UgxPbs-or…
- "If AI/robots replace humans, what will replace the former tax revenue collected …" — ytc_Ugzmv74PS…
- "I don't want to say this but very soon AI is going to take place of humans . I w…" — ytc_UgyqpOUQy…
Comment
Legal A.I. makes no sense. You'd have to program it to be immoral and train it how to make unscrupulous arguments such as, "The Constitution doesn't apply to the President." Otherwise, how do you expect to get the obviously guilty person suffering from Affluenza off on their murder charge?
Tell me, do you want A.I. controlling our nuclear power plants? What if they malfunction, because we all know computers never fail, get hacked, or make mistakes. What if only two real workers are supervising machines at a Nuclear Power Plant......who they have enough man power to prevent a meltdown if one was on the verge of happening and they needed to manually override?
Self-driving cars currently have a higher accident rate per mile than human-driven vehicles, with 9.1 accidents per million miles compared to 4.1 for human-driven vehicles. So, you're rear-ended and get out to exchange insurance information only to find the car that hit you is driverless. OK, now what?
youtube
AI Harm Incident
2025-05-17T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz5aUozeWMwc_pky_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy9XbYChKH_RijRLzJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXsbLrn3S-Y64VWUJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzr_idjCqqYfP6hcWd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzbCtVDkCfy2RTFRLl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzkZRuFggtb7mFnawx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwWSNQVpzEw0QCv3AB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzKzgCbygVLpQLMUnV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2l3jinQiaO8l0HFF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwPB7OE26Yw6xDPstV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
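The raw response above is a JSON array of per-comment coding records, each keyed by a comment ID. A minimal sketch of how such a response could be indexed for lookup by comment ID (the `raw` string below is shortened to two records taken from the array above; variable names are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment coding records
# (shortened to two entries from the full response for illustration).
raw = '''
[
  {"id":"ytc_Ugzr_idjCqqYfP6hcWd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzkZRuFggtb7mFnawx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"}
]
'''

# Index the records by comment ID so individual codes can be looked up.
codes = {rec["id"]: rec for rec in json.loads(raw)}

print(codes["ytc_Ugzr_idjCqqYfP6hcWd4AaABAg"]["policy"])  # regulate
```

Because the model returns one record per sampled comment, indexing by `id` lets the interface join each coding result back to its source comment, as in the table shown above.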