Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Once again, would you like further materials and things you can check out or is …
ytr_Ugz8dQ4_Q…
That’s why Elon went to the government, the president, congress, everyone in con…
ytc_UgzY4-de3…
The evil Elitist cabals will use AI as a tool to blame for the purpose of a plan…
ytc_Ugyb9-qcW…
38:38 the notion that AI is only an existential risk if we let it be is an argum…
ytc_Ugy2Pui57…
This scenario is also based on such elites being able to keep increasingly smart…
ytc_UgzNCeKwz…
Probably a true story and fair interpretation. But the majority of people usin…
rdc_oadi58b
Okay. So actors and writers are worried about AI but milions of people freely gi…
ytc_UgyG8jJ1O…
Same here, I never understood why people were so terrified of ai. I see it as na…
ytr_Ugylthq-c…
Comment
Self driving cars are inherently dumb. Anchor your design to something more reliable, like millions of years of hand eye coordination. Technology should retain an interface that adapts to our biomechanics, rather than inventing a new foundation to build off of, and marketing it to the entire world after, relatively, barely any testing at all.
(Comparing self driving tech, to manual driving biotech) one has 30 years of credible testing, the other is millions. Base your tech designs in the real world and what is known to work, it will better stand the test of time
youtube
2025-12-05T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxisDonWR5EfGguGal4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxPNzGFuvvq_pSZMFV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWoDbKU3UvaLuDIJ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyuUTz-3FPvAKlJjoV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzNglMXN-55zUB8RMV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwpaD_hseQeY5pxWVt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzMzxT-JCgylWRc0tV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBVEDGJbhNRqwrLmF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzY0a11ebebLyCsLmR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzxUQoKQtf5O8AjhU54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
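A raw response like the one above can be parsed and indexed for lookup by comment ID. The sketch below is a minimal illustration, assuming the response is a JSON array of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys (the two rows shown are taken from the response above); the variable and function names are hypothetical, not part of this tool.

```python
import json

# Example payload mirroring the raw LLM response format shown above
# (two of the actual rows, reproduced verbatim).
raw_response = """
[
  {"id": "ytc_UgzNglMXN-55zUB8RMV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxisDonWR5EfGguGal4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_by_id(payload: str) -> dict:
    """Parse a raw model response and index the coded rows by comment ID."""
    rows = json.loads(payload)
    return {row["id"]: row for row in rows}

codes = index_by_id(raw_response)
# Look up one coded comment by its ID.
print(codes["ytc_UgzNglMXN-55zUB8RMV4AaABAg"]["emotion"])  # → fear
```

Indexing by `id` makes the "look up by comment ID" workflow a constant-time dictionary access rather than a scan over every row.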