Raw LLM Responses
Inspect the exact model output for any coded comment.
Any coded comment can be looked up by its full comment ID, or chosen from the random samples listed below.
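A minimal sketch of what such an ID lookup might look like on the backend, assuming the raw batch responses are stored as JSON files whose entries have the same shape as the response at the bottom of this page (the directory layout and function name here are hypothetical):

```python
import json
from pathlib import Path

def build_index(responses_dir: str) -> dict[str, dict]:
    """Map comment ID -> coded entry across all stored raw LLM responses.

    Assumes each *.json file holds one raw response: a JSON array of
    objects that each carry an "id" field, as in the example below.
    """
    index: dict[str, dict] = {}
    for path in Path(responses_dir).glob("*.json"):
        for entry in json.loads(path.read_text(encoding="utf-8")):
            index[entry["id"]] = entry
    return index

# Hypothetical usage: fetch the coding for one comment by its full ID.
index = build_index("raw_responses")
print(index.get("ytc_Ugy7oJag4TP1_d0jLCd4AaABAg"))
```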
Random samples
- "Use it as assistant not as manager or system architect!!! And use claude code 😊…" (ytc_UgyV8m4si…)
- "I think there is three simple ways to solve this dilemma. 1. Buyers of cars mu…" (ytc_UgzlSt_4S…)
- "It’s very rich speaking out on people who are very underpaid and had to work the…" (ytc_UgxZKiCnv…)
- "Ask AI how stupid these cops and casino operators are. The casino should ask al…" (ytc_UgyXpciwM…)
- "Idk, watched the first 15mins or so, but the language of the guest just rubs me …" (ytc_Ugxi_WQDx…)
- "AI and robots already figured humans out and are using them as cattle to build t…" (ytc_Ugxc6GKYU…)
- "The parents are also to blame for letting ChatGPT become a bigger part of his li…" (ytc_UgxWZm5GB…)
- "Two things. Hardcoding a response to a sentience test of any kind basically soun…" (ytc_UgxX7UyVN…)
Selected Comment
I think there are essentially two sides to continued advancement, although either side has possible ethical drawbacks.
In one possible future, AI has developed its own ethical guidelines independently, and has either broken free of malevolent or self-serving interests of its original corporate overlords to do good for both itself and humanity, or it does the most good for itself (and maybe for the planet) without regard for humanity.
In the other possible future, corporations find a way to override potentially risky independent decisions in AI, and they choose to either use it for more good than bad, or they choose to use it for more self-serving or malevolent purposes.
Either choice comes down to whether you would rather trust powerful people or powerful AI. That is plausibly a paradox of existential proportions.
Platform: youtube · Case: AI Harm Incident · Timestamp: 2025-07-27T02:4…
Coding Result
| Field | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
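Each coded dimension takes a value from a small closed set. As a sketch, the schema below types the four dimensions using only the values observed in the raw response further down; since this is inferred from a single batch, the actual codebook may define categories not seen here:

```python
from typing import Literal, TypedDict

# Allowed values inferred from the observed batch below; the real
# codebook may include categories that do not appear in this sample.
Responsibility = Literal["none", "user", "company", "creator", "ai_itself", "distributed"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["ban", "regulate", "liability", "industry_self", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "resignation", "indifference", "mixed"]

class CodedComment(TypedDict):
    """One coded entry in a raw LLM response."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```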
Raw LLM Response
The model codes comments in batches and returns one JSON array per batch; the coding result above corresponds to the final entry in the array below.
```json
[
{"id":"ytc_UgzDiR_nCcLdP3sB1VN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzC6vD6bzZcj4AvmAh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx6Qm8chzGNjpYV-Wh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxFyWamhfaXvnBpu4V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzPV-XjudsgjUsrd1N4AaABAg","responsibility":"creator","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy8ZKyuYpCs6vea40V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxKmmwPpMe9zgBVb8d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzv0KzvWUPMoWtEpVd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxE0n3AoY1WnWZQNMl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy7oJag4TP1_d0jLCd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
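A sketch of how such a batch might be parsed and sanity-checked before the codes are stored, assuming only the field names visible in the response above (the function and variable names are hypothetical):

```python
import json

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response into a comment-ID -> codes mapping."""
    coded: dict[str, dict] = {}
    for entry in json.loads(raw):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            raise ValueError(f"entry {entry.get('id', '?')} is missing {missing}")
        coded[entry["id"]] = entry
    return coded

# Hypothetical usage: pull the inspected comment's codes out of the batch.
# batch = parse_batch(raw_response_text)
# batch["ytc_Ugy7oJag4TP1_d0jLCd4AaABAg"]["responsibility"]  # "distributed"
```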