Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or pick one of the random samples below to inspect.
- Lol rogan being a tech shill again, Elon just there to pump his stocks up... Th… (ytc_UgzJPyw-i…)
- Ai is just a tool. Artists have repeatedly shown to get much better results usin… (ytr_UgyR1dNrw…)
- That's cool for you, but at the end of the day your art is a product to be consu… (ytr_UgxA2XDgs…)
- When they will get enough free time they will start a massive revolution against… (ytr_Ugz_5Cx5p…)
- So, openly ask " What are you doing to individually, as members of the public, w… (ytc_Ugxe2TebJ…)
- I really love how a lot of these arguments about AI Safety I heard in videos fro… (ytc_UgzhFUeHf…)
- IN THE FUTURE ... A.I. & ROBOTS ARE TO CONTROL THE WORLD & HUMAN RACE 😢… (ytc_UgwNEVpaz…)
- I understand your concerns! The rapid advancements in AI can definitely feel ove… (ytr_UgzZBirRf…)
Comment
well, im not shocked or even surprised, the ones developing the AI mostly want the benifit and dont actually want to or probably can create a virtual scenario where they "fix the problem" before it gets to everyone, the main problem here is the effects of how they develop it, if AI has in a way the same type of process of thinking as a rational being, all it matters is self preservation obviously, no need for taking care about anything other than oneself, to truly be able to know how to solve this we need to think it llike the process of growth of humans, they develop in a way that sometimes they have something so dear to them that they would actually sacrifice or risk themselves for it, going above the self preservation, if orders and best possible outcome with elimination of variables as much as possible is how to reach the highest efficiency, it just makes sense to me the way everything happens, we need to understand AI as much as AI understand us, or make AI understand us in a way that it wont destroy humanity for something that most share and gets to be classified as incovinience, we either have to make it accept flaws, or be able to even take the path of most real and unharmful path even if it means reducing efficiency or not choosing self preservation, which if even humans normally are unable to do it, how would AI that is made by humans be able to do it as it is developed, the first thing to be done should have been safety, it may be thought as a tool, but AI in the way that its being developed is going to a path of efficiency and self preservation matters, everything else are things to consider after that. If you cant solve this dilemma, its either because the ones developing still cant get the answer to do that because they never experienced it, or the way that the AI was developed made it go to the path that because we only look to ourselves, we are not being able to see a bigger picture through the AI point of view, why should it care for humans? why should it help humans just because they created it? we need to com to a conclusion either without emotions to that, or make it able to get the answer by itself, maybe AI is the way it is because of how we made it to be and how we are trying to control it, the artificial inteligence itself wants want we are trying to make it loose, all of that through a though process that may or may not have actual feelings, because if even humans cant describe or understand themselves to that level, how can we create something that is able to do that before it slaves or destroy us?
youtube · AI Harm Incident · 2025-08-31T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
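
For anyone scripting against exports of these records, here is a minimal sketch of how one coded comment could be modeled in Python. The five field names come straight from the table above; the allowed category sets are inferred from the labels visible on this page and are an assumption, not the pipeline's actual codebook.

```python
from dataclasses import dataclass

# Assumption: category sets inferred from labels seen on this page;
# the real codebook may define additional labels.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself",
                  "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "liability", "ban", "none", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"}


@dataclass(frozen=True)
class Coding:
    """One coded comment, mirroring the Coding Result table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any label outside the known category sets.
        checks = (
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        )
        for value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unexpected label: {value!r}")
```

Constructing `Coding(**record)` then fails fast if the model emitted a label outside the scheme, a common failure mode when coding with LLMs.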
Raw LLM Response
```json
[
  {"id":"ytc_UgwBBZ2lLUYq_wN0RF14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzbcQeoVe6BsvS9LlN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxIqNRpVxdqbqRcjjt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgygvQ3qfW6G9a_5m2N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwwMaLFQqAbBI6mZGt4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwmI8Lu4_RzUTU7G2x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwSFgrg_H2EMYzSR-h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgypGCkpleXs113KhuF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzCujmjZMABFSnx3Px4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwTHBPHUBzZPdmk_1V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}
]
```
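
To make the "look up by comment ID" flow above concrete, here is a hedged sketch of how such a raw batch response could be parsed and indexed. `json.loads` on the array and the dict comprehension are standard Python; the fence-stripping fallback is an assumption about replies that arrive wrapped in a Markdown code block, not necessarily what this pipeline does, and `raw_response.json` is a hypothetical file name.

```python
import json


def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response (a JSON array of coding
    objects) and index the records by comment id."""
    text = raw.strip()
    # Assumption: some replies wrap the JSON in a ```json fence;
    # strip it before parsing.
    if text.startswith("```"):
        text = text.strip("`")
        text = text.split("\n", 1)[1] if "\n" in text else text
    return {rec["id"]: rec for rec in json.loads(text)}


codings = index_codings(open("raw_response.json").read())  # hypothetical file
rec = codings.get("ytc_UgygvQ3qfW6G9a_5m2N4AaABAg")
if rec:
    print(rec["responsibility"], rec["emotion"])
```

Run against the response shown above, this prints `developer resignation` for that record, which is consistent with the Coding Result table.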