Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The full response I got is: ✅ What It Gets Right 1. "LLM = Predictive Math" Sp…" (rdc_mxfymfg)
- "Talk about manslaughter, cause there is no such thing as self driving car. Human…" (ytc_UgxNw5Tin…)
- "I am not an artist I use ai but not for art. also art's definition is quote from…" (ytc_Ugx4NKoUL…)
- "What the heck are you going to ask the public - who supposedly elected a demente…" (ytc_UgyLKXRq9…)
- "Lol i was told 5 years ago not to use ai for coding , I became an engineer…" (ytc_UgyK8t2Sa…)
- "This isn't a new problem, tesla FSD has been causing problems in America for the…" (ytc_Ugx2kAi0R…)
- "Still anthropomorphising and ascribing all sorts of things to generative AI that…" (ytc_Ugw-oDUYi…)
- "I really don't understand our obsession of developing technologies that make our…" (ytc_UgzTCzZZ9…)
Comment

> So it seems like there is a need for regulation and new laws about the use of AI to prevent people from getting hurt from misusing AI... Mental health needs to be handled by professional doctors, etc. because it's so serious. Perhaps the people who set up fake doctor profiles can be held responsible in some type of way. Just like people who use AI for any bad or criminal thing, including causing harm to the public should be held responsible, just like if they were offline criminals. I hope that the necessary regulations come quickly to avoid people being harmed. So thanks for bringing this situation to our attention. Maybe contact law makers to help regulate and solve this serious issue.

youtube · AI Moral Status · 2025-07-01T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxrIe-UHf7KSWtzDfZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBqqn-mNqQtKstlgF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxrru4ZUKRjsYcMPjh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyBqhB7CyUSeooWVld4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAoXQR3Emic2dssN14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxY4Vv4m0gqKGTyB7J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzRO69FY3prKigvZZZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxkcg2CwyIx1PVyD314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgweoHIbBvuY1pGf1ld4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwm3xHyY5i99SnU_NV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
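The raw response is a JSON array with one record per coded comment, keyed by the comment ID, which is what makes the by-ID lookup above possible. A minimal sketch of parsing and indexing such a response (the two records are copied verbatim from the array above; the key set is simply what those records contain, not a documented schema):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgzRO69FY3prKigvZZZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxY4Vv4m0gqKGTyB7J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# The four coding dimensions observed in the records, plus the comment ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Parse the array and index records by comment ID for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# Sanity-check that every record carries all four dimensions.
assert all(set(r) == EXPECTED_KEYS for r in records.values())

coded = records["ytc_UgzRO69FY3prKigvZZZ4AaABAg"]
print(coded["policy"], coded["emotion"])  # regulate fear
```

The looked-up record matches the Coding Result table above (company / deontological / regulate / fear), which is one way to spot-check that the table was populated from the raw response rather than re-coded.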