Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:
- "No one I have spoken to has been able to legitimately answer the following quest…" (ytc_UgzTMixak…)
- "Okay so? Like sora AI is for people who work at a company who has Yk designs stu…" (ytc_Ugx1x_nLi…)
- "AI only is dangerous for idiots, scammers, Killers, junkys and human- or childus…" (ytc_UgzgvwAm3…)
- "Wow cool ai chat gpt is willing to choose problematic vocabulary. If ai talked a…" (ytc_Ugyb2xUNm…)
- "We already know how this is going to go. AI has already become a crutch for a hu…" (ytc_Ugxw-Jnoq…)
- "I was under the impression that AI models are so small they *can't* reproduce tr…" (ytc_Ugx3rXbnC…)
- "LLMs have exhausted the entirety of the internet and any written works accessibl…" (rdc_nxpsx4y)
- "What if AI art is just companies knowing it’ll get them criticized for them to g…" (ytc_UgyVy8C2a…)
Comment
There are people that want a ai robot future, with autopilot cars and planes, enhanced genes on future breeds, and there are people against all these things starting with AI. But I think this thing cannot be stopped, because we can’t see that force that’s driven the developers and engineers, they’re doing it for fame? Wealth? Or humanity? We don’t know. Ai and robots and autopilot cars have already caused some unfortunate people to lose their jobs, although very few, it is a start. The engineers always say that robots can replace people on some of the works, so they can do more desired works. But that feels more like the elites trying to gaslight poor people. We a facing a scary future, 10 years? 20? What’s gonna happen?
youtube · AI Harm Incident · 2024-08-23T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
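For reference, each coded comment can be represented as a small record type. This is a minimal sketch, assuming the label sets are exactly the values visible in this table and in the raw response below; the actual codebook may define additional categories.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Label sets inferred from the values visible on this page
# (an assumption, not the authoritative codebook).
Responsibility = Literal["developer", "company", "ai_itself", "distributed", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "mixed", "unclear"]
Policy = Literal["regulate", "ban", "none", "unclear"]
Emotion = Literal["fear", "outrage", "approval", "indifference", "resignation"]

@dataclass
class CodedComment:
    id: str                        # comment ID, e.g. "ytc_..." or "rdc_..."
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
    coded_at: Optional[str] = None  # ISO-8601 timestamp, when available
```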
Raw LLM Response
```json
[
{"id":"ytc_UgzGjPc9Fjcp-qPtKLl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBx5DLX4OiqdQ8Qo94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzME1dB0xN5U0z-Vql4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx88mr2KUtKK-G6Byd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzeP-yxPncqVYUjWb54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhKsVJg4nHejBdnBh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzO6dwNl3CAsP4km4x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz9iAnwNRaVZ4yucPF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzbKW6NetzVqto6U5R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx_6dSfQrcLqdlRaal4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
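The model returns one JSON object per comment in the batch, keyed by `id`, so the record for any coded comment can be pulled straight out of the raw response. A minimal sketch of the lookup, assuming the raw response has been saved to a local file (the path and helper name here are hypothetical):

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Parse a raw batch response and index its records by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of per-comment objects
    return {rec["id"]: rec for rec in records}

# Hypothetical usage: look up the coding for the comment shown above.
codings = load_codings("raw_llm_response.json")
record = codings.get("ytc_Ugx_6dSfQrcLqdlRaal4AaABAg")
if record is not None:
    print(record["responsibility"], record["emotion"])  # distributed resignation
```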