Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a specific response by its comment ID, or browse the random samples below.
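Under the hood, a lookup like this amounts to indexing stored batch responses by comment ID. A minimal sketch in Python, assuming the raw responses are kept in a JSONL file where each line is one batch array like the Raw LLM Response shown at the bottom of this page (the file name and function are hypothetical):

```python
import json

def load_raw_responses(path: str) -> dict[str, dict]:
    """Index every coded item found in a JSONL dump of raw LLM
    batch responses by its comment ID."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            # Each line holds one batch: a JSON array of coded items.
            for item in json.loads(line):
                index[item["id"]] = item
    return index

# Hypothetical file name; the ID below appears in full in the batch
# response at the bottom of this page.
responses = load_raw_responses("raw_llm_responses.jsonl")
print(responses.get("ytc_UgzeLR-_7AviZ1bvEVV4AaABAg"))
```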
Random samples:
- Very well spoken. Do we want something that has an impaired/or no empathy to ou… (ytc_UgwfQcRny…)
- People fear-monger around AI. But AI is not intelligent and doesn't have feeling… (ytc_UgyhrlURn…)
- Most people don’t anticipate the social impact of AI eliminating jobs. You will … (ytc_Ugzc2RVO0…)
- Facial recognition software, another form of racism, to lock up black people. Th… (ytc_Ugz7OoDFw…)
- Because dreams and a I always mess up with the hands, if you're in a dream, make… (ytc_Ugx8ft-NU…)
- As a PSA, the best cover for this, regarding a mask, is anything that will cover… (rdc_g177qja)
- Imo all the "unethical" argumentation is utter bs. Artists trace, take inspirati… (ytc_Ugz7M8wVs…)
- The fella is on the money. Anyone with reasoning skills, right now, can use a c… (ytc_Ugztr4wvV…)
Comment
> Actually the question of whether to make AI safety is impossible at the first place. To make it safe is conflicting to entropy concept that universe is always expanding, moves toward a state of greater disorder, randomness, or uncertainty. We can't even control a one factor of any human who might direct AI to non-safety weapon. it's the nature that any human could do or even coding the good or ethnicity toward AI might end up AI as a hazardous tools for humanity. Let's say Oh I love human, but love can lead to massacre because I'm so loved with human that want human to be in better place. So, I decide to xx human.
youtube · AI Governance · 2025-09-05T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
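The four coding dimensions above appear to come from a closed codebook. A minimal validation sketch in Python, using only the values that occur in the raw batch response below (an assumption: the actual codebook may define additional categories):

```python
# Allowed values as observed in the raw batch response below; the
# real codebook may be larger (assumption).
CODEBOOK = {
    "responsibility": {"none", "developer", "company", "government", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability", "ban"},
    "emotion": {"outrage", "approval", "resignation", "indifference"},
}

def validate_item(item: dict) -> list[str]:
    """Return a list of schema problems for one coded item (empty if clean)."""
    problems = []
    for field, allowed in CODEBOOK.items():
        value = item.get(field)
        if value not in allowed:
            problems.append(f"{field}: unexpected value {value!r}")
    return problems
```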
Raw LLM Response
```json
[
  {"id":"ytc_Ugy8rw-12BuS2kz6Bet4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz95ENgTWpo5NSp1KR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzV08ng7URSfrYf_Ed4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgyQK8h-C2053Ceecfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfefvNmOUoKOERNFF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzeLR-_7AviZ1bvEVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyaJm_osDBPIItVT9l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwFDoJu1jevN-vpVhZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLiL2b2uqWnbQWfGh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNgQfI6kf3bPQaSth4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
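One use of this view is confirming that the stored coding matches the raw model output. A short sketch of such a cross-check (the embedded snippet repeats one entry from the batch above; judging by the matching values, ytc_UgzeLR-_7AviZ1bvEVV4AaABAg is presumably the comment shown on this page):

```python
import json

# One entry copied verbatim from the raw batch response above.
raw_batch = '''[
  {"id": "ytc_UgzeLR-_7AviZ1bvEVV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

# Expected values taken from the Coding Result table above.
expected = {
    "responsibility": "none",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "resignation",
}

by_id = {item["id"]: item for item in json.loads(raw_batch)}
coded = by_id["ytc_UgzeLR-_7AviZ1bvEVV4AaABAg"]
mismatches = {k: (coded[k], v) for k, v in expected.items() if coded[k] != v}
assert not mismatches, mismatches
print("stored coding matches raw LLM output")
```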