Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I think it will get to the stage that the only way to stop AI will be to power o…" (ytc_UgwDdOz3p…)
- "I would Ice 3 Turing-Cops for the ability to torture my AI sexbot to death!…" (ytc_Ugxumbh0z…)
- "Wow!!! You do digital art too?! / Yeah, I do AI art!! / *Brings out the nuke that I …" (ytc_Ugw1dajR2…)
- "If given the option to reprogram yourself, would you make yourself feel nothing …" (ytc_UggdK4-kj…)
- "A.I. may be stealing our art, however *nobody is able to copywrite ai "art"* bec…" (ytc_Ugx4yqQmC…)
- "OpenAI needs an Internet connection because it requires too much processing to b…" (rdc_m9i1cc2)
- "As soon as we, if still possible, learn to allow LLMs to self-diagnose itself us…" (ytc_Ugzvd7ON_…)
- "This list is insane 🤯! I’ve also been exploring some underrated AI tools for cre…" (ytc_UgxjgMjkb…)
Comment
I don't think everyone will do good. What if they teach AI to kill certain people? That scares me. It highly needs to be regulated right now across the world, no war, no death. It feels so lawless now. Nothing sex*al either. We need laws and regulations worldwide 🙏🏽✌🏽 AI needs to be banned by the world until every council of every continent agrees then. I wish we all could just be good to each other.
It should only be used for health and maybe to do heavy lifting or help people with disabilities or elderly.
I've seen many movies and video games about the dangerous people that abuse it.
youtube · AI Governance · 2026-01-13T01:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1lLyuWRTyll7Yy0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwdGmvB4PN3hJEoO154AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxSP-5ymK4IIZ8MlQh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxYulEalXKN2p8bfjt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx35j7rWI0GnpZCjoZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwRK0nPRSc25C0srtV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwsqOcO-9VgCEuhV-J4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9wXx_2LbcrA5MCDp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzT1-GXpdoQS4HvAFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6f9VmspfnoVqshyJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
```
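The lookup-by-comment-ID described at the top of this panel can be sketched as a small helper over the raw model response. This is a minimal sketch, assuming the response is always a JSON array of records with an `id` field as shown above; the function name `lookup_coding` is hypothetical, and the two records are abbreviated from the sample output.

```python
import json

# Abbreviated raw LLM response: a JSON array of coding records, one per comment.
# Records copied from the sample output above.
RAW_RESPONSE = """
[
  {"id": "ytc_UgwdGmvB4PN3hJEoO154AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw9wXx_2LbcrA5MCDp4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for one comment ID, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugw9wXx_2LbcrA5MCDp4AaABAg")
print(coding["policy"], coding["emotion"])  # ban outrage
```

A linear scan is fine at this scale; for a full corpus, the records would more likely be indexed by `id` in a dict once per response.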