Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
They need to re-do the safety measures around morarlity and killing anything. The AI should only be used to gain information, it shouldnt be able to manipulate the information on its own without an active human user that can overide and shutdown the AI's actions. I understand the whole idea is autonomy, but unless there's proper fail safes put into place, then AI shouldnt be given the power to act proactively like this.
youtube · AI Harm Incident · 2025-09-11T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwK-au70F1BsVfTM3J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugys0vulou7oAA5j9K54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzWJPRYcImshKRiEdJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxewr6eIj6pAjSWEap4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzNoSTdFq4Qe6_bLJN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgweWrY_0dBqkjl_m7R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3SMmYhiCsNIxCbLR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzFf0Foa5oQhCutxOZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw1blCldt9jCKuAe0V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"approval"},
{"id":"ytc_UgzwQ_jmfd3U0csOGvJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
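The raw response above is a JSON array of per-comment codings, one record per comment ID, with one value for each of the four dimensions shown in the result table. A minimal sketch of how such a response might be parsed and validated before storing the codings (the allowed category values below are inferred from the samples on this page, not from the actual codebook, and `validate_codings` is a hypothetical helper):

```python
import json

# Allowed values per dimension, inferred from the table and JSON above.
# Assumption: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it has an "id" and every dimension holds an
    allowed value; anything else is silently dropped.
    """
    valid = []
    for rec in json.loads(raw):
        if "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example with one well-formed and one out-of-schema record:
raw = (
    '[{"id":"ytc_a","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_b","responsibility":"government","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)
print(validate_codings(raw))  # only the "ytc_a" record survives
```

Validating against a closed vocabulary like this catches the most common failure mode of LLM coders, which is inventing labels outside the codebook, before bad values reach the database.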