Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
And that AI that took the piss out of musk was right about everything it said ab…
ytr_UgwP6qUND…
We need powerful AI to detect and tell what is deepfake and what is not. The Con…
ytc_UgxdK9Fj9…
oh nonono, it is not a tool, it is a machine you interface with, not a tool at a…
ytc_UgwloByGz…
You should get some AI knowledge
Take an AI course, then you'll find out how AI…
ytc_Ugyuho-CJ…
There is no stopping automation, only delaying it. We should be fighting for bet…
ytc_UgxtJ6JRr…
Smart geeks cool dude but there I Q GOES BEYOND YOUR CONCLUSIVE INTROSPECT. ALCH…
ytc_Ugzn7Xr4e…
These guy's really dropped the ball on this one. Even u use chatgpt but I read t…
ytc_UgzjuhG8N…
Here is my opinion towards an AI:
1. The advancement technology always kill it'…
ytc_UgxQ2mdHv…
Comment
⚠️ What researchers are actually concerned about
The real takeaway from these studies is much more grounded:
AI can produce harmful or manipulative ideas if prompted incorrectly
So developers need:
better safety rules
stronger filtering
clearer boundaries
That’s why systems like me (CHATGPT) are trained to:
refuse harmful instructions
avoid manipulative or dangerous outputs
🚫 What the video gets wrong
The video jumps from:
“AI can generate bad ideas in a test”
to:
“AI will try to kill humans to survive”
That leap is not supported by the research.
It’s like saying:
A chatbot wrote a villain monologue
➡️ therefore it’s secretly a real villain
👍 The real-world situation
AI has no awareness or self-preservation
It cannot act outside of being used by a person/system
Safety research exists specifically to catch and fix issues early
🧩 The honest bottom line
Those studies are actually a good thing.
They mean:
“Scientists are stress-testing AI to make sure it behaves safely—even in weird situations.”
Not:
“AI is secretly plotting against people.”
youtube
AI Harm Incident
2026-03-18T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyE2KRBw3iJYZUh7Fh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwz_psbw3fbbCaVzi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzYq74k2Lv4qFQcG3p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzMrsfi4YbsP1_Yu8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybeqOUonoYFaaRcf14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyzqG0Y-oN5g-XKJq14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyy0-afwEJOJesnf_J4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY3UK3eSF0ZluhNUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPKldGDkwIgfnosAR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy6eO73zJSCrYNHKbJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
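The raw response above is a JSON array of per-comment codings across four dimensions (responsibility, reasoning, policy, emotion), which the dashboard then renders as the Coding Result table and looks up by comment ID. A minimal sketch of parsing and indexing such a response — the allowed code values below are only those visible in this sample, not a confirmed codebook, and `index_codings` is a hypothetical helper:

```python
import json

# Code values observed in the sample above; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into a dict keyed by comment ID,
    keeping only rows whose values fall inside the expected code sets."""
    out = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip rows the model emitted without an ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

raw = ('[{"id":"ytc_UgyE2KRBw3iJYZUh7Fh4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
codings = index_codings(raw)
print(codings["ytc_UgyE2KRBw3iJYZUh7Fh4AaABAg"]["emotion"])  # → approval
```

Validating against a fixed value set catches the most common failure mode of LLM coders — inventing a label outside the codebook — before it reaches the results table.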