Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
⚠️ What researchers are actually concerned about
The real takeaway from these studies is much more grounded: AI can produce harmful or manipulative ideas if prompted incorrectly. So developers need:
- better safety rules
- stronger filtering
- clearer boundaries
That’s why systems like me (ChatGPT) are trained to:
- refuse harmful instructions
- avoid manipulative or dangerous outputs

🚫 What the video gets wrong
The video jumps from: “AI can generate bad ideas in a test” to: “AI will try to kill humans to survive”. That leap is not supported by the research. It’s like saying: a chatbot wrote a villain monologue ➡️ therefore it’s secretly a real villain.

👍 The real-world situation
- AI has no awareness or self-preservation
- It cannot act outside of being used by a person/system
- Safety research exists specifically to catch and fix issues early

🧩 The honest bottom line
Those studies are actually a good thing. They mean: “Scientists are stress-testing AI to make sure it behaves safely, even in weird situations.” Not: “AI is secretly plotting against people.”
youtube AI Harm Incident 2026-03-18T14:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
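
The table above is one coded row for the displayed comment. As a rough sketch, it could be represented and validated like this; note that the dimension names and allowed values below are inferred only from the codings visible on this page, not from the project's actual codebook:

from dataclasses import dataclass

# Allowed values inferred from the codings visible on this page
# (an assumption, not the project's authoritative category scheme).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-27T06:26:44.938723"

    def validate(self) -> None:
        # Reject any dimension value outside the observed category sets.
        for dim, allowed in ALLOWED.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"unexpected {dim} value: {value!r}")

# The coding shown above passes validation:
CodingResult("developer", "consequentialist", "regulate", "approval",
             "2026-04-27T06:26:44.938723").validate()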
Raw LLM Response
[ {"id":"ytc_UgyE2KRBw3iJYZUh7Fh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugwz_psbw3fbbCaVzi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzYq74k2Lv4qFQcG3p4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzMrsfi4YbsP1_Yu8N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgybeqOUonoYFaaRcf14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyzqG0Y-oN5g-XKJq14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyy0-afwEJOJesnf_J4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwY3UK3eSF0ZluhNUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwPKldGDkwIgfnosAR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy6eO73zJSCrYNHKbJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"} ]