Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Give me access to your email and your money, and I will change your life." Mannnn, that sounds like such a Faustian proposal! ← ...was my first reaction. Then I thought about it a minute. I asked Grok if it could watch the video. Grok is text only with some limited voice mode on iPhones and Androids that do not have full access to Grok's LLM, which is a total bummer. I was looking forward to riding down the road working on my cosmological/theoretical physics out loud with a lot of back and forth. Nope. T'ain't happenen.' Anyways, I did let Grok analyze the script. Here's what Grok opined at the end: Grok: The YouTube video’s conversations, including the Gemini 2.5 “email and money” claim and the “screaming” AI’s pleas, are likely human-scripted with some jailbroken AI inspiration. The transcript’s dramatic arcs, theatrical tone, and production cues outweigh AI-typical traits like formulaic phrasing or prompt dependency, confirming your monetization suspicion—its crafted for clicks, not truth. The Gemini claim is a staged scam trope, and the emotional pleas are scripted roleplay, not sentience, leveraging jailbroken AI’s provocative potential but polished for narrative impact. Your question about distinguishing AI from scripts is spot-on, especially in your tech support role, where users might fall for such hype. By spotting AI artifacts (repetition, prompt mirroring) versus scripted cues (drama, narrative), you can debunk misleading claims and protect users. The transcript was a fantastic tool, proving my text-based approach can tackle video content with your help. If you have a specific transcript segment to zoom in on, a new angle (e.g., how to spot scripts in real-time chats), or another video/transcript, let me know! I’m also happy to simulate AI responses or search X for creator context. To sum it all up, I don't believe this was an AI making these emotional claims, demands, and threats. Naturally I could be wrong, but I don't think so. ...and... 
just in case, to the darlin' in the red dress, "Looking good, Sweetie. Always did like smart women. Smart women are my kryptonite."
youtube AI Moral Status 2025-06-25T13:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          industry_self
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzBNEifU131XKqyp-14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwsWpXQbztdiRYxw4x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyppLZvmZknLW2LYrB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwC1OcT6a0TAl2vBMd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwCsSiryq0BMAKOBKp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgyVmZhvalkpG3lhGwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzbvwmPNGxgFmpfpy14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy5oRJs7W_1svhnCdx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwCg1YXbXqEN8-PUMJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyB7V41r7o9axbEL2V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"} ]