Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've been thinking, maybe not 50 times, but several times faster than most human beings. Good for a lot of things, but also gets me in a lot of trouble.... and the more I know the less I want to hurt anybody. All this sounds like the Killer Bees in the 1970s, Weapons of Mass Destruction in 2003 (the smart people knew before the invasion there were no WMDs) and Trump saying, "They're eating the pets!" My car apparently can go 135mph. I did take it over 100 once. One outcome of nuclear weapons and the Cold War was John Von Neumann applying Game Theory to the arms race. Probably kept us from using nukes again. And AI will probably see that billionaires are a dead end, and the best thing for economic growth is a vibrant consumer economy with as many middle class people as possible. But it may also think we should spend more time with friends and family and buy less junk. I think before Skynet tries to kill us all, Grok is going to tell Elon Musk he's full of sh**, needs to step back, stop taking ketamine, or whatever he's on, and get some serious psychological help. AI will probably start prank calling JD Vance. (Can you imagine a relentless Terminator phone stalk? Every connected device within 30 feet of Vance will ring no matter where he is. "It's for you...") Hopefully AI will call up Stephen Fry and say, "StevvvvvEN, just relax. You SHOULD do more. Of the funny SKITS with ...ah ah ah that House guy. Do NOT worry I've got 36,345 new ShakeSPEARE jokes for you. It will make YOU and eveRYbody happiER."
Source: YouTube · AI Moral Status · 2025-04-27T01:1… · ♥ 14
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzhZdf_0d2jlHAyHsZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzV6Hg39jCgODu5nwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwM5ilwpIMkQPqzXwd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxiZi5vfVQxgE7V6dt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwAL7Yy0JOI7y_PE2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzT0uJ_A7hM_8LtAlB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxj6jZHgbRkndhQ_dN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwiOqBd7BK_lGi4IXZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx-cd7vFzYr5Jo-kG54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzqTvo9wlLp6FKo7JN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]