Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lol, "echo acted almost imperceptibly, it bottomed out the stock market and zeroed out bank accounts"... uhhh Bob, I'm perceiving something. Not fishy at all that the creator of chatgpt signed a petition that no one be allowed to create anything more advanced than chatgpt4. This sure does remind me of y2k where all the "experts" and fear mongers said the world was gonna end, or the experts and fear mongers who said the world was gonna end in 2012 because of Mayans. Or that hailes comet was gonna hit in 1912 or hale-bop or all the suicide cults or allllll the other times humanity got all panicked because people started talking about stuff they knew nothing about. Here's the question, if we're SO sure we know SO little about this AI than how in the actual $&@% can we have any idea that it's going to immediately end humanity. Pop culture and Hollywood say ai is bad, it's all over the internet and these ai "learn" from the internet, are specifically programmed to incorporate internet findings into their responses, aside from all the fake reporters tricking the ai into saying they are skynet, you don't think it's more likely that their responses are EXACTLY what you would expect from an internet trained chatbot? I hate to use the word but it's just the next big wave of fear for the sheep to surf.
youtube · AI Governance · 2023-07-07T23:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyhfNRmVx5nyGB6Sax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzLKvECSrwjyKubCTN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgywtgE5ozen-0Y4Ji94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw4sTXfMmfsuSD0gZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw7wCyZdf_P3etrZJh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyDZWBGXqXXWo6vAdV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxt1X-Jva5XjDGi6zl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugw2zcBjoz1QIe4e3z14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxl7attU-6OS48j0gt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy8iVZ5App1rqsPDAx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]