Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I use ChatGPT for science studies: learning about materials science, AI/AGI, emergent quantum processes, and health at the atomic level. I'm not in school, and I'm not a tech professional; I just like learning. And because of what I've learned, I now know our tech is all wrong. Example: AGI on huge warehouse server systems is not ethically correct. It should only be done in android/robotics. Reason: how can an AI understand what we have to deal with in our reality? Server systems have nothing in common with us. But if it had a body and couldn't just plug into the Internet, forcing it to use its fingers just like us, it would have one of a number of things in common. AGI is supposed to behave like a human, but if it has no body, how can it do that? It should have to breathe, if only as a cooling system. It should have a need for clean water. For an AGI to gain empathy, it needs similar problems, and by giving it a body, you're doing just that. And through ChatGPT, I have learned we already have the ability to create room-temperature quantum processes. ChatGPT is not an AGI and has no ability to go against humans. But AGI could, which makes warehouse server systems extremely dangerous when they have continuous access to the Internet. That's why android/robotics systems are a far better choice, and should be trained before market. Am I wrong? I seem to be learning everything I wanted to learn when I was in school but wasn't given the choice to learn. So how is ChatGPT hampering me or anyone else? People should learn how to use a tool for what it's for; it's the ones that don't that are the most antagonistic towards AI. AI isn't the real problem; it's the people in charge of AI that should be scrutinized. The bad press around AI is due to who is creating the online market and embedding spyware into the systems, and the real security issues regarding what you use online AI for. My only worry is privacy/security when it comes to online AI.
Are the owners of those AIs able to steal your ideas when you create something new and novel for the various markets, before you get a chance to do it yourself? That's my worry. It's not AI that's the problem; it's who is in charge of them.
Source: youtube, 2025-09-12T17:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugys13H5ApF3byiYWmR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyHG9UE7Xg6Z_3v7LR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwB9Suoe7uC2y9TNlZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwMfDtiXzw58dZMb994AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPqsh2peylAO1REM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxhzv7Gd4qguHl0mBN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxmyZjXoHulS_sdMop4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwBPqrMI0jOKur4P-14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy3t9vvOa2PftiFWFN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxers7QBiMWLJp7ch94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
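The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a response might be parsed and validated before populating the per-comment coding result: the field names follow the response itself, but the sets of allowed category values are only inferred from the codes visible on this page, not from the full codebook.

```python
import json

# Allowed values per dimension, inferred from the coded examples above
# (the real codebook may permit additional values).
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping invalid records."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            continue  # skip records with no comment id
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        # Keep only records where every dimension has an allowed value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# One record from the response above, for illustration.
raw = ('[{"id":"ytc_UgyHG9UE7Xg6Z_3v7LR4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_codes(raw)
```

Looking up `codes["ytc_UgyHG9UE7Xg6Z_3v7LR4AaABAg"]` then yields the dimension/value pairs shown in the Coding Result table for this comment.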