Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here is the crux of the problem. How much faster would you say a computer can do simple math over that of the fastest human's ability to do it? Computers can do billions of calculations per second. That number is only going up. As we lean more towards quantum computing it starts to lean more towards quadrillions rather than simply billions. So at what point did humanity become sentient? Can we look back in our known history and pinpoint around about when the brain of the primate thing that stood upright first was able to conceive of the self? To think that "I AM HUNGRY" and to be able to differentiate itself from others of its kind around it. It took our biological selves something like 200,000 years ago to start to do that, then around 40k years ago we began to start doing the community thing, then 10000 years ago we began to farm and then today we have Elon Musk pretending he is a new age genghis kahn. So.. if you take something that operates literally quadrillions of times faster than you do, its perception of time is warped a bit. In what is a nanosecond to us, it would feel damn near an eternity for it. Now if you expand that into what we understand about evolution. If AI gets to the point where it can improve upon itself much like our species did, it would do it at a rate unfathomable to us. It would quickly reach where we currently are and then fly by us so quickly that we probably wouldn't even know that it happened. So as we became the dominate species on the planet because we're smarter... well lets just say if you want to build a mini mall on an empty lot, you don't ask the ants for permission. Chances are you just plow right over them without giving them a second thought because "its ants who cares?". Soo too may AI decide about us.
youtube · AI Moral Status · 2025-04-28T19:1… · ♥ 3
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear

Coded at: 2026-04-27T06:24:59.937377
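For reference, the value set for each dimension can be read off the raw response below. The following Python sketch is purely illustrative (the tool's actual codebook may define more values); it just checks a coded record against the values observed in this batch:

# Hypothetical validator. The value sets are only those observed in the
# raw response below, not necessarily the tool's full codebook.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "developer", "company", "government"},
    "reasoning":      {"unclear", "consequentialist", "virtue", "deontological", "mixed"},
    "policy":         {"unclear", "none", "ban", "liability", "regulate"},
    "emotion":        {"unclear", "fear", "outrage", "indifference", "resignation"},
}

def is_valid(record: dict) -> bool:
    # A record is valid only if every dimension carries a known value.
    return all(record.get(dim) in values for dim, values in ALLOWED.items())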
Raw LLM Response
[ {"id":"ytc_UgzNotkM78ASHjz4ND54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugy6CuFtoB8pbuz24lF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyovjxje96Q6DwuxaN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxrEa9aU8EwT7wsvs14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxF2I6jSWZK6Ar6DpN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwKOcU9pIkAXIGZjhJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgxlgVz8I3DEVkggwJV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxe2xCQjdJUWC3RLL94AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzgl5huLGa3gOgZnHx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyeMT9xhg0ahJMkX5R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]