Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sentient has a very strict definition, and the only requirement is to perceive or sense things. That's it. We have AI that does that already. We have for a long time. Whether or not they're _conscious at a human level_ is another argument. More than likely they're not. You would think journalists would _at least_ use the correct words. Look, we're retreading old ground again. When AI was all the craze a few decades ago, people were coming out with outlandish claims like, "It's alive," and, "Tomorrow we'll all have our own Rosie the robot maid!" Didn't deliver. Now we're starting all over again. Overpromising and failing to deliver is what led to the AI winters and slowing progress on it. The only thing these experiments prove is maybe we should be testing whether _humanity_ is sentient.
youtube AI Moral Status 2022-07-25T22:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugwx461trqOzmqbBO254AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwifpGxb6GpZ6KMNat4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwO4Tjkn5_4QUDBUEZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz8Mbjl6LNmKvmCkyd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz2P8Oei_aB2li_SUd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
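The raw response is a JSON array with one record per comment, keyed by comment `id` with four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response might be parsed and sanity-checked (field names come from the response above; the key check itself is an assumption, not part of the tool):

```python
import json

# Raw LLM response, truncated here to two of the five records shown above.
raw = '''[
 {"id":"ytc_Ugwx461trqOzmqbBO254AaABAg","responsibility":"none",
  "reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwifpGxb6GpZ6KMNat4AaABAg","responsibility":"company",
  "reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]'''

# Every record is expected to carry exactly these fields.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = EXPECTED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')!r} is missing {missing}")

# Index by comment id so the coding for any one comment can be looked up.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_Ugwx461trqOzmqbBO254AaABAg"]["emotion"])  # approval
```

Indexing by `id` mirrors how the inspection view joins each coding result back to its source comment.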