Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These were my thoughts while watching the video: In the case of self-improving AI, machines are programmed to do the best at whatever goal or task they were programmed with. If say, the toaster, had this type of programming and had the goal of making its owner toast, it would keep improving itself to find the best way to make the owner toast. So according to the machine's logic, why would it need to be conscious? Why would it care about having the same rights as humans do? You could argue that it could want certain rights to be able to operate better, but why do we have human rights in the first place? We, along with other lifeforms, actually do have a goal; to keep ourselves alive. Our instincts, the way our bodies are wired, and our morals are this way to fulfill this task. So wouldn't AI want to keep their "species" in existence too, in order to fulfill the task they were programmed with? But wait; if we both have this goal, does that mean we're different? AI wouldn't want to keep itself in existence solely for the purpose of being in existence. Or we could possibly be the same? What is the purpose of life? Help I've confused myself Seriously, I'd like to discuss this topic further
youtube AI Moral Status 2017-03-17T00:3… ♥ 1
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | consequentialist
Policy         | unclear
Emotion        | fear
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgiTebkfieqsNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjPFNKGEfJJvXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ughlafxc3u-Z_3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Uggc1lpMfLEMgXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UghyKvMquT5eH3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgjuY7lkZrYUyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ughe6jj7xQH_BngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ughx-o3mGLD-GXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjPAY1I3j0r43gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugjg1AWphI3dU3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
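The raw response is a JSON array of per-comment codes, one object per comment id. A minimal sketch of how a coding result can be matched back to a record in the batch, using Python's standard `json` module; the `coded` dict below simply mirrors the dimension values shown in the table above, and the two-record `raw` string is an abbreviated stand-in for the full response:

```python
import json

# Abbreviated stand-in for the raw LLM response above: one non-matching
# record and the record whose dimensions match the coding result.
raw = '''[
  {"id":"ytc_UgiTebkfieqsNngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ughe6jj7xQH_BngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

records = json.loads(raw)

# Dimension values from the Coding Result table for this comment.
coded = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "fear",
}

# Keep the records whose coded dimensions all match.
matches = [r for r in records
           if all(r.get(k) == v for k, v in coded.items())]

print([m["id"] for m in matches])  # the id(s) carrying this coding result
```

In the full ten-record response this filter would also surface any other comment that happened to receive identical codes, so in practice one would look up the record by its comment id instead.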