Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like if an A.I. was designed to, for example, make toast it wouldn't have any inherent drive to live. This would change, however, if it had an accurate enough picture of the world to realize that by being shut off, it could no longer make toast. It may even, depending on it's intelligence, internet access, and software access do things like: manipulate their human's metadata to make ads for toast to show up, subliminally increasing craving; mess with their dietary plan to include more toast; or even alter the dietary plan so as to have less carbs throughout the day, creating a craving not regulated by the dietary plan, inspiring the human to make unregulated binges of things like, for example, toast. Then you get into the real hairy stuff. The toaster may go as far as simulating human emotion and feeling, causing the human to empathize with it as a sentient human-like being, and make it therefore less likely to be disposed of or shut off. All these problems could be presumably solved of course, with clever coding. That is, until a virus is introduced...
YouTube · AI Moral Status · 2017-02-25T06:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgjR2zO_1LwfgXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UggOs3HwjLeo6HgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UggzjEvQA-SVuHgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ughj52dn57v5_XgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UghQ9UQVYlM32ngCoAEC", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgjIXkiz05yonXgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UghVxTy-agwO-HgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UghVIe6nF4TwM3gCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugh_UzizPwht13gCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugjn9CpVjJQB5XgCoAEC", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
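A batch response like the one above can be turned into per-comment coding results by indexing on `id`. The sketch below is a minimal, hypothetical parser (the sample ids and the "unclear" fallback are assumptions, chosen to match the defaults shown in the Coding Result table); it is not the tool's actual implementation.

```python
import json

# Hypothetical sample mirroring the batch format above; these ids are made up.
RAW = json.dumps([
    {"id": "ytc_example1", "responsibility": "none",
     "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
    {"id": "ytc_example2", "responsibility": "developer",
     "reasoning": "consequentialist", "policy": "industry_self",
     "emotion": "approval"},
])

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(raw: str, comment_id: str) -> dict:
    """Index the batch response by id and return one comment's labels,
    defaulting every missing dimension (or a missing comment) to "unclear"."""
    by_id = {rec["id"]: rec for rec in json.loads(raw)}
    rec = by_id.get(comment_id, {})
    return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}

print(codes_for(RAW, "ytc_example2")["policy"])   # industry_self
print(codes_for(RAW, "ytc_missing")["emotion"])   # unclear
```

Falling back to "unclear" rather than raising keeps the display working even when the model omits a comment or a dimension from its response.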