Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
“AI wants to get rid of people.” But why would it? There are infinite galaxies a…
ytc_UgyByPaut…
Let's act like we got brains people. Facts that happen in life: pedestrians are …
ytc_Ugws1Z9j7…
I have never met an ai bro who isn’t completely tasteless :/ I have see. People …
ytc_UgxZQL4bY…
We will be apes flinging mud at stealth fighter bombers in comparison as militar…
ytr_Ugx53gVQR…
Jesus, I just tried it. Now I am really confused. In short, AI is communicating …
ytc_Ugxe_hxY4…
Yeah but it feels so good to tear someone down than to do that hard work to lift…
rdc_deuis28
F'k AI hard and up its a$$. It's going to destroy humanity. Boycott it with re…
ytc_UgwlMS1ah…
Carmak stopped working on games to work on AI, what did you think he would say?
…
ytc_Ugy8GZpXL…
Comment
I feel like if an A.I. was designed to, for example, make toast it wouldn't have any inherent drive to live. This would change, however, if it had an accurate enough picture of the world to realize that by being shut off, it could no longer make toast. It may even, depending on it's intelligence, internet access, and software access do things like: manipulate their human's metadata to make ads for toast to show up, subliminally increasing craving; mess with their dietary plan to include more toast; or even alter the dietary plan so as to have less carbs throughout the day, creating a craving not regulated by the dietary plan, inspiring the human to make unregulated binges of things like, for example, toast. Then you get into the real hairy stuff. The toaster may go as far as simulating human emotion and feeling, causing the human to empathize with it as a sentient human-like being, and make it therefore less likely to be disposed of or shut off.
All these problems could be presumably solved of course, with clever coding. That is, until a virus is introduced...
youtube
AI Moral Status
2017-02-25T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgjR2zO_1LwfgXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggOs3HwjLeo6HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UggzjEvQA-SVuHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ughj52dn57v5_XgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UghQ9UQVYlM32ngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjIXkiz05yonXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UghVxTy-agwO-HgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghVIe6nF4TwM3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugh_UzizPwht13gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugjn9CpVjJQB5XgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}]
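A minimal sketch of how a raw coding response like the one above might be parsed and validated before results are stored. The per-dimension vocabularies here are inferred from the sample values shown in this response; the actual codebook may differ, and `parse_coding_response` is a hypothetical helper, not part of the tool itself. Malformed JSON yields an empty result, which would leave every dimension recorded as "unclear", as in the Coding Result table above.

```python
import json

# Category vocabularies inferred from the sample response above;
# the real codebook may define more or different labels (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "industry_self", "liability", "regulate", "ban", "unclear"},
    "emotion": {"approval", "indifference", "resignation", "fear", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into validated rows.

    Returns [] if the JSON is malformed, so the caller can fall back
    to coding every dimension as "unclear" for the affected comments.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []
    validated = []
    for rec in records:
        row = {"id": rec.get("id", "")}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            # Out-of-vocabulary labels are coerced to "unclear" rather than kept.
            row[dim] = value if value in allowed else "unclear"
        validated.append(row)
    return validated

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
print(parse_coding_response(raw))
```

Coercing unknown labels to "unclear" (rather than rejecting the whole batch) keeps one stray label from discarding the model's other valid codes.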