Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a response by its comment ID, or pick one of the random samples below.

Random samples
- "I solved global warming, people don't need ai for that it's already done. Interv…" (ytc_UgyJddGl7…)
- "dont worry my friend, they already made robot the size of a mosquito a few month…" (ytr_Ugz63-qYn…)
- "I mean a lot of this has to do with consent as well. People who put glaze over t…" (ytc_UgyypKOMZ…)
- "AI artist are nothing more than costumers thinking they can serve their leftover…" (ytc_UgyBpXH8_…)
- "Big companies are talking a complete positive note on AI. Videos of such are tal…" (ytc_UgyIV6UTK…)
- "I HATE AI FART, BOOOOOO / seeing old paintings from the masters ages ago, makes …" (ytc_UgxMdxWkg…)
- "I would do absolutely nothing. At least in context for me. No one can recreate m…" (ytc_Ugxj6j-9s…)
- "If the only criterion , by which we allow ai in the work field is profit and dol…" (ytc_UgynqU1AF…)
Comment
The idea that they will only have what they are programmed to have, and thus completely in human control is a bit of a narrow minded idea, because already there are examples of pseudo smart programs developing themselves, such as Google Translate. The engineers and programmers remade it, but didn't predict the fact that the new version of it would create its own original language, which it then uses as a proxy, whenever it's translating between two languages that it hasn't done before. It turned out to be a really effective way of conserving as much information as possible in the translation, something that a simple dictionary translation can't do. It's a simple example of a program developing itself. Does this mean that AI will make emotions for itself? No, but what it does heavily suggest, is that we won't be able to predict what will happen. If an AI comes to the conclusion that to accomplish some given task, the most efficient method is to program itself with such concepts, it will do so, and after that, we'll have to reassess the idea that it's just a tool.
Platform: youtube · Video: AI Moral Status · Posted: 2017-02-24T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
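The four coded dimensions above can be checked against the value sets that appear in this dataset. A minimal validation sketch follows; the category lists are inferred from the sample output on this page and may be incomplete relative to the full codebook:

```python
# Value sets observed in the sample coding output below; the real codebook
# may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above passes cleanly.
example = {"responsibility": "developer", "reasoning": "consequentialist",
           "policy": "regulate", "emotion": "fear"}
print(validate(example))  # an empty list means the record is well-formed
```

Running this check over each batch makes it easy to spot records where the model drifted from the expected label vocabulary.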
Raw LLM Response
```json
[
{"id":"ytc_Ugi0hj0S4tOJK3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghjyLhFY0N9d3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj2Jo_uYDf2v3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjWcRsFfwSE13gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggSkZsWg39NxXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugj0QLN4cIFMF3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggPezFG5S3VS3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj22OTCNxaAhHgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugg7RpJojOWA93gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
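The lookup-by-comment-ID behaviour amounts to parsing the model's JSON array and indexing it by `id`. A minimal sketch, assuming the response always parses as a flat list of objects (the string below is an abbreviated copy of the batch response shown above):

```python
import json

# Abbreviated copy of the raw batch response above; in practice this string
# would be the model's full output.
raw = '''[
 {"id": "ytc_UghjyLhFY0N9d3gCoAEC", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_Ugj2Jo_uYDf2v3gCoAEC", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index once, then O(1) lookup per ID

print(by_id["ytc_UghjyLhFY0N9d3gCoAEC"]["emotion"])  # -> fear
```

A real implementation would also want to handle malformed model output, e.g. wrapping `json.loads` in a `try`/`except` and logging responses that fail to parse.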