Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Also good to remember here in the west that 33% of the workds population has no …" (ytc_Ugyv_drIJ…)
- "Ai doesn't have to replace the artist, it just has to show them what they're rea…" (ytc_UgyX1uqUh…)
- "Yeah keep up learning AI in these robots you stupid idiots you're all about to l…" (ytc_UgwAjV90f…)
- "The only way ai art should be used is, the ai should be trained SOLELY off of ar…" (ytc_Ugxw_IhBT…)
- "India is the third largest producer of electricity in the world. They have nucl…" (rdc_o3292fm)
- "AI does already manipulate humans, it is manipulating their greed, otherwise the…" (ytc_UgwOK9Hzy…)
- "Yeah the fact is that automation will take some of our jobs but it might keep cr…" (rdc_glj7i3j)
- "You pass her off as a model and send her to Dubai for some damn party ... Good enough for the sheikhs…" (ytc_UgxCZK2Ih…)
Comment
This is so fundamentally wrong. Computer scientists anthropomorphizing inert electronics and abstract algorithms in a way that actual psychologists and neurologists would find sophomoric is such a scourge on our society. It is impossible for a Turing machine to progress beyond the level of data! Contemporary LLMs are no closer to understanding anything, to having the most basic grasp of information let alone genuine knowledge, because they are just very complicated Turing machines, i.e. they reduce to algorithms, which are inert abstractions. LLMs are vastly computationally expensive linear algebra convolutions with billions of layers trained on the entire internet, trillions upon trillions of pieces of information, all in order to mimic the expected response-- after *human workers* have not only gone through and labeled the training data but also corrected the model millions of times over in order to make the responses come out right. That is so much human labor! All of the vast amount of training data, collected without permission, done by human creators, and then millions of hours of human labor on top of that labeling that data and then adjusting the model by hand. And the thing that is supposed to be intelligent here is just an algorithm that happens to include quantitatively complex linear algebra, executed by the same old electronic hardware? This is the mechanical Turk scam all over again, only this time the actual human making the thing work is not a little guy spatially hidden inside the device but teams of sweatshop workers in Kenya and such, as well as all of the unwitting human creators on the internet, displaced in time from the product for the end-user. Of course the model does not possess natural language understanding! It does not possess any understanding. It's just returning the expected response, which given the vast amount of data it is pulling from and all of the human training it has been given, is not surprising or impressive.
Yes, all that these machines do is stick pre-existing pieces of text and images together. No that is not what human creators who are *influenced* by previous works do, at all-- human creators understand what they are doing, they know what a human face is, what an expression refers to in the world. Algorithms do not and by nature never can exist in the world, can never make contact with the world, and so cannot understand anything.
youtube · AI Responsibility · 2025-10-11T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
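For downstream processing, the coding result above can be sketched as a typed record. A minimal sketch: the allowed values listed here are only the ones observed in this dump, not necessarily the full codebook, and the comment ID in the example is a hypothetical placeholder.

```python
from dataclasses import dataclass

# Values observed in this dump; the actual codebook may define more.
RESPONSIBILITY = {"none", "developer", "company"}
REASONING = {"virtue", "unclear", "deontological", "consequentialist"}
POLICY = {"none", "industry_self", "regulate"}
EMOTION = {"fear", "approval", "outrage", "indifference", "mixed", "resignation"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject any value outside the observed codebook.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion!r}")

# The row from the table above ("ytc_example" is a placeholder ID):
row = CodingResult("ytc_example", "developer", "deontological", "none", "outrage")
```

Validating at construction time means a malformed model output fails loudly instead of silently entering the dataset.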
Raw LLM Response
```json
[
  {"id":"ytc_Ugz0RZmpsz-cM0XvhSl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxbImAhO9if0I9sLKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6njlPgqutOEdEmJ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwow2FEmFfRSSm58Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0wQMFhJ_sV9BGPgJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxcKeKhKrJMyqY3PkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyYtX10zNF0QF15xvN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzJ-PyHDxietFMcvdJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyuNKPKvWpCbklAQH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjUlX92iyt3c4690N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```