Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Of course stopping progress is unethical!!! (Conservatism usually is!)
As technology progresses, and machinery becomes more advanced, all physical labor for humans will cease to exist, of course, since after a certain point nothing a human can physically do for a living will be better than what robotics can do autonomously. This is fairly obvious. However, what some people fail to realize is that electronics not only will inevitably surpass human capabilities in a physical sense, but also in a _mental_ sense. Like it or not; technology will surpass the human brain as well as the human body, and, at the rate of current progress in computing, it will do so within the next century or two. When that day comes, for once and _truly_ for all, the nail in the coffin of human labor will essentially have been set, at last.
Think of it this way: If a team of reasonably intelligent humans are able to create so-called "artificial" intelligence (though as real and as tangible as ours is), then they can recreate it; if they can recreate it, then they can mass-produce it; and if they can have it produced in surplus, then what is to stop a team of electronically engineered artificial scientists from being able to create an intelligence even _more_ advanced than theirs? The conclusion that one must draw from this is that artificial intelligence has the potential to develop at an *exponential rate*, and humanity might not be quite ready to deal with it when the time arrives. Once our technology surpasses our messy biological systems in every way, there will be no reason for humans to do any sort of labor whatsoever. It is the unavoidable truth. And we can only imagine the implications that this technology will someday have. (Why have humans do a job for a living—for a salary—when you can have an intelligent factory build a hyperintelligent computational network that does whatever mental tasks you need ten times better and a billion times more quickly???)
To be frank, the economy as we know it won't exist for too much longer. And that is a positive thing, not something to be feared. You may fear change—most people do—but at some point we as a species will have to be able to let go of our traditional means of sustaining society, to let them fade away for the better. Society will simply no longer have any need for such things as a human workforce; such things will be looked back upon as merely a part of history—as aspects of society that were once necessary, back in the developmental stage of humanity's younger past, but for which humanity no longer has any need. We, as a species, will have finally grown up.
youtube · 2014-10-23T02:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugh5gFy9d-aImngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UghJqPLW6Vw7XngCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UghU5_fkiifjLXgCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgjNDSMMJw40vHgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UggIJNl8KWoVkXgCoAEC","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgjXoPJGX56VAHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugj2Xvf20e4n83gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugjj4tsAFi-2U3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ughk8KTVoPwfoHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugjl0ZmjjMygnHgCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"resignation"}
]
```
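The raw response is a JSON array with one record per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such output before it enters a dataset might look like the following. The allowed values per dimension are inferred only from the examples on this page; the full codebook presumably defines the authoritative category lists, and `validate_codes` is a hypothetical helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the example
# records on this page (assumption -- the real codebook may differ).
SCHEMA = {
    "responsibility": {"none", "user", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"none", "unclear", "industry_self"},
    "emotion": {"approval", "outrage", "fear", "mixed", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw model response and check every record against SCHEMA.

    Raises ValueError on any malformed record, so an off-schema code
    (a common LLM failure mode) never silently enters the dataset.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

# Example: one well-formed record (hypothetical id).
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
codes = validate_codes(raw)
print(codes[0]["emotion"])  # -> approval
```

Validating eagerly like this makes it easy to route failed batches back for re-coding rather than discovering off-schema values at analysis time.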