Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by comment ID.
Comment
Humans cannot comprehend exponential growth by looking at the slope they are on. Our brains are not capable of anticipating/extrapolating exponential growth. Currently AI is no longer a statistical parrot. Today, AI is capable of extracting information from data all by itself - just like humans do with their senses. Today, AI is becoming capable of logical reasoning. In my opinion, all of this will make programmers replaceable within 5-15 years from now.

I think mankind is not prepared for what is coming. Our civilization works on the principles of Evolution 1.0. We need to urgently evolve evolution. Away from chain reactions, towards orchestrated results. We need to move away from cooperation in the service of global competition towards competition in the service of global cooperation. If we fail to make this transition from natural selection to human selection, we will bring an end to mankind through error or terror of destructive technologies we are no longer capable of containing, because chain reactions will dictate what will happen.

Think about this: When robotics and AI advance further and further and excel in EVERYTHING a human can do, how far away is a world without most work? Who is entitled to what in such a world? Capitalism will break down, as the economic micro chain reactions will break down. No company will refrain from cost saving activities by using AI, because they fear such actions will impact their revenue negatively. That would require a macro economic view. For companies, their customers get the money to buy their products and services from a magical source.

With what will our economy be replaced? How long to we need to prepare such a transition? How much time do we have before such a transition becomes necessary? I think the time to act is now. Mankind should seriously brainstorm collectively about our future as humans. The trouble is, that evolution 1.0 dictates that global leaders need to defend their position.

Probably only a global war will be able to set our mindset correctly. And the question is: Is there still something to share after the next global war?
Platform: youtube · Topic: AI Jobs · Posted: 2024-03-20T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxevWZa9s2mPchvZK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxBsjacVyK6epzKEGJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwJIkVUkF5yAb4z1H94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxz3_MvyVI3SOYGZQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwiHY0GUv76Zjq__JR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzXjW34vELAZGAQaa54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyU-hpOyaB1ggvy_Rh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxvYNmGjEJkojXjrFx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzk_nT0Ot45il3xdFd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwERH_E4CPDku9bWCN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
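The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming only the JSON shape shown above (the `lookup_by_id` helper and the truncated `raw_response` sample are illustrative, not part of the tool):

```python
import json

# A shortened copy of the raw model output above: a JSON array in which
# each object holds the comment ID plus the four coding dimensions.
raw_response = """[
  {"id": "ytc_UgxevWZa9s2mPchvZK14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxBsjacVyK6epzKEGJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]"""

def lookup_by_id(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding for one comment ID,
    or None if the model did not code that comment."""
    codes = {row["id"]: row for row in json.loads(raw)}
    return codes.get(comment_id)

coding = lookup_by_id(raw_response, "ytc_UgxBsjacVyK6epzKEGJ4AaABAg")
print(coding["emotion"])  # fear
```

Building the `{id: row}` dictionary once makes repeated lookups cheap, and `dict.get` returns `None` for IDs the model skipped or garbled, which is worth checking before rendering a result table.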