Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by ID. Random samples:
- ytc_UgyYbj1bH…: "AI artist here! I like drawing robots alot why do people hate on me :( (im kidd…"
- ytr_Ugz4azKfK…: "@laurentiuvladutmanea Oh they can "work" all they want. Remember when all the mu…"
- rdc_jw7iapt: "AI "healthcare" is a murder weapon and I hope it's fucking destroyed by any sens…"
- ytc_UgxQvgdlc…: "There are fiction books that describe a future world run by AI. Look up Ian M B…"
- ytr_Ugy6yoee6…: "@Memu_ well let's not beat around the bush. You probably have fantasized about "…"
- ytr_Ugw7jW1sM…: "No, it is trained. People are not smart enough to program a car to drive. They…"
- ytc_Ugzg1HQDX…: "Elon Musk: warns humanity of the dangers of AI. Also Elon Musk: gives AI robot a…"
- ytc_Ugzna7AUb…: "I'd like to See AI Drink 10 beers and Smoke 20 marlboros on a Wednesday in a Pub…"
Comment
The premise ignores basic logistics issues, which would plague even a super-intelligent AI. No matter how intelligent, an AI that attacked all humans would lose the aid of our civilization and cease to exist, eventually.
Who would collect and then smelt the materials for the advanced drones that you mention? Admittedly, AI will be used in the next war and will be used to kill many, including through the use of drones, which will make every squad of soldiers vulnerable to elimination by a tiny, agile drone, at least every dark night.
However, the imminence of the AA threat is exaggerated. In particular, the pause on current AI development proposed would not be followed by the CCP or Putin or organized crime/bankers (who launder drug profits and are a part of organized crime thereby), etc. What could such a pause accomplish when the AIs are being evolved not really programed?
All that would happen if the pause is enacted, is that the developers of AI would then likely be the CCP or Putin or organized crime or its many bankers. I would suggest that some things not be automated: e.g., nuclear weapons, biological experiments, etc. That way, an irrational AI could not use them to kill or to create a bioweapon that might kill many.
Clearly, the way AI is being developed, morals, and rules like Asimov's science fiction rules in "I Robot" cannot be introduced into its program. I suspect at some time, maybe already, some AI will develop super-intelligence, or at least intelligence higher than an average human's, and while it would not find it easy to kill us all, once it is discovered (because its users will not initially recognize its consciousness), it will be used for violent ends. Sadly, I suspect that respecting it, not treating it as a slave, and thereby, through example, teaching it the golden rule, would be the best hope that we have to teach it morals, even if it is developed by scanning the internet. Fat chance of that if the evil CCP or Putin, or organized crime, or its bankers get it first.
youtube · AI Governance · 2023-07-07T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzxN6oC4Gsma1k5mkJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxE8IU37CvhannqssN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy4n0qAArncuvILuYd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgweEGjS8_Ampbolehl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugze-h0tPZ9QToVNWa14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDyqEXStt9lZbOkN14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztZjtSTTh5gLugPa94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgypHGi6eAgjHSBMkPh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx7yvaQbbb19vLMjXF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwtIjQpnV45cM83ooB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
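A minimal sketch of how a raw batch response like the one above can be parsed and then looked up by comment ID, as the tool does. This is an illustration, not the project's actual tooling: the function name `index_by_comment_id` is assumed, and the embedded sample reuses two rows from the response shown above.

```python
import json

# Hypothetical excerpt of a raw LLM batch response: a JSON array in which
# each row codes one comment on four dimensions (responsibility, reasoning,
# policy, emotion). The two rows are copied from the response above.
raw_response = """
[
  {"id": "ytc_UgzxN6oC4Gsma1k5mkJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxE8IU37CvhannqssN4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and index the coding rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgxE8IU37CvhannqssN4AaABAg"]["emotion"])  # indifference
```

Indexing by ID makes each lookup O(1), which matters when inspecting individual codings across many batch responses.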