Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgyzT_NQe…`: "Ofcoarse it's a dead end; Mainly because LLM's, or in the more popular vernacula…"
- `ytc_UgxGmBZYh…`: "Here's a content list of the course: Introduction (0:00:00) Search (0:0…"
- `ytc_UgwggT_-R…`: "People are stupid. Because they declare genuine work as LLM generated, harming a…"
- `ytr_UgyAj2SIj…`: "Speaking to Dumb Fallacy of Super Intelligence. Post Truth, a Reality that is Fu…"
- `ytr_UgzHyc728…`: "You misunderstand. As an artist all my life, we are inspired by other artists ye…"
- `ytc_UgxBZYs99…`: "AI reminds me of the discussion we had so many decades ago about cloning. Remem…"
- `ytc_UgygjIFf_…`: "Bit of a misrepresentation in the video. The Federal bill was going to ban STATE…"
- `ytc_Ugyt54qCg…`: "One of the problems with thing I think companies are forgetting is that if AI be…"
Comment
wow... I went into this video expecting actual real dangers AI is posing in society but all this was was talking about problems of data bias and symptoms of stereotypes without explaining the root cause of the symtpoms while trying to sell her products/services.... And energy crisis? there are billions of people driving to work everyday to do menial jobs that could be replaced by automation. Does that make their work problematic? For a person being paid as a researcher delving into societal problems of AI, this presentation was very lacklustre and borderline a deceptive commercial.
How about looking into how AI is already creating false pornographic images out of people's sns pictures where scammers are threatening people with them, or how the trust of the media is being actively broken down with misinformation and improvements of ai generated video/audio??? or how ignorant/lazy people misplacing trust on AI systems that can generate fake facts can cause people to be misinformed or make mistakes in their industry (i.e. that idiotic lawyer quoting fake court cases generated by OpenAI)? or how children are now not engaging in intellectual research but becoming more and more reliant on automated systems to do their work for them? or how giant tech companies are now closing off open source internet (and changing the reason for existence of the internet) and gatekeeping data for themselves so they could monetize AI services harder? or how society is not ready for automation at scale where it can replace millions of jobs (look to ai systems already replacing crop management)? or how AI driven drone swarm networks basically puts destructive weapons, that has potential to kill hundreds of people very easily, in the hands of anyone? or how smaller nations can now go against the US military might using drones/AI changing the geopolitical landscape where the US can no longer be called the sheriff of the world)?
There are very legitimate concerns with improvements in AI and some artists' ability to get paid becuase their monetary source of income from their industry is being disrupted by innovation is NOT a societal shaking problem to discuss in a forum geared towards a worldwide audience..... Not to mention the VERY ACT of these artists creating art is pretty much the same process of ai generated artwork. If AI could be sued for the art that it produces, then literally every single artist on earth has a right to be sued, and only the original artists (who are all dead) that created the different types of styles/technical process of art would be safe.
Platform: youtube | Topic: AI Responsibility | Timestamp: 2023-11-09T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzrwXdYpo3Uqoamt2l4AaABAg","responsibility":"society","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzEn5L_-R1wxvVw7-V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzTdXDGNGrZzSmfD0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwCHzHKiYDUzVIA8z14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwY16aXC8SUFT9zzbd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw6owrZ55fPHnd7dtJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugz4YfgCE4I3EUjQHL14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxHGVJvGzMZg_MWbyZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwVJFWAMHvXwfW_CYV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgysfQ7k_ZckAFSixjZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"mixed"}
]
```
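The raw response is a JSON array with one record per coded comment, keyed by comment ID. A minimal sketch of how such a batch might be parsed and validated before it feeds the coding table; the `SCHEMA` enumerations below are inferred from the samples on this page, not a documented specification, and `validate_batch` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# reconstructed from the values visible in the raw responses above.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "society",
                       "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must carry the ID of the comment it codes.
        if not isinstance(rec.get("id"), str):
            continue
        # Drop records whose dimension values fall outside the schema,
        # e.g. when the model invents an unlisted label.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Records that fail validation can then be queued for re-coding rather than silently written to the results table.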