Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Worst example tbh people that type out stuff like this are the ones fueling ai a…
ytr_Ugy6ja2no…
it's not how this works. Neural networks take training labelled data in, and lea…
rdc_dsmjc9x
As someone who has a big imagination I can NOT imagine “ai art” being actual art…
ytc_Ugxe-yWGW…
you keep on drawing. do not be discouraged, because if you quit then AI has won…
ytr_UgyjTUcVZ…
Lmao their Lord and leader trump just signed an order basically banning ai regul…
ytc_Ugyb-RguF…
Why? Why do artits hate AI so much, be like us, programmers, just use it yoursel…
ytc_UgxRsV0v3…
The issues with the system people are talking about are not any issues with the …
ytc_UgzDdU8V9…
People who say it's an AI there just mad because they are bad at drawing and nev…
ytc_UgxgyeOPn…
Comment
Man.. Year 2017 - when I started writing my Bachelor's Thesis aimed on the Safety for Humans from AIs, that was the first time I got to know Roman Yampolskiy and reading his papers.
I've done my work, defended thesis and got Bachelor's degree. There I found out about plenty of moments where AIs, robots, self-driving cars, chat-bots have either k1ll3d or harmed people - physically, mentally. And we indeed need to agree on Safety rules and recommendations even back then. And I already understood that we have a BIG PROBLEM with the Safety for us and that people are running in front of the train (that is moving towards them) and they ignore the danger when the train will hit them and leave them d34d.
Then - 2020-21 Masters Thesis. I'm getting into AGIs or ANGIs, understand the concepts, understand the next level of threat for us as a humans, I see that almost nothing is done for the good of the human-kind, only good is for the systems and companies willing to make more money and profit out of it. Also plenty of information there - where we as a humans already lose our positions in terms of expertise, knowledge and experience/wisdom overall and decision-making to AI systems. I was already shocked. Long-story - short - my Masters Thesis wasn't accepted by the Uni as I dug too deep for them. They said that what I made is a plagiat - though why and how so? Especially, when I've defended my Bachelors and know that Masters is a step ahead of that and why would I plagiate it? But yeah - that's when I understood that world is against that also.
And guess what? That was exactly the time when AI Boom has happened - but I already knew that it's going to happen and "I told you". Though yeah.
And just understand what I knew already at that time, and just imagine Roman Yampolskiy in 2011 (almost 10 years before me getting deeper in all that stuff about AI/AGIs and so on. That's just MENTAL.)
Thank you very much Roman for your work and contribution to the scientific world, and that you came to Lex podcast to share what's going on in AI, ML field with the people that watch it! Let's hope for the best, but prepare for the worst. I feel like we are closer to the worst with the level of ignorance we have for that topic and organizations willing to get richer and get more money with AIs, ANSIs and future AGIs...
youtube
2024-06-17T22:3…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx5m1ixYD5cnMzZHmd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwR311Inj8OASVwxDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2ZYAH8RmTN0SRzm54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxz0g5VdGfsiV_j1T54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkmhBicUM3k9wGxZ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy6xDRR8E_5-iDGhCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzI1pW-qLt2QkY-vnh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzmCb1yMgZ2oH34VNx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx8cb6_yHjoojyOhM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw91Ue4DZWTegv8_gN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
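
The raw response above is a JSON array of per-comment codes, one object per comment with an `id` plus the four coding dimensions. A minimal validation sketch for such a batch — note that the allowed values per dimension below are inferred from the coded samples shown on this page, not from an actual codebook, so the real vocabulary may be larger:

```python
import json

# Allowed values per dimension, inferred from the displayed results
# (assumption: the real codebook may define additional categories).
SCHEMA = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose codes
    are in-vocabulary for every dimension."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if "id" in entry and all(
            entry.get(dim) in allowed for dim, allowed in SCHEMA.items()
        ):
            valid.append(entry)
    return valid
```

A batch that passes cleanly keeps all its entries; an entry with an out-of-vocabulary code (or a missing `id`) is dropped, which is one simple way to guard the coding table against malformed model output before it is stored.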