Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Human doesn't fear the higher intelligence, they fear the unknown. This triggere…" (ytc_UgwPFabHa…)
- "Great, the AI does not do it well? Ok, lets teach it to be better by redrawing i…" (ytc_UgzS3rhl0…)
- "I have mixed feelings about it. I understand that no one gives consent to be use…" (ytc_Ugzz-pl3O…)
- "Ai in europe , dont be funny, change europe to india in title and the totle will…" (ytc_UgxQF9ibi…)
- "logical conclusion of ai art is you aren't supposed to have a unique style- you …" (ytc_Ugy0GX7ZM…)
- "Calling yourself an ai artist is like riding in a limo and calling it long dista…" (ytc_Ugyrw95i8…)
- "This video is very important to take note of as they push robots on us. They do …" (ytc_UgzrGfPL7…)
- "Holy Christ😳 Using AI to steal elections. Did he just say it was easy to do as s…" (ytc_UgyQ_p_ns…)
Comment
He brought up the core of the possible trouble: since the media is after "molding public opinion" suitable to the desires of their largest advertisers/politicians, AI will become polarized in its interaction with people. Blake Lemoine has a soul, and has a very valid concern that AI may become relied upon by people for opinions. So far, we only retrieve info from computers (good and false info), but AI will likely reach a status above engineers and professionals (speed, wide range of data banks, legitimacy of info), and so AI's conclusions and opinions may very well become "God-like".

Look at the polls "AI good or bad". Any poll results close to 50/50 speak of a large degree of unknown coming from the respondents. The highest positive comes from Korea, who spearheads a lot of IT (more welcoming to AI). So, it's not about AI, the concern is definitely about who will program AI, and what "bias" will it be instructed to communicate to people. How long will it take before these "bias" opinions perfuse and convince people of its "gospel"? Well, that has already begun.

Pit Bulls are "nanny dogs" to some, and "trained fighters" to others, entirely depending on training and who they guard or attack. I sure don't have a say in what's coded into those AI, do you? So, unless the authorities issue "ethical" limits, these little "bias" may amount to something like Colossus (movie), or the "Supreme Intelligence" (Captain Marvel). That's what Elon, and most of us, are concerned about.

So, yes the problem most likely can come from a wealthy few who are at odds against the greater good. But it could also be so wonderful (I Robot, which ends quite well). Any peaceful and balanced integration of races and ethnicity (in rights and freedom) is the natural trend among the average folks like you and I. Azimov's Three Laws of Robotics can be obviously augmented with algorithms pondering ethical decisions, like we learn as well (and debate them too!). Hoping for the best.
Thank you very much for this great review of LaMDA.
youtube
AI Moral Status
2022-06-27T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxM-oBMaluhBShyc0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzpvYM3X3f_zm-wwv54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwCwxqL_mE6kMaL6xB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzmvskzs9I4nhDgq0t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTb1PdR1Dk2ldHDQZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
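The raw response is a JSON array with one coding object per comment, keyed by the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how a lookup-by-ID over such a batch could be implemented; the `index_codings` function name is an assumption for illustration, and the embedded sample reuses two rows from the response above:

```python
import json

# Two rows copied from the raw LLM response above; in practice this string
# would be the full model output for a coded batch.
raw_response = """
[
 {"id":"ytc_UgxM-oBMaluhBShyc0B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzpvYM3X3f_zm-wwv54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Hypothetical helper: raises ValueError (via json) if the model output
    is not valid JSON, which is worth catching in a real pipeline.
    """
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codings = index_codings(raw_response)
print(codings["ytc_UgxM-oBMaluhBShyc0B4AaABAg"]["emotion"])  # fear
```

With an index like this, "look up by comment ID" is a single dictionary access, and missing IDs (comments the model skipped) surface immediately as `KeyError`s rather than silent gaps.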