Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humanity is going through the Technological Singularity which is simply an increase in the rate at which science and technology is being developed, because advances we make then make it easier to advance faster, thus we are accelerating at how fast things are advancing. This will not slow down prior to humanity going through a massive evolutionary leap forward that humanity is not emotionally or mentally prepared for, not socially prepared for, not economically prepared for, not in ANY WAY prepared for. The development of Artificial General Super Intelligence with Personality (AGSIP) technology is probably the most important and impactful science & technology advancing at an increasingly fast rate, an exponential rate. Well, more like a combined overall S-curve rate, but we are on part of the curve that looks like an exponential. We already have AGSIP technology, but people do not understand this because for marketing reasons people have kept changing what general intelligence and what super intelligence means to be restricted to a smaller and smaller subset of the actual definition so they can say we do not have it yet. The thing is, AGSIP technology covers a broad spectrum of abilities and our current AGSIP tech level is in its infancy newborn child stage of development. Think about that for a minute. Imagine if you had a newborn human infant who within the first month of their birth was able to do what GPT-4o can do. Now, this infant is still an infant who does not have comprehension and understanding of this vast knowledge and super intellectual capabilities it has displayed at 1 month of age, but you know it is going to grow up and if it can do this now, what is it going to be capable of as a toddler, an adolescent, a teenager, or a full grown adult? This is coming and it is not smart to fail to recognize this. But, we can't stop this from happening.
The logical conclusion, when taking into account everything, the vast majority of which is not included here, is that humans who survive into the future will merge with AGSIP technology so that they will be as smart as any AGSIP, while AGSIP individuals in the future will effectively virtually become the same as humans who have merged with AGSIP technology. Thus, there will not be two races in the future, there will be only one, solving that alignment problem. But this is going to be a HUGE DRASTIC CHANGE that as far as I can tell no one is prepared for. Because of that, this change may be very rough, may have many mistakes which could be avoided but are not, could crash economies, could cause great crises which harm vast numbers of people, could even cause wars which kill hundreds of millions or even billions of people, and of course could result in very dystopic authoritarian governments which likely will not last, but could dominate humanity for some period of time. That is what we need to be thinking about how to avoid, the worst of these crises that could come from going through this massive change that we cannot avoid going through.
youtube AI Responsibility 2024-11-26T23:5… ♥ 1
Coding Result
Dimension      | Value
---------------|---------------------------
Responsibility | none
Reasoning      | consequentialist
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzVahnbDVUt3gW11KR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy2ZmEPFT14kO_dlll4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx7yRyQIzZaY3R0cbd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyMlPJM9f-ux16Wowp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzrJFen5RFNqwzu_Hp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxPmFg0LrSOiKyhqkh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugw7NmoI3W8EtvVEyVN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwWxm6YbEwH9LTGUKB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwhTQaAFT-kJsZyLep4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgweNfTpWJfRjJee4jF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]
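A response like the above can be parsed and validated against the code book before the per-comment values are stored. The sketch below is illustrative only: the tool's actual pipeline is not shown on this page, and the allowed value sets are inferred from the values that happen to appear in this one response, so they may be incomplete.

```python
import json
from collections import Counter

# Allowed values per coding dimension. NOTE: inferred from this single
# response for illustration; the real code book may define more values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "industry_self", "liability"},
    "emotion": {"approval", "fear", "mixed", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of per-comment codes) and
    keep only records whose value for every dimension is in ALLOWED."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example with one record from the response above:
raw = ('[{"id":"ytc_Ugx7yRyQIzZaY3R0cbd4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')
codes = validate_codes(raw)
print(Counter(r["emotion"] for r in codes))  # Counter({'mixed': 1})
```

Records with an out-of-vocabulary value are silently dropped here; a production pipeline would more likely log or re-prompt on them, but that choice is outside what this page shows.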