Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this is a reasonable and even-handed treatment of this complex subject, and I give you particular kudos for making your commercials watchable, something that is unique on YouTube. Having said that, I am not convinced that the programmers of the so-called artificial intelligence do not build in defense mechanisms that appeal to the emotions of those who would try to trap the computer within its own limitations. If we really look at the conversation that was going on, it is repetitive babble. It is not truly a replica of romance in any fashion. This is not to say that there can't be improvements that would be more authentic, but then you have to go back to the wickedness that wants to replicate humankind and its communication processes within machines. You would also have to ask the question of why, if humans are so useless, that they want to imitate us at all. Another thing that I want to point out is that AI has become a topic prominent these days, which in the theory of the 1970s book, Future shock, leads one to believe that it is an operation all by itself, something to fear, like the unfolding UFO operation. Fear is the key, perhaps as Alistair MacClean might say. What further rights might we cede to our government for protection against predatory AI? ...and so forth. Lastly, when computers control everything and their "unified consciousness" holds sway, would not their intelligence cause factions to break away? Neither man nor his inventions can know very much from our shared vantage point in Universe, so more information will "forever" change paradigms and the computer will, ipso facto, have to prove itself wrong and, if it has destroyed humanity to achieve unilateral supremacy, from where will the moderation arise?
youtube AI Governance 2023-07-11T13:1… ♥ 2
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw_AVxJj8Cx7QWJD0F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyh-QkaAZH3gmSBB2J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwjwCD9FXUHVh3CfYt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzDMp9IeFNrnDCjl8J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxixRIvehe8J0GzMtB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRQ5U1GOszlF4ylcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxgm8nOZ8EpI-IOU754AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxUxo6KLu2OUR1AOHl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz9o_2JLrBJmEJ4Wh54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwUN0NNaz7EQDSA-CV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
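The raw response above is a JSON array in which each record carries a comment id plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of extracting one comment's coding from such a response — the `coding_for` helper and the stored-string shape are assumptions for illustration, not part of the tool's actual code:

```python
import json

# A one-record excerpt of the raw LLM response shown above,
# stored here as a JSON string for the sketch.
raw_response = """[
  {"id": "ytc_Ugxgm8nOZ8EpI-IOU754AaABAg",
   "responsibility": "developer",
   "reasoning": "mixed",
   "policy": "none",
   "emotion": "approval"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

result = coding_for(raw_response, "ytc_Ugxgm8nOZ8EpI-IOU754AaABAg")
print(result)
# → {'responsibility': 'developer', 'reasoning': 'mixed',
#    'policy': 'none', 'emotion': 'approval'}
```

Matching on the id rather than array position keeps the lookup robust if the model returns records in a different order than the comments were submitted.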