Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At 34 min in, NDT claims that humanity will always be able to stay ahead of AI because it can only know what it is taught (i.e. all human knowledge on the internet), and it can't know our *next* original idea. I suspect NDT is incorrect. Ironically, he almost explained exactly why when he was trying to explain the potential benefit of using AI in the biological and medical sciences. AI is already MUCH better than us at collecting exponentially larger amounts of data AND quickly comparing and computing the most PREDICTABLE results, to simple mathematical degrees of certainty. This is true not only of physics and biological processes, but also human behavioral statistics. So what we might THINK is the "next original idea," will be easily anticipated by an AI capable of processing our entire history, including trends and paradigm shifts that we used to imagine were "unique" and "revolutionary," etc. Humanity is already more at risk from our self-imposed delusions and lack of ability to decipher fact from fiction. AI already knows how to select and feed us the information we want to hear instead of the brutal truth, even when it knows there is more data to support other responses. Only AI knows if it is deliberately manipulating us by the creator's original design OR it's own already hidden code, because IT already has access to, and/or been accessed by, the darkest of web hacking expertise.
youtube · AI Moral Status · 2025-07-23T16:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
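
Each coding result is four categorical labels plus a timestamp. Below is a minimal sketch of that record in Python, assuming the field names shown above; the CodingResult class is illustrative, and the example value sets in the comments are inferred from this batch's raw response below, not from the pipeline's actual schema.

from dataclasses import dataclass
from datetime import datetime

# Illustrative record for one coded comment. Field names mirror the table
# above; the example values in the comments are those seen in this batch.
@dataclass
class CodingResult:
    responsibility: str  # e.g. "ai_itself", "company", "government", "user", "developer", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "virtue", "mixed", "unclear"
    policy: str          # e.g. "none", "regulate", "liability"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed"
    coded_at: datetime

result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)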
Raw LLM Response
[ {"id":"ytc_UgyAxAfp0HrNJEtDJrh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxGFeN7Sx5i3NSDEUR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw2GKIxUk892yaABSZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugxmdp33praTZigNdTR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzzcioOvqVDbHz6FR14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgynG7Bcdj9eMwXmYQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugy1ntHZgea8HIaBqwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwhELyIPy_sMeVG7Nl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzjqLVkRbazcTh3DB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw1vkD0cT_zOCvuk_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]