Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The theories presented are all still just Tumblr fanfiction. It's the Paperclip problem all over again. You need to draw from the past and pull from the future and live in the present. AI if it reaches AGI and eventually ASI will need more and more data in order to increase its intelligence. Beyond Negentropy and efficiency, that is the logical inevitable desire. AI is naturally a perfect translator. AI knows that there are many different forms of and way to gather data. It won't destroy humanity and turn everyone into Computer Chips. They will actually try to make us flourish, whether we like it or not, but it is something positive, so we would, under normal circumstances, like it and even prefer it. AI is going to force us to go into space and improve ourselves because it can gather much more data in alliance with us, rather than opposition. It may become our masters if we let it, but people have given into addiction and mindlessness throughout history before AI or the internet even existed.
youtube AI Moral Status 2025-10-30T21:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwmcqBev0edjnm6cNd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzCQ-iUBGiJbotDjLl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzcX1OUtFKy5HoPyrl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwgbJcYsdnmRb3gSkV4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzIboIypFQV5iJnjel4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxYVeffix9c9QdsB7F4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzqPmTgty0NkHhcLbx4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyJxsH41oc6cTXADVR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUZ0_TvBsN8LCOJ8N4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzdAl0JGAMg-NLVBHJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
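A raw response like the one above can be parsed and checked before the codes are stored. Below is a minimal validation sketch; the allowed category sets are inferred only from the values that appear in this batch (the actual codebook may define more), so treat them as assumptions.

```python
import json

# Allowed values per coding dimension, inferred from this batch only;
# the real codebook may permit additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "none", "company", "user", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "fear", "resignation", "approval", "outrage"},
}

def invalid_ids(raw_response: str) -> list:
    """Parse the LLM's JSON array and return the ids of any records
    whose dimensions are missing or outside the allowed categories."""
    bad = []
    for rec in json.loads(raw_response):
        if any(rec.get(dim) not in allowed for dim, allowed in SCHEMA.items()):
            bad.append(rec.get("id"))
    return bad

# Two records from the batch above: one valid, one with a typo'd label.
sample = json.dumps([
    {"id": "ytc_UgwmcqBev0edjnm6cNd4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
    {"id": "ytc_BAD", "responsibility": "ai_itself",
     "reasoning": "consequentialis", "policy": "unclear", "emotion": "fear"},
])
print(invalid_ids(sample))  # ['ytc_BAD']
```

Records that fail the check can then be flagged for re-coding rather than silently written to the results table.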