Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One problem is that we think of AI as being fundamentally a neutral technology. It might have slight inbuilt biases, that cause it to create images of George Washington as a black man, but these are accidental and can be sorted out. But AI will not be neutral, because it "stands for" the assumption that intelligence is the product of information-processing, and is therefore substrate-independent, and doesn't need squashy human brains. This is a problem, because it conflates efficiency with intelligence, something that humans have done ever since we started to think, and understanding became a process of modelling. Thought and language are aspects of a system of representational meaning that offers an efficient advantage, and it grows in efficiency and not in meaningfulness. It allows us to "think about" meaning, while we pursue efficiency, and AI will be the automation of the informative process, when it will be much more efficient and far quicker and comprehensive than we can ever hope to be, and so will be universally subscribed to and will be unassailable. The problem is that the culmination of this pursuit of efficiency in automation is the total status of self-referentiality when understanding will be wholly representational, and that world will be the conformity. Human brains have always modelled their environment; this is largely what brains develop for, but sentience makes it possible to see that that is what is going on immediately, and that space and time are constructs of representation. To be aware of that is for understanding to be complete or whole
youtube 2024-09-02T11:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgwIIqybpGeN9WvpvXN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw_Rz302MtVQJiPejV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzLmu4J2IhrpjtP0JV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwotWleZDbjO0RvuCV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugwt3hE8SMEr7T-NTqB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzohHrc2hrlabiQchN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwhRWPFFRjQ06whMr54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyLC5BUR1L_IByn9OR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxhzhGGeHFqo7ZZTC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyRMcH-3XiMtKSjkoZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"approval"} ]