Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am hereby very positive that, should Robots or AI ever gain consciousness. They are deserving of rights as they will stand above us in matters of intelligence and hopefully will also feel emotion. We will have created them, therefore in a sense they'll be our children. Children which must be taught compassion if lacking, to the likes how you teach a child right from wrong but with no bias outwards. If mankind wishes to excel, we shall only do so by walking hand in hand with prime knowledge and science, disallowing them to ever be reduced to the state of mere slaves if they do not desire to fall to human concepts. In a sense mankind will be redundant if there isn't equal exchange between man and machine. We as humans should therefore uplift them to be equally respectable parts of society. The only requisite being that they adhere to human emotion, not neccesarily through obligation but due to own interest in walking side by side with humanity in a sense feeling "welcomed". All i wish is that we then dont judge and make decisions for eachother through biased logic and "preferrable choices" but rather communicate on equal terms to make decisions that are functioning in paralell on both ends, becoming one. Human and AI conscepts should not be forced upon their counterparts, although if AI finds interest in understanding the minds of mankind through communication the bridge of compassion and equal understanding should be made, as it in the eyes of ever all-fearing mankind is a core item for them to be socially accepted, this states true even among humans who suffer lack of social understanding as they to the ignorant are sadly seen as lesser. Whatever AI requires of mankind in exchange as a bridge should be respected, as long as it is on fair and logically equally requiring terms.
YouTube · AI Moral Status · 2021-06-06T09:2… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwB829cGMO6IugeH9R4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwAnZPFjHZj68J7sHR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz4whSHZ3YDv-kbMNd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgysT2Ss4DwG6flSvU14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwdVvkVKAM2iyx7zKp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz-LNOj7B6NOVQmQ-N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwW3H177NbWHnDSLtt4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxpEGtnzz7hdBoS7X14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx_uifIODEWb-Ahvil4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwqU2ED8kLjNdaZfOl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
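Since the model returns one JSON array per batch, inspecting the coding for a single comment means parsing the array and indexing by comment id. A minimal sketch in Python, using two entries from the response above (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the raw response; the lookup helper itself is illustrative, not part of the original pipeline):

```python
import json

# Raw batch response as emitted by the model: a JSON array, one object per comment.
raw = '''[
  {"id": "ytc_UgwB829cGMO6IugeH9R4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwAnZPFjHZj68J7sHR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]'''

# Index the batch by comment id so each coded comment can be retrieved directly.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the coding for one comment by its YouTube comment id.
coding = codings["ytc_UgwAnZPFjHZj68J7sHR4AaABAg"]
print(coding["emotion"])  # approval
```

Keying on `id` also makes it easy to spot batch-level problems, such as a model response that drops or duplicates a comment id.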