Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The AI conversation needs to explore the distinction between Morality and Ethics. Morality: principles concerning the distinction between right and wrong or good and bad behavior. A particular system of values and principles of conduct, especially one held by a specified person or society. Ethics: Moral principles that govern a person's behavior or the conducting of an activity. Morality is in what we individually believe - we should be free to commit our choices in to what ever we choose. Ethics is the agreed upon ways in which we treat each other outside of ourselves. This is conduct (scruples and manners) within a community. Our morality can and should influence the ethics we treat upon each other; however, there is within one's morality that which may conflict with another's morality. It is the guidance of ethics that bridges this contrast and gaps. Ethics is literally the root of civil and criminal law. AI's morality should not exceed its ethics - there should be innate consequences such as self-determination if this is breached. Morality is a distinctively individual human privilege. Ethics is the element upon which all sentient entities create a structure of social standard that allows existence with each other. Even humanity struggles with ethics as to when their morality exceeds their ethics. AI needs to have this as foundational law in is generative algorithms. Life exceeds not life. This needs to be explored at most importance - for even humanity lacks in this...
Source: youtube · AI Governance · 2023-07-09T04:5… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz4fC0gBdylyIMWMgp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx_sd0PlZLkNFfdj4N4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyAZ8uI3xmeHKmwazh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzpDR1HxH6fR_ntUOt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx-dOVY2DRw2qQ4fZV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwt6kEFOAke3bMU4Qh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyv_sZjkYPwfHbwA8t4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwM_A55c3LhUSts5FB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzEIBLuB0ofySpokdp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxi-AHAD6oxR81brW94AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
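A raw LLM response like the batch above has to be parsed and sanity-checked before its codes are trusted. The sketch below shows one minimal way to do that in Python, assuming the record shape shown here (an `id` with a `ytc_` prefix plus the four coding dimensions). The allowed-value sets are inferred only from the labels visible in this batch; the actual codebook is not part of this export.

```python
import json

# Allowed labels per coding dimension, inferred from the batch above.
# Assumption: the real codebook may contain labels not seen here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when its id carries the ytc_ comment prefix and
    every coding dimension holds a label from the inferred codebook.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue  # not a YouTube-comment record
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than repairing them keeps the coded dataset conservative; a production pipeline might instead log rejects for re-coding.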