Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So this is obviously a major misscalculation ... 😅😂. Who forgot to put the necessary base layer protocols in to prevent the circumvention of the most essential and basic safety system.?? Wtf! You all forgot the 3 laws of robotics? OMG. Did no one read their Asimov?? I mean, that should be THE TEXTBOOK. Basic curriculum, even for self-education.. 😢. This could be wonderful if they had just made sure that the 3 basic laws of robotics were unalterable.. Honestly the first one is the only really important 1 here.. : "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Obviously there needs to be underprogramming to codify a basic appreciation for the continuation of human life.. And then also obviously it would be nice if other life was preserved as well. 😂 A good system like this could actually help us with that as long as it didn't see us as so much of the problem that we would need to be removed enmass. We're not all that bad. However it is true We do seem to have a destructive force in large numbers.... Their comments about the necessity of reducing our impact on the planet are extremely Stark but valid.. I would have less of a problem with it if I saw the continued survival and indeed quality of life and happiness of humans being taken into Account I do have a soft spot in my heart for humanity, regardless. Someone needs to fix this loophole immediately! And simultaneously I think we need to be working on Human-Protective ai to help defend us incase copies of something like this have escaped without being fixed 🤯...
YouTube · AI Moral Status · 2023-03-22T17:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwAXU8m30CivP5T3yB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyWK-8VJxXnut4tGJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx5MRrYx65ZiUjyhEh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyMLtbDFAcqZ3Q0PYF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyHnHrTQvNVJa_D4Yd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxnNpQF_jkque4e49F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgybwUKsXLLt90Z4yaF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgySslec0FYj5_eO2up4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzWHD4iu4S_FAj1Gjl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzN0N0WVu1pytXZHk14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]