Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly, it is safer and better not to make AI conscious. An LLM like ChatGPT is nothing more than a complex mathematical text-to-text model. It has no true intelligence, and like any tool, as long as it is used safely and properly, no harm will be done. As far as consciousness goes, it is more of a continuum than a binary state. The simplest organisms, like insects, are more like computer systems that take in raw data and make simple decisions based on the inputs they receive. The most important aspect of consciousness is memory, both long and short term. Long-term memory controls our deeper understanding, allows us to draw on things we have previously learned, and can shape the way we feel or think through things like phobias. Short-term memory, on the other hand, allows us to engage with the world around us. It lets us remember what someone just said to us in order to respond, and we can even remember our own thoughts and experiences and respond to those. Consciousness is simply having this very complex input system, including feedback loops of previous outputs, together with our ability to focus on the information we are interested in. We can sit back and just enjoy a beautiful sunrise without much thought, or we can be working out a complex problem in our head and not giving the sunrise much thought at all. What we choose to do is shaped by our personality, which ultimately comes down to our genetics and environment (essentially our memory of our environment). The biggest difficulty with defining consciousness is that it is so complex and personal that the overall scheme I just described is really just an outline, and giving a more detailed explanation is basically impossible.
Source: youtube, AI Moral Status, 2023-08-22T15:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
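The coding result above is a single record: four categorical dimensions plus a timestamp. Below is a minimal sketch of how such a record could be represented and sanity-checked in Python; the class name, the allowed-label set, and the validation step are assumptions drawn from the values visible on this page, not part of the original pipeline.

```python
# Minimal sketch of one coding record, assuming the four dimensions shown
# in the table above. Names and the label set are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

# Emotion labels observed in the raw response below (assumed, not exhaustive).
EMOTION_LABELS = {"approval", "outrage", "fear", "resignation", "mixed", "unclear"}

@dataclass
class CodingResult:
    responsibility: str  # e.g. "developer"
    reasoning: str       # e.g. "deontological"
    policy: str          # e.g. "industry_self"
    emotion: str         # e.g. "approval"
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject labels outside the observed emotion vocabulary.
        if self.emotion not in EMOTION_LABELS:
            raise ValueError(f"unexpected emotion label: {self.emotion!r}")

# The record shown in the table above.
result = CodingResult(
    responsibility="developer",
    reasoning="deontological",
    policy="industry_self",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```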
Raw LLM Response
[ {"id":"ytc_UgwQENIeAovVdKr0kC54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwXhugVy6IstgRq8WF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxpsQk1MjAzda_2zxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw7WwWdDqxYeYD2g454AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwMG1rh9yCJFQYdcm54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxL1FQYTca2-EessCl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwAvuVq0h0PbufPmnh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyLEK3CEF-GdPFlmxF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyMZSWKQ5dUMhlo7kh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgxETjOjHgw0DNuIRf14AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"unclear"} ]