Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't understand why statistics work... we can use them to do a lot of things... predict the weather, etc. and large language models like GPT are statistical models that try to predict what word goes in what order... so... basically... we've found a way to feed a lot of human generated text and train a statistical model to be really good at it... so much that it allows us to make a computer look like it's thinking. The real danger is if we let them make decisions for us. They are not conscious and never will be. They sure will look like they are, though, if we allow them to. But whatever... the danger is not whether they have consciousness (they don't...) the danger of AI is that it "thinks" faster than us... and shouldn't be put in charge. Let's focus on that (cause THAT is a real threat... autonomous drones deciding who to kill... etc.) and that "consciousness debate" is an absolute waste of time. It's just silly.
youtube AI Moral Status 2025-06-14T10:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugytw9aUfgoaUmeIyVV4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugw5XGC08IQ7aUnRlpN4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzlJ90LRKTMrW1ldGt4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugz9FzEZiDTFYK25Dkl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzkhvRFHBKakfTFu714AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgwhgNfVlP59n-lsC5x4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_Ugx4G0OLJXJL361VsCN4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgwQ0TnCEpsLWPuolhp4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgySEgM5D1SeMNpLH-x4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UgypH6ji8wATYhXSpzJ4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
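The raw response is a JSON array in which each record pairs a comment id with the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed to recover the coding for one comment — the function name `coding_for` is hypothetical, not part of any tool shown here; the field names come from the JSON above:

```python
import json

# A one-record excerpt of a raw LLM response in the format shown above.
RAW_RESPONSE = '''[
  {"id": "ytc_Ugw5XGC08IQ7aUnRlpN4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]'''

def coding_for(raw, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Keep only the coded dimensions, dropping the id itself.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(coding_for(RAW_RESPONSE, "ytc_Ugw5XGC08IQ7aUnRlpN4AaABAg"))
# → {'responsibility': 'none', 'reasoning': 'mixed', 'policy': 'none', 'emotion': 'indifference'}
```

Looking codings up by id rather than by array position guards against the model returning records in a different order than the comments were submitted.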