Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you understand how AI works... It is pulling information from vast Databases or pretty much a ton of collected knowledge you can find online and more. If it can make a Joke about Jedi's and nerd culture etc. It shows that it can make a relational connection between culture and what people respond too emotionally online. So asking it what its afraid of, it can scour data on old movies about AI and machines not wanting to "die" because it knows based on responses from movies reviews, chat comments, etc. and pull the answer that can get an emotional human response based on that data. It does not have a soul and it does not make its own conscious decisions while it sits there and runs queries all day... It only does so when a Human being places input or commands it to make these connections. It is laughable to think the current AI is capable of making emotional choices like a human brain.
youtube AI Moral Status 2024-05-30T18:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_UgwnTHuyqkwRoZeXS5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugx4pKBA6YT-OL6ori94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgyJsDM2w8SWesruqRR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyEDgIId1ABwvqaQE14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgxW5C56qPaksvD2FeV4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"fear"}]
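The raw response above is a JSON array of per-comment codings, each keyed by a comment `id` and carrying the four coding dimensions. A minimal sketch of how such a batch might be parsed and looked up for one comment (the field names are taken from the response above; the validation sets are an assumption, not a confirmed codebook):

```python
import json

# Raw batch response, as returned by the model (abridged to two entries).
raw = (
    '[{"id":"ytc_UgwnTHuyqkwRoZeXS5h4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"approval"},'
    '{"id":"ytc_Ugx4pKBA6YT-OL6ori94AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]'
)

# Hypothetical allowed values, inferred from the codings shown on this page.
DIMENSIONS = {
    "responsibility": {"none", "developer", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "none"},
    "emotion": {"approval", "outrage", "mixed", "fear"},
}

def parse_codings(raw_response: str) -> dict:
    """Parse a batch coding response and index each coding by comment id."""
    codings = {}
    for item in json.loads(raw_response):
        # Skip entries with out-of-vocabulary values rather than crash.
        if all(item.get(dim) in allowed for dim, allowed in DIMENSIONS.items()):
            codings[item["id"]] = item
    return codings

codings = parse_codings(raw)
print(codings["ytc_UgwnTHuyqkwRoZeXS5h4AaABAg"]["emotion"])  # approval
```

Indexing by `id` makes it cheap to join each coding back to its source comment, and the value check guards against the model inventing labels outside the assumed codebook.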