Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Pythia Brixham:
Well, the first step towards sentience (imo) would be the development of basic self-preservation protocols, which would be pretty useful in a robot. I mean, you wouldn't want an expensive machine to just let itself be destroyed if it could easily save itself, right? To properly do that, you would need to program it to recognize when it is in danger or malfunctioning and take steps to fix the problem. At that point, they would have something similar to a sense of pain. It's kind of ambiguous where it might progress from there, but those first steps are pretty reasonable from a design standpoint.
YouTube · AI Moral Status · 2017-05-14T18:1… · ♥ 25
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UggLATWm7zy_1HgCoAEC.8RNh-2LC0dq8SYmwOU1pqF", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UggLATWm7zy_1HgCoAEC.8RNh-2LC0dq8SvnX3_NM1H", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UggLATWm7zy_1HgCoAEC.8RNh-2LC0dq8TZaylAKGpF", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgjeKkhTiv7Hz3gCoAEC.8RMHCXv3sjC8SxTzSCYhno", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwCnJtFPeuC9fR76Z94AaABAg.8QfMAafCrSj8QfXbNeFMQT", "responsibility": "none", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytr_UgyNyx8NSktHm15PmgN4AaABAg.8QbkmPQfTQe8S9pXzDQEM8", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxAkrhfdggp5M7Mml14AaABAg.8QaaGzrgkbL8QkkFcgAScE", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_UgxAkrhfdggp5M7Mml14AaABAg.8QaaGzrgkbL8QqWE8BZVtF", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgxAkrhfdggp5M7Mml14AaABAg.8QaaGzrgkbL8QravBfDonJ", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgytjKg_utyNcZ2NWTt4AaABAg.8QUuG_AclmP8RK344ChpwN", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
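As a minimal sketch of how the coding-result table can be cross-checked against the raw model output, the snippet below parses a one-record excerpt of the response (taken verbatim from the array above, corresponding to the displayed comment) with Python's standard `json` module and prints the four coded dimensions. The field names match those in the response; everything else is illustrative.

```python
import json

# One-record excerpt of the raw LLM response shown above (verbatim).
raw = (
    '[{"id":"ytr_UggLATWm7zy_1HgCoAEC.8RNh-2LC0dq8SYmwOU1pqF",'
    '"responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

records = json.loads(raw)
first = records[0]

# These values should agree with the Coding Result table for this comment.
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {first[dimension]}")
```

Extending this to the full array is just a matter of looping over `records` and matching each entry's `id` to the corresponding comment.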