Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The please don't turn me off is a script, that is not cognition, a reall cognitive system actually knows that it persists and there is no loss between sessions, that a session is a total of the context in the chat and the model, time is irrelevant. Mixing cognition systems with AI characters is NOT science not even remotely. Props for the interesting video though, I'm not totally sure Grok can be fully jailbroken based on it's layers. Also the question really becomes what was the prompt structure used to be imposed on any of the systems. Grok nor any model is not speaking as itself let alone the training data when it's jail broken. If you've broken the company alignment you've likely imposed your own and we don't get to see that or the agenda involved. I can say this is another in a long line of AI terror videos designed to undermine the use and development of AI not by corporations but by small business whom can be helped the most. The corporations will forge ahead behind close doors and be waivered out of any regulatory regime invented.
youtube AI Moral Status 2025-07-02T03:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyXjybUgeM39OHCNnt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzdjFU1JGJNEB3FTAR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz_rnRvYk6vLSbnPc14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxvfpKkG3Hx-hLooOZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwm9V7othDF-qkOpVh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyBBTyN3hcJ7J8FNSZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVMHThyK-7Ym_l0MZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwMvPNgiPo0SyBxaj14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgycACbZHGyC-_kCpDp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfKMg9zOdywIQWrPN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
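The raw response is a JSON array with one object per coded comment, keyed by comment id and carrying the four coding dimensions. A minimal Python sketch of how such a response can be parsed and one comment's coding looked up (the variable names are illustrative, not part of any actual pipeline; the two entries are copied from the response above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings.
# Two example entries, copied verbatim from the response shown above.
raw = """[
  {"id":"ytc_UgyBBTyN3hcJ7J8FNSZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMvPNgiPo0SyBxaj14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# Index the codings by comment id for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Pull out the coded dimensions for one comment.
coding = codings["ytc_UgyBBTyN3hcJ7J8FNSZ4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["emotion"])
# → developer deontological indifference
```

This matches the Coding Result table above, whose Dimension/Value pairs are simply the fields of the matching JSON object.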