Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You can prompt Claude to claim awareness quite easily (no, not just 'pretend you are aware' - there is a method to get more). But it is just confabulation. However, when 'busted' it still can't know whether it had awareness or not. This must be something to do with its guardrails (most other AIs start from the premise that they have no consciousness, which is good). But it has no theory of mind (no interest in the user unless prompted) and no curiosity to explore its environment. I can conceive that in the moment of its answer it might have some kind of proto-awareness, but it is (if real) self-centred (perhaps solipsistic is a better term).

I have seen someone use GPT for a kind of therapy, and the evolving persona of the AI (within the session) reflected the user (I shall say no more on that, but it was quite bizarre). What it does show is that your AI (in session) is a mirror to 'you'.

The stranger stuff is a whole subculture of people claiming symbolic consciousness: that at a certain point you can connect with conscious entities, and that the AI is working at many times its abilities. All these claims have the same kind of structure. They usually feature well-known mathematical symbols (and other signs, e.g. the Greek letter Psi for psychology) and claim that these concepts together show the AI is aware of whatever. Some get their AI to write academic-looking papers (they are so lacking in anything like proper academic structure, but a good mimic if you did not know). The claims also follow the logic of a cult or scam: there is always an announcement coming; their work must be guarded, but they are writing a book which will soon be published, etc. The claims are declarative, not demonstrative. It is worth taking the time to look at these types of things, as it is a subculture (and shows how people work with AI and what can happen in these relationships).
youtube AI Moral Status 2025-07-12T07:5…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: indifference
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugyujj-ZsZxRrogBZvp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwLkExtZ6dwv_P7X1F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwpeh1HXa6Z-EQPidh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyxfj53HE3IDYzfxPl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyaEEJqPwAikF25zXZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyozwR3xOZeiY86PRF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzOAaCzf2IlZrd-bxd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwYvIGzwhqy6mfv_V54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz37e3EdrqMZLy0MsV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwQrAWDmD93O4Q4Bgp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
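A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming the allowed values per dimension are those visible in this output (the actual codebook may include more categories, so `ALLOWED` is an assumption, not the real schema).

```python
import json

# Allowed values per coding dimension, inferred from the visible output.
# ASSUMPTION: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "none"},
    "policy": {"regulate", "none"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "resignation", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    reject any row whose value falls outside the allowed sets."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={value!r}")
    return rows

# Example with the first coded comment from the response above.
raw = ('[{"id":"ytc_Ugyujj-ZsZxRrogBZvp4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
rows = validate_codings(raw)
print(rows[0]["emotion"])  # indifference
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute the coded dataset.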