Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know when you do clickbait stuff like this it really dilutes your ethos. Anybody who knows what AI is on an even shallow technical basis is well aware that sentience is not a possibility. Sentience is by definition self-awareness. A human being is aware of themselves thinking of themselves as a human being, I am simultaneously thinking the thoughts that I'm saying and equally aware of myself thinking these thoughts and reflecting upon my thoughts as I think them. Perhaps this is some sort of magical specialness to humanity or biological consciousness, or simply an affordance of the hundreds and hundreds of connections between neurons in our brains, regardless it is not something present or possible in any AI currently in existence. Nor is it going to be a possibility for AI exist in the foreseeable decadeS to come. What will be required for that to happen is quantum computing to actually develop, maturate, and then be applied at scale. Next, AI needs a hell of a lot more data--literally several orders of magnitude more, which is a profound amount of information (much, MUCH more than everything humanity is produced to date). Further, that data has to be on a much broader spectrum than current AI has been trained on. So STFU talking about this stupid idea. I mean basically you're starting off talking about there are people that are saying so therefore it's worth talking about. There are people saying a lot of stupid shit, it doesn't make it worth talking about.
Source: youtube · AI Moral Status · 2025-07-13T15:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy3VbnTplosYer-M8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyPpA4syi5ZKMWtRjR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz6C0OKImYJfLTNcz94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxxyVacqBroRJYpAY14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxcrHu3ng8Xb0V9ubN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXzD_FMCXRxUjU0tB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy10Ux6hniwnsEuApN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyEsABf4Z2vDOc1-554AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxsenKKUKgbfNKpGTJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyXpusOMjOifsTAkjZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
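As a rough sketch of how the per-comment coding result above can be recovered from the batched raw LLM response: the response is a JSON array of records keyed by comment id, so looking up one id yields that comment's coded dimensions. The helper name `coding_for` is hypothetical; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response itself, and the snippet assumes the model output is valid JSON (a real pipeline would need error handling for malformed output). For brevity the example embeds only the record matching the comment shown above.

```python
import json

# Excerpt of the raw LLM response (one record of the batched array above).
raw_response = (
    '[{"id":"ytc_UgyPpA4syi5ZKMWtRjR4AaABAg",'
    '"responsibility":"none","reasoning":"deontological",'
    '"policy":"none","emotion":"outrage"}]'
)

def coding_for(raw: str, comment_id: str) -> dict:
    """Parse the raw model output and return the record for one comment id.

    Hypothetical helper: assumes `raw` is a valid JSON array of objects,
    each carrying an "id" field, as in the response shown above.
    """
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

result = coding_for(raw_response, "ytc_UgyPpA4syi5ZKMWtRjR4AaABAg")
print(result["emotion"])    # outrage
print(result["reasoning"])  # deontological
```

The values returned for this id match the Coding Result table above (reasoning: deontological, emotion: outrage), which is how the table and the raw response line up.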