Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The Turing test is a terrible way of testing for what it wants to test. It was conceived over 80 years ago in a time where the ONLY conceivable way that anyone imagined having a "normal" conversation with a machine was if the machine was sentient. Of course, 80 years later and with computers a gazillion times more powerful than anything imaginable at that time and with access to hundreds of millions of examples of human communication, a complex data model can emulate (and very accurately) normal human response. NONE of that means that the algorithm is sentient and this guy knows it (or should know it). Either he does and this is all a personal publicity stunt (he seems to be launching into a "speaker" career) or he's just naive and fell for the AI (not the first time this has happened). There's no way a data model springs into consciousness without it being explicitly built into the system. And we are nowhere near today to even understand how that would work, so....no...pretty sure the AI is not self-aware.
Source: YouTube, "AI Moral Status", 2022-07-01T09:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_Ugy--nBrbwfUY0dBkOV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugwzqpzm30HX4C3wyNN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgyuIMACQCCGvmzg0pt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyC6viuep8ppeuEP094AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgzpHU5Gxc7f97ZvyU54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}]