Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we put too much emphasis on what humans already do as the standard for what makes something intelligent. Instead of looking at an AI's actions, we should look at the reasoning and motivation that lead it to act. For something to be intelligent *like* humans, it ought to have a multitude of motivations as well as multiple senses through which it perceives reality. It can't just be a box that talks; it has to *do* something. I think that in order to have a conscious mind it must also have a subconscious mind, a layer of weak AIs feeding into the so-called Strong AI, just as we have layers of instincts, emotions, and intuitions feeding into our consciousness. An AI designed to be "life-like", I think, would be naturally inclined toward the same inadequacies present in life. It may well develop any number of idiosyncrasies along with a personality; it may behave irrationally, and it will almost certainly behave counter to our interests eventually. I don't think it will ever be hostile, especially if we build it with safety in mind, but any machine built to have human-level intelligence could be designed and run more efficiently as specialized parts running on weak AIs. There's really no need for human-level intelligence in a computer if, as I suspect, it would bring human deficiencies along with it.
youtube 2016-08-09T06:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UggvN5B9EY0HI3gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggVxA8XyFbYm3gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghNyj_gKovnvngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UggO048u4kpoGngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugi5z5ht3qEuzXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggDvb6ZtNaDN3gCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgipDM1YLIDEK3gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ughe1hk1MXFuZ3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggSATDK6Ub7m3gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggYW1x63Gx9h3gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
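A minimal sketch of how a raw response like this could be parsed and sanity-checked before the coded values are trusted. The allowed category sets below are inferred from the values observed in this output, not from any documented schema, so treat them as assumptions:

```python
import json

# Two entries copied verbatim from the raw response above.
raw = (
    '[{"id":"ytc_UggvN5B9EY0HI3gCoAEC","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UggVxA8XyFbYm3gCoAEC","responsibility":"none",'
    '"reasoning":"mixed","policy":"none","emotion":"approval"}]'
)

# Category sets inferred from the observed output (assumption, not a schema).
REASONING = {"consequentialist", "deontological", "mixed", "unclear", "none"}
EMOTION = {"approval", "indifference", "resignation", "mixed", "none"}

def validate(entries):
    """Keep only entries whose coded values fall in the inferred category sets."""
    return [
        e for e in entries
        if e.get("reasoning") in REASONING and e.get("emotion") in EMOTION
    ]

entries = json.loads(raw)
valid = validate(entries)
```

Filtering rather than raising keeps a single malformed model output from discarding a whole batch; rejected entries can be logged and re-coded separately.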