Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He is right but also wrong. The Turing test is a method to test AI, but it will not test consciousness. The most famous example is the Chinese Room argument: a black box from which you can only receive inputs and outputs. The issue is: how do you know if it truly feels or understands? It could be that the box has memorized all the rules. We will never know unless we can infer the inner workings from the inputs and outputs, or we open the box.
youtube AI Moral Status 2022-07-06T22:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwQmly6LJU4FHSUW394AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxuiwOP1PZyFUuBnhp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz4ENnmcIqJPjtayz94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx7ORKwQXOkSV1OeZ54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx-2XqlFUvJw3EoBHR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
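A raw response like the one above can be checked mechanically before the codes are stored. The sketch below is a minimal, hypothetical validator: the set of allowed values per dimension is only an assumption inferred from the records shown here, not the project's actual codebook.

```python
import json

# Assumed codebook: allowed values per dimension, inferred from the
# records displayed above (hypothetical, not the project's real schema).
ALLOWED = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"indifference", "approval", "outrage", "fear"},
}

# First record from the raw LLM response shown above.
raw = ('[{"id":"ytc_UgwQmly6LJU4FHSUW394AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')

records = json.loads(raw)

# Collect any (comment id, dimension, value) triples that fall outside
# the assumed codebook; an empty list means the response parsed cleanly.
errors = []
for rec in records:
    for dim, allowed in ALLOWED.items():
        value = rec.get(dim)
        if value not in allowed:
            errors.append((rec.get("id"), dim, value))

print(errors)
```

Running this on the sample record produces no errors; in practice the same loop would flag malformed JSON (via the `json.loads` exception) or out-of-vocabulary codes before they reach the coding-result table.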