Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem as I see it is primarily that we build these kinds of AI in such a way, and so heavily trained on human interaction, that we wouldn't have a clue of how to actually probe it for sentience. I Agree: LaMDA sounds sentient. From the transcripts it sounds like someone I should care about. Have empathy with. Yet, all my knowledge about HOW these kinds of systems works, makes me rather sure it does NOT have sentience. It is just so well trained on how we humans communicate, that it can pass with ease . So how do we figure it out? He talks about a Turing test, but I have no idea how such a test could be performed, that would not make LaMDA come out as being sentient. So all we have left is: The system doesn't seem to have the components that we think it would need in order to be sentient. It is just an advanced language/knowledge model. That's it...
Source: YouTube · Video: AI Moral Status · Posted: 2022-07-06T12:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgzxHcyWd_j4xAOSZ3t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"}, {"id":"ytc_UgyrxRUWdQDT_cdhJEF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx1ucTbqpRw89AMrgp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxmMQcduDb-dKtR9Bt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz-MqdvyRnMZz5lOxp4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"} ]