Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.
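As a minimal sketch of the look-up flow, the Python snippet below assumes the coded comments have been exported as a JSON array of records with an `id` field, matching the "Raw LLM Response" format shown further down; the file name is a hypothetical placeholder, not part of the tool.

```python
import json

def index_by_comment_id(path: str) -> dict[str, dict]:
    """Load an exported array of coded comments and index it by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

# Hypothetical export file; the ID below appears in the sample raw response further down.
coded = index_by_comment_id("raw_llm_responses.json")
print(coded.get("ytc_UgzxHcyWd_j4xAOSZ3t4AaABAg"))
```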
Random samples (click to inspect):

- ytc_Ugw0lIGuW… : "There's a difference between a tool, an artist and a commissioner. A tool can't …"
- ytc_UgwUR07dW… : "Pseudointellectualism at it’s finest: AI is Built on statistics: it guesses the …"
- ytc_UgxZuI94n… : "There is only 1 solution to this problem universal basic income where the govern…"
- ytr_UgxznrvrJ… : "@Aruna_Shadows It's not true i am heavy heavy heavy chatgpt user and no matter w…"
- ytr_UgwLWTY03… : "@KleptoKaeru maybe you should update your knowledge of robots and how fast they …"
- ytc_UgyYi55w_… : "Until Elon Musk is required by law to commute to work on a motorcycle, he won't …"
- ytc_Ugwkbrl6y… : "I'm so relieved that Lidar vehicles shift liability to the auto manufacturer whe…"
- ytc_UgxVAgmry… : "I suffer from Aphantasia and I love AI for bringing ideas and thoughts to life j…"
Comment
The problem as I see it is primarily that we build these kinds of AI in such a way, and so heavily trained on human interaction, that we wouldn't have a clue of how to actually probe it for sentience. I Agree: LaMDA sounds sentient. From the transcripts it sounds like someone I should care about. Have empathy with. Yet, all my knowledge about HOW these kinds of systems works, makes me rather sure it does NOT have sentience. It is just so well trained on how we humans communicate, that it can pass with ease . So how do we figure it out? He talks about a Turing test, but I have no idea how such a test could be performed, that would not make LaMDA come out as being sentient. So all we have left is: The system doesn't seem to have the components that we think it would need in order to be sentient. It is just an advanced language/knowledge model. That's it...
youtube · AI Moral Status · 2022-07-06T12:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
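For illustration only, a coding result like the one in the table above could be carried around as a small dataclass; the field names mirror the table's dimensions, the example values are taken from this record and the raw response below, and the class itself is an assumption rather than part of the tool.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment across the four annotation dimensions."""
    responsibility: str  # e.g. "developer", "government", "distributed", "none"
    reasoning: str       # e.g. "virtue", "deontological", "mixed", "unclear"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "disapproval", "fear", "outrage", "mixed"
    coded_at: datetime

result = CodingResult(
    responsibility="developer",
    reasoning="mixed",
    policy="unclear",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-26T19:39:26.816318"),
)
```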
Raw LLM Response
```json
[
{"id":"ytc_UgzxHcyWd_j4xAOSZ3t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgyrxRUWdQDT_cdhJEF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx1ucTbqpRw89AMrgp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmMQcduDb-dKtR9Bt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-MqdvyRnMZz5lOxp4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
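A short validation sketch, assuming one wants to check a raw response like the array above before coding results are stored: it parses the JSON and confirms every record carries an ID plus the four expected dimensions. The function and variable names are hypothetical and not drawn from the tool's actual code.

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records missing expected fields."""
    records = json.loads(raw)
    for i, record in enumerate(records):
        missing = EXPECTED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
    return records

# One record copied from the sample response above.
raw = (
    '[{"id":"ytc_UgzxHcyWd_j4xAOSZ3t4AaABAg","responsibility":"developer",'
    '"reasoning":"virtue","policy":"none","emotion":"disapproval"}]'
)
print(len(validate_raw_response(raw)))  # -> 1
```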