## Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a response by comment ID, or browse the random samples below.
- "Interesting discussion, seems like Steven's more onboard now about the AI risks …" (ytc_UgylV5Kg6…)
- "i am working on artificial consciousness and i can say just one thing .. both p…" (ytc_Ugzac2vfq…)
- "Seems like a shill for the future installation of AI "personhood" as national le…" (ytc_Ugy4Tp6dw…)
- "When my sister was in middle school, she was very sick. It took over a year of w…" (ytc_Ugx2Tfw0H…)
- "I have said this over the last year. AI does not need to get to AGI to be very…" (ytc_UgwzwQPWw…)
- "As long as these robots aren’t human-like, we should be fine. Robot vacuum’s, au…" (ytr_Ugx5rbt1m…)
- "> 4. AI might be taking away jobs from medical scribes but it’s too early to …" (rdc_nnrqjru)
- "What if it was realized in the form of a physical robot, it would be terrible😢…" (ytc_UgwDBAOCE…)
### Comment

> There’s an easy workaround regarding using the Turing test, just have both LaMDA and the human say when asked their identity that they are AI.

Source: youtube · Video: *AI Moral Status* · Posted: 2022-06-30T09:1…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
### Raw LLM Response

```json
[
  {"id": "ytc_Ugw5j7D18_Oz_2lJhA14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgybiYcvGfKS36yN3Z54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxT_J32secKIh4ls7B4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxAIqVyUD1I9iJ_Tjd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwwgcg08eHrRzjli6B4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
```
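The lookup-by-comment-ID view above amounts to parsing one raw batch response and indexing the records by their `id` field. A minimal sketch in Python, assuming the batch response is a JSON array with the fields shown (the helper name `index_codings` is hypothetical, and the sample is abridged to two records):

```python
import json

# Raw LLM batch response, abridged to two records for this sketch.
raw_response = """
[
  {"id": "ytc_Ugw5j7D18_Oz_2lJhA14AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgybiYcvGfKS36yN3Z54AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
"""

# The four coding dimensions from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse one raw LLM batch response and index codings by comment ID."""
    codings = {}
    for record in json.loads(raw):
        # Keep only the coding dimensions; absent values fall back to "unclear".
        codings[record["id"]] = {d: record.get(d, "unclear") for d in DIMENSIONS}
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_Ugw5j7D18_Oz_2lJhA14AaABAg"]["reasoning"])  # → deontological
```

The `get(..., "unclear")` fallback mirrors how the result table renders dimensions the model left uncoded.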