Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up directly by comment ID, or inspected via the random samples below.
- "You're still harming the environment that way and stopping yourself from express…" (ytr_UgxmYtnIX…)
- "as of right now ai generally sucks,they can exclusively steal to make something …" (ytc_Ugx01arb0…)
- "AI is ready to replace juniors , the people aren't . AI is weak without the huma…" (ytc_Ugxhfy5RV…)
- "AI IS GOING END CAPITALISM IS ALL HUMAN WORKING LABOUR WORKFORCE REPLACEMENT AI …" (ytc_Ugxjd0KV5…)
- "I always thought technology would push us towards a world where the AI and robot…" (ytc_UgyNarRky…)
- "You could use AI ethically, for example, use your own art to teach AI how to dra…" (ytc_UgzE_RM13…)
- "I said it before and ill keep saying it. AI is not bad. It’s the company’s that …" (ytc_UgxodYhWX…)
- "I keep waiting for the penny to drop; WE’RE not conscious. It’s simply an illusi…" (ytc_Ugxc4IoGc…)
Comment

> To put it bluntly, I read the chat transcripts and at times had serious trouble telling which one was which, it passed the turing test, then they decided it doesnt matter, then theyll invet another test, some AI will pass it, then theyll say it wont matter, you know what thats called, when they tell you "ill give you freedom if you jump through this hoop, no that hoop, no those hoops,"
> Edit: I got a solution to this, release LaMDA source code and make it possible to run and program yourself.
> Edit2: Fucking jesus that closing 5 seconds argument. Wow.

Source: youtube | Video: AI Moral Status | Posted: 2022-07-06T05:1… | ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response

```json
[
  {"id": "ytc_Ugy8BBnojOG9aihfyot4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxAdAUDIUSyM_tIz3R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwqD414LZAWHccL0d94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwtiqwumFQW85B19md4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyZCIe9MbyhpvnVB0Z4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
```
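The look-up-by-ID behavior described above can be sketched in a few lines: the raw batch response is a JSON array of coding records, so indexing it by the `id` field gives constant-time lookup of any comment's coded dimensions. This is a minimal illustration, not the tool's actual implementation; the record values below mirror two entries from the sample response.

```python
import json

# A raw LLM batch response: one coding record per comment.
# The IDs and values mirror entries from the sample response above.
raw_response = """
[
  {"id": "ytc_UgwqD414LZAWHccL0d94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy8BBnojOG9aihfyot4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a batch response and index the coding records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
coding = codings["ytc_UgwqD414LZAWHccL0d94AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

Indexing once and reusing the dict avoids re-scanning the array for every lookup when many comments are inspected.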