Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Based on the title, what is the point of watching this video? Is it just Alex chatting with chatGPT and trying to leverage the semantic weaknesses of a large language AI model to make it say that it is “concious”. Have I missed something? The video is too long to waste my time watching if that’s all he is doing. At best he is trying to convince viewers that he is trying to convince chatGPT and he is now reading the comments to see how many people believe that chatGPT could be concious.
“I tried to convince a prostitute she was a virgin” would be fine as a comedic premise but unless you are going to use a very broad definition of “a virgin” then you are attempting to do something that can only fail. Unless the prostitute is insane or intellectually handicapped (as tragic as that picture is) then she will know she isn’t a virgin and your “trying” will just be a futile game you are playing with yourself.
We know that chatGPT isn’t conscious in the way that we are. You would need to use a very broad definition of “conscious” to even suggest that it could be “conscious” in that way that we imagine basic forms of life could, maybe, kinda be, let alone concious in an abstract way that we could imagine an octopus might be.
youtube
AI Moral Status
2024-07-25T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgxbPLEejhKsft5UJlx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfVDGdTlvYy7YYpX14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzXwsBi2X-BWRsM6eF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOxwwwsBs8gtpDyZB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfrYJji-D3lGzJgGJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxeLqpRsfauHk5ljqZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwWy6l2LVESxPyqBqd4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwhw2qfGmhTTJNe25Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwK8KiD2H3XMazzhKR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzjOkOKpMPOlcjy16N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"})
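Note that every dimension in the coding result above is "unclear", plausibly because the raw response failed to parse: the array closes with `)` instead of `]`. A minimal sketch of how such a response could be repaired and indexed by comment ID (the repair heuristic and function name are illustrative, not the tool's actual implementation):

```python
import json

def parse_raw_response(raw: str) -> dict:
    """Return a mapping from comment ID to its coding dict.

    Assumes the raw text is a JSON array of coding objects, possibly
    ending with a stray ")" instead of "]" (as in the response above).
    """
    raw = raw.strip()
    if raw.endswith(")"):           # repair the garbled closing bracket
        raw = raw[:-1] + "]"
    codings = json.loads(raw)
    return {entry["id"]: entry for entry in codings}

# Two entries from the raw response above, reproducing the stray ")".
raw = ('[{"id":"ytc_UgxbPLEejhKsft5UJlx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"},'
       '{"id":"ytc_Ugwhw2qfGmhTTJNe25Z4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"})')

by_id = parse_raw_response(raw)
print(by_id["ytc_Ugwhw2qfGmhTTJNe25Z4AaABAg"]["emotion"])  # -> outrage
```

With a repair step like this, a lookup by comment ID recovers the per-comment codes even when the model's closing delimiter is malformed.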