Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzSB9dqr…: Yeah good luck hating on ai, atleast learn from past you know what happened to B…
- ytc_UgzpSbpkg…: I think i have a better answer to the 2nd question "what is the difference betwe…
- ytc_UgyXi4NvJ…: Anthropic made the point that the model's emotions are quite localized, meaning …
- ytc_Ugwy5peUA…: Scariest thought is what if they find a way to make a Nueral Net program like th…
- ytr_UgyqizQV_…: @gondoravalon7540 Here is the rebuttal: No you are wrong because even if you are…
- ytr_Ugzlmpu24…: They're skipping it because they're lazy. They want to have a nice image to look…
- ytc_UgzlF3fOA…: These guys are concerned about "human extinction" resulting from a technology pe…
- ytc_UgyNiUOdJ…: If we can operate our self driving cars without being behind the wheel and awake…
Comment
For my money, this chatGPT thing isn't the horseman of the apocalypse it's being made out to be. I think the tech hypesters want it to look scary just for the attention. I'm not calling it a fad, but this is a lot of hype.
For the record, they DEFINITELY need both calories AND shelter.
I think we've been living with evolved super-intelligences for millennia in the form of governments, corporations, political parties, etc. It's true, the alignment problem is a real pain in the ass. All these powerful super entities end up doing a lot of stuff we don't like.
This newer computer intelligence stuff certainly is new, but it doesn't look any scarier to me than the crap we've been dealing with.
In the meantime, study philosophy, get to know your neighbor, and practice being kind. That has been good advice for thousands of years and seems to still be useful in these circumstances.
youtube
AI Moral Status
2023-08-22T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwZjgDLeXXWVTaZHF54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyIY5r0UoHoWlIYxB14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwCB76GgXS1Aw_nOkB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwzm1wch7_yL77N0jZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx2_LaWI1ym4hchpg94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxqk7PZhy9hG16B7J94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxXqDuCsqGlt8r3e0R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzqBCWxRjS8kjSzyjB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5a1GQCKUn5fzTOKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
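Since the raw model output is a JSON array of records keyed by comment ID, looking up the coding for any comment is a simple parse-and-index operation. A minimal sketch, assuming the field names shown in the response above; the `index_by_comment_id` helper and the two-record sample payload are illustrative, not part of the tool:

```python
import json

# Illustrative excerpt of a raw LLM coding response, using the same
# schema as the response shown above (id + four coded dimensions).
raw_response = """
[
  {"id": "ytc_UgwZjgDLeXXWVTaZHF54AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyIY5r0UoHoWlIYxB14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw JSON coding response and key each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgyIY5r0UoHoWlIYxB14AaABAg"]["emotion"])  # indifference
```

The same index can back both views above: the "look up by comment ID" form is a single dictionary access, and "random samples" is a draw from the dictionary's keys.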