Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@ForOne814 Your Wonderland you mean?
"It's not important because it doesn't aff…
ytr_UgxsE6j-l…
This is awful! Will META pay for illegally steeling and downloading billions of…
ytc_Ugxgaky-l…
AI lacks the I. It's dumber than someone learning something new by copying becau…
ytc_Ugzo2E1z6…
You know the world is about to end as we know it when the AI CEO sounds global a…
ytc_Ugy9ZMnpU…
5:25 sick so all I have to do is pull up AI slop on a big screen and photograph …
ytc_UgxrdjmA-…
Autopilot: I can detect dangers and automatically maneuver myself so that I avoi…
ytc_UgxNYN3Q3…
I always like to say that my art and my drawings always have a piece of me with …
ytc_Ugzcwt82Q…
@vanillasky2194 making AI illegal will make things worse. Higher ups have more …
ytr_Ugwf9t23r…
Comment
While these models can seem eerily clever and creative, they are essentially just hypercharged autocomplete and lack true intelligence or understanding. However, if these systems become more efficient and cost-effective, they could be distributed at massive scales, leading to the creation of millions of emulates that emulate human intellect. This could have potentially dangerous implications, such as creating counterfeit digital assistants designed to gather personal data or flooding social media with spurious claims to poison public discourse. The future of AI will depend on whether these language models hit a speed bump or become more efficient, and whether society can find ways to regulate and control their use.
youtube
AI Moral Status
2023-08-21T02:5…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzJ3FLTyw6se1M-VXJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy4g9JhMUtU1hOpO014AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzD5Ga7iRgWTzMgTN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx1GuESI1uqv6DHL554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy980grYPdF95VZUFB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxAgMZpYk-v22DbTqp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx7VwP2dj16RojXhBx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzbDJ_lHDfvcS--4Kp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwjy_66TPPRzFpGsCZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzJ9xAXSYVCRXs2ZR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
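The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch can be parsed and indexed for lookup by comment ID (variable names are illustrative, not from the tool itself; the two records are copied from the response above):

```python
import json

# Excerpt of a raw LLM response: a JSON array of coded comments.
raw_response = """
[
  {"id": "ytc_Ugy4g9JhMUtU1hOpO014AaABAg",
   "responsibility": "distributed", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzbDJ_lHDfvcS--4Kp4AaABAg",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

# Index the batch by comment ID so any coded comment can be
# inspected directly, as in the "Look up by comment ID" panel.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding result for one comment.
code = codes_by_id["ytc_Ugy4g9JhMUtU1hOpO014AaABAg"]
print(code["policy"], code["emotion"])  # regulate fear
```

Indexing by ID also makes it easy to verify that every comment sent in a batch came back coded, by comparing the set of returned IDs against the set of submitted IDs.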