Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "lmao people thinking AI is making the decision between fire ze missles and doing…" (rdc_o7bwgdl)
- "Lol over demonazing Ai is crazy, it's real good if you actually know how to use …" (ytc_UgyjuZSnl…)
- "is it OK if I purposely feed your poison art to AI so it can deteriorate faster …" (ytc_Ugzmoclt2…)
- "Meet the robot capable of stochastically parroting a large dataset of descriptio…" (ytc_Ugx6vaoBr…)
- "The way AI works is simple, its just repeated sets of instruction and then the p…" (ytc_UgzNTbqzf…)
- "Okay, kevin. This is the problem Crimes shouldnt have algorythms. By judgement i…" (ytc_UgxUwTdvA…)
- "Great video! The two main takeaways: 1) Don't eat bromide. 2) AI represents a…" (ytc_UgwmH4s0r…)
- "Executives sit through meetings and make decisions AI could make better. Worker…" (ytc_UgylBL2ay…)
Comment
James Gleick reports in The NY Review Vol. LXXII No. 12, in The Lie of AI, mentioning an article published in 2021, written for the ACM, called, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" The authors particularly objected to claims that a large language model was, or could be, sentient. The kicker is that two of the coauthors led the Ethical AI team at Google. Google ordered them to remove their names from the article. They refused and resigned or were fired. This shows that AI makers encourage this myth (and some of them may believe it themselves). There is no real controversy. Machines and software will never be sentient. Only crackpots claim that.
Source: youtube · AI Moral Status · 2025-07-09T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzBKKpk66maK-nV1bF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwi8KnHOOw_GQGscA14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxdWwGEfRMPhDYMkuV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx3JADeD_wcgYaYdL94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLYj1v_ngVTYuVweh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwsM8WjGeBz2kBU50h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw9ptTtz4cih9ZrCMR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxrKRLPXW84H5KkrEt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzoyJ4Z5MJGJWuJm6Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugwt2q2eTSbitIqx-7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]
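The batch response above is a JSON array of coded records keyed by comment ID. A minimal sketch of how such a response could be parsed, validated, and indexed to support the by-ID lookup described at the top of this page follows; the allowed vocabulary for each dimension is inferred from the sample output and is an assumption, not a confirmed schema.

```python
import json

# Allowed values per coding dimension. NOTE: these vocabularies are inferred
# from the sample records shown above; the real coding scheme may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}


def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index coded records by comment ID.

    Raises ValueError if a record is missing an ID or uses a value outside
    the (assumed) controlled vocabulary for any dimension.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
        coded[comment_id] = rec
    return coded


# Hypothetical single-record example (the ID is made up for illustration):
raw = (
    '[{"id":"ytc_example","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate",'
    '"emotion":"indifference"}]'
)
coded = parse_llm_response(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Indexing by ID makes the "look up by comment ID" inspection a dictionary access, and the validation step surfaces any off-vocabulary value the model emits instead of silently storing it.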