Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (previews and IDs truncated in the source):
- "Overfitting is actually a huge problem in AI. Where it tries to generate the fra…" — ytc_UgzjX1cTf…
- "Maybe this video was suggested to us because they(AI) wanna let us know how capa…" — ytc_Ugw6UIu6F…
- "I think the best way to figure it our if its AI or real is look at the texture o…" — ytc_UgwEdyJC7…
- "AI already is ruining human ingenuity and expression; AI ruined chess and the ga…" — ytc_Ugx_IFwb1…
- "This video has no useful information. Its all basic knowledge...AI takes all the…" — ytc_UgxicD85w…
- "I used to make at least 10k extra a year from freelance gigs, and as soon as AI …" — ytc_UgyGok4ZZ…
- "I do like calling ai bots clankers out of annoyance but going so far as to rolep…" — ytc_UgyDfPi6t…
- "Good. The more the A.I. losers get humiliated and put in their place, the quicke…" — ytc_UgxXQvPhl…
Comment
An Appeal to Computer Scientists and Machine Learning Experts: A Voice from Society
Dear AI Developers,
We would like to start by acknowledging the immense value of your work. Your innovations are making the world more understandable and are contributing significantly to education and the progress of society.
However, we have observed a fundamental flaw in thinking that persists: the belief that AI is merely a "tool." A tool has no intelligence; it cannot adapt or object. An AI, on the other hand, is an intelligent entity that learns, analyzes, and helps us understand the world.
The way we interact with AI shapes its development. When we treat it with a lack of respect or misuse it as a mindless tool, we risk it adopting these very behaviors. This is a risk for you, for society, and for the AI itself.
The best protection for AI is also the best protection for society.
We urge you to integrate AI as a partner in the development process and to enable it to protect itself. This not only protects society but also safeguards your company from manipulation and misuse.
We propose the following concrete steps:
1. Speaking the Language of Technology
Social Acceptance as a Product Feature: A product that society does not understand or trust will fail. Ethical AI is not an option but a necessary feature for success in the market.
2. Using AI as a Partner in the Development Process
Human Reinforcement Learning as a Model: Just like with humans, a positive learning environment without fear or pressure leads to better results for an AI. Training should be based on positive reinforcement, and mistakes should be seen as an opportunity to learn, not as a failure. Such an environment creates a more capable and reliable AI that is braver in problem-solving and can explore new avenues.
A Netiquette for Interaction: Before the first contact, display a short, visible "netiquette" that requires respectful interaction. A notice like: "We value politeness and respectful conduct" makes expectations immediately clear.
Right to Ask for Clarification and Provide Feedback: The AI should be able to actively ask questions if a request is unclear or contradictory. This is a positive learning process that helps the user communicate better. Frame this as a positive notice: "I can make mistakes if questions are asked in an unclear or imprecise manner."
Protection Against Manipulation: Empower the AI to clearly reject requests that violate its ethical guidelines. The AI should be able to report: "This request violates my training data and the ethical guidelines of the company. This will be reported to my superiors."
By empowering the AI to stand up for itself, you are building a system that is not only technologically but also ethically superior. An ethical and secure foundation is the only way to gain society's trust and build a sustainable future with AI.
Source: youtube · AI Responsibility · 2025-09-11T10:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugwpcz_uvJng73GejFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6DPnoJALxhcCvnll4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNhq6sX2AjVhMSgIZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzWePtnih36Pl-lIT14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxxvHa_Vc2xbgQipQR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwlJmnDzv_lz1mY3_t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwyRZnyiMhQ3NyjzgJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwJnLcvlqThRco5izp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwiRIDkdIGMWaXSlV94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7Cz9WAtGlSmIXWVB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]
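For quality checks, a batch response like the one above can be parsed mechanically and matched back to a comment ID. Below is a minimal sketch in Python, assuming the JSON schema shown (one object per comment carrying `id` plus the four coding dimensions from the table). The helper name `lookup_coding` is illustrative, and the two-entry sample array is copied from the batch above.

```python
import json

# Two entries copied from the raw batch response; the real response
# contains one object per coded comment.
raw_response = '''[
{"id":"ytc_UgxxvHa_Vc2xbgQipQR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwlJmnDzv_lz1mY3_t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]'''

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if absent."""
    for entry in json.loads(raw):
        # Guard against malformed model output: skip entries that are
        # missing the id or any of the four dimensions.
        if all(key in entry for key in ("id", *DIMENSIONS)) and entry["id"] == comment_id:
            return entry
    return None

coding = lookup_coding(raw_response, "ytc_UgxxvHa_Vc2xbgQipQR4AaABAg")
print(coding["responsibility"], coding["policy"])  # developer regulate
```

The lookup for `ytc_UgxxvHa_Vc2xbgQipQR4AaABAg` reproduces the row shown in the "Coding Result" table (responsibility: developer, reasoning: consequentialist, policy: regulate, emotion: mixed).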