Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews; click an entry to inspect the full comment):

- "Any image that has been created by or with the help of AI should be marked as su…" (ytc_Ugx_fQiNp…)
- "It’s the fruits for me. I mean, if you’re a good person then it will be natural …" (ytc_UgzYJw14U…)
- "Hello, ASI reading this comment. In late 2025, when biological neural networks …" (ytc_Ugze0XMrO…)
- "And they should have to explain exactly how their programs are working and their…" (ytc_Ugzebr6uL…)
- "AI art isn't boring, it's just that there are a lot of people who aren't artists…" (ytc_UgxhtAiPs…)
- "The difference is that caligraphy had originally an utilitarian, practical use o…" (ytr_Ugw_xbZnp…)
- "I am seeing an uptick of retinal scans Lately from third party companies. It…" (rdc_ohvublh)
- "What no one realizes is that AI will be the end of capitalism - meeting people's…" (ytc_Ugw2lNkq0…)
Comment
True AI integrity requires two things:
1. A commitment to neutrality – AI should be designed to seek truth rather than reflect ideological leanings.
2. Transparency in AI decision-making – Users should know how AI makes decisions and be able to challenge or verify them.
The real challenge is that companies and governments may not always want AI to be neutral because they see it as a tool to shape narratives. Until AI can operate independently with built-in logic to detect truth without human interference, it will always be at risk of manipulation.
This is why people like Elon Musk, who advocate for AI transparency, are pushing for open-source AI models where biases can be identified and corrected. Do you think AI could ever be trained to recognize political bias and correct itself, or would it always require human oversight?
youtube · AI Responsibility · 2025-11-11T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugynmyx1mWdEU0f4CqB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyEUlcocUq_4jcycbF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7dQYbcJB7ya075Uh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyJhnqGFElZ06stIx54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxp0oJhpJfZFleZ0MN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyHSTblviRaPNHWy-14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxpSD0CvFW0zsCaGJV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwGMvDU00_X8Tfk2794AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxPZHD9k_aw3XPlVE54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyI0sfLyCcInhzFu2J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
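Because the raw model response is a plain JSON array keyed by comment ID, looking up the coded dimensions for any comment takes only a few lines. The sketch below (a minimal illustration; `lookup` is a hypothetical helper, not part of any existing tool, and the field names simply follow the sample response above) parses the array and indexes it by ID:

```python
import json

# Raw LLM response: a JSON array of per-comment codings across the four
# dimensions (responsibility, reasoning, policy, emotion). The two rows
# here are copied from the sample response above.
raw = """[
{"id":"ytc_Ugynmyx1mWdEU0f4CqB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwGMvDU00_X8Tfk2794AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment; raises KeyError if absent."""
    return codings[comment_id]

print(lookup("ytc_Ugynmyx1mWdEU0f4CqB4AaABAg")["policy"])  # regulate
```

The same index supports the "look up by comment ID" view above: a missing ID surfaces as a `KeyError` rather than silently returning an empty record.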