Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Ai created by us only it's all rumour AI don't know how to answer your video we … (ytc_UgxDctNqs…)
- Eric has missed out a couple of vital factors. As AI improves over time, it will… (ytc_Ugy7wAvht…)
- Run the AI the way Spotify works. If a certain work is referenced, the AI compa… (ytc_UgwqVXhT_…)
- I dont agree at all. If (more like when) AI fails, everyone will eventually get … (ytc_UgwzCtbGD…)
- You can almost see the capacity of the AI being manipulated. Even ChatGPT is tak… (ytc_UgwHXEhK9…)
- I am not worried about the science fiction stuff much. My concern is more on a … (ytc_Ugwquu0oi…)
- The amount of people blaming the kid or the parents is insane. There should abso… (ytc_UgzM0u1cl…)
- It can already mimic human voice and create deepfake videos so it can pretend to… (ytc_UgyKBXVY_…)
Comment
A note on AI being something we don't really understand: I am a researcher on cybersecurity in AI and on a new approach to AI explainability called eXplainable AI (XAI, yes really). There are actually a lot of regulations focused on AI being human-understandable before deployment (especially when algorithms are used in critical infrastructure or disaster response environments). Being able to see how an input could influence an AI's output is very important in most situations, but no one can really take advantage of it because there is so little education on how an AI works. Granted, software development isn't taught in schools, but if people are going to be interacting with hidden algorithms daily, they need to be educated on this stuff; otherwise, AI will continue to hold a "black box" status.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-01T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwqDZPwS0sJhzustSl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwfVIgjc9RUVbtK2Yx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBkZ0RB2dzvKO0Wc54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyRx8kIRspv6bsRE4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDgzcIUZXZuAzgHSR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYdePcFg5OXhfaaV14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz8uz5IUjT5JCw33wF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3sz23nrUfIxdlCFJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnojViKzl0G8CMj794AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyH4hSorqWq8zxU7AN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
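The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how a downstream script might parse and sanity-check such a response before storing it — the `tally` helper is hypothetical, and the allowed value sets are inferred only from this sample output, not from an official codebook:

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from the sample response above.
# This is an assumption: a real codebook may define additional values,
# in which case these sets would need to be extended.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "mixed", "none"},
    "policy": {"regulate", "none"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "resignation"},
}

def tally(raw: str) -> dict:
    """Parse a raw coding response and count values per dimension.

    Raises ValueError on any value outside the inferred allowed sets,
    so unexpected model output is caught rather than silently stored.
    """
    rows = json.loads(raw)
    counts = {dim: Counter() for dim in ALLOWED}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={value!r}")
            counts[dim][value] += 1
    return counts
```

Validating against a closed value set at ingest time is one way to keep free-form LLM output from corrupting the coded dataset; rejected rows can then be re-queued for recoding.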