Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Don't forget the 20 people that Tesla picked to try their robotaxi's are Tesla i…" (ytc_UgzoM4zdv…)
- "I am a disabled artist and only thing i want ai to do is to tell me what color m…" (ytc_UgwSDYA6W…)
- "What's dangerous about AI is its social application and the lies about it. To ca…" (ytc_Ugxlhz04N…)
- "AI cannot be selfaware since it's just a bunch of code and it cannot feel pain. …" (ytc_UgznL1CZ1…)
- "> Is everyone in tech ghoul? I feel you could drop "in tech" from that. Ther…" (rdc_n67ggv6)
- "okay I have watched the full interview twice now and honestly, I just don't see …" (ytc_Ugy_UfQv7…)
- "What a stupid idea to put AI into focus as a replacement for human input in the …" (rdc_i2vf2la)
- "A.I. can either mean "Assistive Intelligence" in terms of helping offload our me…" (ytc_UgwqeEs_E…)
Comment
I don't know. I personally am incredibly disappointed by his characterization of AI, especially when he compared automatic braking to AI. By that standard, anything at all that has a sensor and a microprocessor following basic if/then programming is AI, which is plainly and completely incorrect. Although the definition of artificial intelligence is rather nebulous, nobody goes around calling their home security system AI just because it can alert the police when it senses a door opening. Also, the fact that he did not differentiate between large language models and the other examples he gave, such as chess- and Jeopardy-specific computing models, is just baffling to me. LLMs, with which people are trying, with varying degrees of success, to accomplish actual agency instead of fairly straightforward task-type actions, are specifically designed and desired to replace the vast majority of white-collar jobs, and a significant number of blue-collar jobs (once the robot side of things is worked out). A chess-playing robot, however far it may be beyond human ability, isn't going to take anyone's job, not even a chess grandmaster's.

Please do not take this criticism to mean I am not a fan of him and his work, but I really feel that he dropped the ball on this one, most especially because he is here specifically in his role as a science communicator. Whether or not you think AI is going to destroy the world, there are certainly intrinsic dangers to AI, and especially AGI. If they were to solve the hallucination problem but not the guardrails problem, then AI could easily enable pretty much anyone to have step-by-step instructions for planning and enacting any number of illegal acts, or just plain inconvenient ones. Consider the hacking potential alone. AI can make it exponentially easier to scam, hack, and decrypt files, thus putting all of us in even more danger of identity theft and other related issues. AGI is potentially an entirely different threat.

No one is going to invest this much money into it and then decide not to utilize it, regardless of the potential for harm. By definition, AGI would have its own decision-making power, and there is no real way we could know what it based its decisions on. It could manipulate or crash markets, misallocate funds, and do so many other things that could harm many people without being an apocalypse. His own admitted naiveté about both the direction AI will take and the expectation that all of these displaced people will just find jobs in other, new sectors surprises me. It is well established that very few displaced workers actually get a new job equivalent in pay to their previous one. If AI fulfills the expectations of these two people, it still won't solve the basic problem of humanity, which is stupidity. It will only increase it, since it will do our thinking for us.
Source: YouTube · Video: AI Moral Status · Posted: 2025-09-06T18:5… · ♥ 75
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgxDtenlY0UJD2vYLit4AaABAg.AQ8-c3ZNqlmAUoQz9PnHVj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxodIhbehwqCgKNE4Z4AaABAg.ARJBsOEynlvARMmkmFRH9w","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugxz0Uj7CK4Vtqf3rih4AaABAg.ANyBBxiuDonANyBjRGB1Qu","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwOuLFZe6b3hVxIImt4AaABAg.ANoMQFqXdikARa14ryjayl","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwOuLFZe6b3hVxIImt4AaABAg.ANoMQFqXdikAUTqQ8GotqF","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwckWZZ_x6OBm9XGp94AaABAg.AN5hu7PdfPYANuT2aHgPRT","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwckWZZ_x6OBm9XGp94AaABAg.AN5hu7PdfPYAOBD8tOjZLs","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwckWZZ_x6OBm9XGp94AaABAg.AN5hu7PdfPYAODkioMjpQu","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugz8YLPihyTX9ZQ_o1Z4AaABAg.AMdWsTP_rSoAMjtiGYLx2v","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytr_Ugz8YLPihyTX9ZQ_o1Z4AaABAg.AMdWsTP_rSoAMmC_PQcR5G","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
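The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such output might be validated before ingestion follows; the allowed value sets are assumptions inferred from the values visible on this page, not an authoritative codebook:

```python
import json

# Assumed allowed values per dimension, inferred from this page (not an official schema).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"approval", "disapproval", "indifference", "outrage", "fear",
                "mixed", "resignation", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with a missing id or an unknown dimension value."""
    rows = json.loads(raw)
    for i, row in enumerate(rows):
        if "id" not in row:
            raise ValueError(f"row {i}: missing 'id'")
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"row {i} ({row['id']}): bad {dim!r} value {value!r}")
    return rows

# Hypothetical one-row response for illustration.
raw = '[{"id":"ytr_example","responsibility":"developer",' \
      '"reasoning":"deontological","policy":"none","emotion":"disapproval"}]'
rows = validate_codes(raw)
print(rows[0]["responsibility"])  # developer
```

Validating up front keeps a single malformed or hallucinated label from silently entering the coded dataset; a stricter pipeline might also check that every `id` in the response matches an `id` that was actually sent to the model.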