Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "In a world where youtubers are building AI turrets to point lasers in their eyes…" (ytc_Ugxheg-m3…)
- "As an artist I want to draw stuff and have it animated by ai and retouch the wei…" (ytc_UgwXkgjF-…)
- "CONSIDER THIS: Lately, the USAF had a drone where the AI was the driver. The AI …" (ytc_UgxFuVBZ5…)
- "What is even going to happen if we get to a zero-growth environment because no o…" (ytc_UgwL_5T03…)
- "Understanding how LLM’s work isn’t very useful if you don’t understand how senti…" (ytc_UgzSeJ3VQ…)
- "Depending on what model you use, it doesn't train itself or its larger model on …" (ytr_Ugynectnf…)
- "i love ai. 🤖 i will always also be an infographics fan, it will not be disrupte…" (ytc_UgwlAL5sJ…)
- "When machines are complexed widespread enough to cover all models of physical la…" (ytc_UgxR-wXeD…)
Comment
AI's don't have drives. He knows this.... he spends too much time reading predicted text and has deluded himself. It still is fancy autocomplete. Yes, there is other stuff bolted on top. Why is the AI saying how a patient reacted to epinephrine? That's nonsense without being able to observe the patient. Yes, it can guess, but so can I and so can a doctor. That's what autocomplete is, it's fundamentally detached from the physical reality of the situation. Solving math Olympiad problems is almost certainly training data contamination. It has been demonstrated that it takes very few examples to poison AI training for whatever output you desire in a narrow area. If AI were that great at math then it would be tremendously useful to me, and yet it isn't. It sucks hairy monkey balls at basically all math related problems I ask of it. Why? Because the problems I ask are novel and apparently the Olympiad problems were not this time around, they entered the training data somewhere to be regurgitated.
youtube · AI Moral Status · 2025-11-02T14:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwoZ_ObFGWO8kS0MN94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxm8ymEkFJfTdvizG14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwXu4ZoKd5ie0rGLkp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxi90pefiwO-3ZJ75N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMvq0VERxFUxZ9n5x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzraAd2k9OgS67G7Ct4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwhprLqk9khERGYPCx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwccqDfXKUFdMg788V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNNNjj3Wgf80ULMJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYqiRM8kumsn5QPgJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]