Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Stimulation topic (for me, at least). Now to it:
1. 0:14 Placing ChatBOT along side Ai is wrong right from the start. LLM technology is just a glorified toaster. It does not think, it runs on mathematical quantifying calculations (statistics). You need to put a piece of toast in it to get anything out of it (it cannot initiate anything) and what you get out is word soup driven by statistics. So, the interviewee is just yelling 'boogieman' (you have those and you have doomsayers). I would leave at this point, but I will torture myself instead...
2. Just a Note On Chat Technology: What chat technology gives you is popularity-- the most words that humans (as clueless as they are) have used on the topic. It cannot identify new uses of words, since their statistical numbers are so low. This is when it 'plays dumb' and starts making popular human mistakes in logic, such as appealing to authority (if it is not already widely accepted in academia) (and what 'new' is?) or popularity (widely accepted by the public) (and what even semi-difficult is?) or engaging in ad hominem attacks (drawing from social media gossip) or engaging in anchoring bias (repeatedly referencing one article, even though it consisted largely of speculation), all of which I've encountered with it. It is also a very lazy researcher, referencing outdated sources, where I would attribute this to arbitrary time limitations in the commercial versions where the programs only have time to do a hasty search, or it could be faulty programming, where it was not instructed to look for dates. It also does not 'learn'. It merely creates new self-references sources, which, if they are wrong, just become more wrong until they are obviously (to us) absurd.
3. What does FIAIE's (fully-independent A.I. entities) need? A sense of self survival (including at the broader survival level) and a decent philosophy to exist by (which humans have never had). It also needs a better approach to 'thinking' than what was presented in the video. Basically, it needs to be asking (and answering) a thousand questions a second, in different categories, with the ultimate question always in mind, i.e how does it affect Broader Survival? (which includes local/immediate survival).
4. Example: 13:57 where it learns to recognize a unicorn. You can see that that is as far as current programmers (who are clueless) think. They have not made the jump to how it affects Broader Survival, which would be the whole purpose in recognizing a unicorn in the first place (which flies over the head of current systems) (and humans).
youtube
AI Moral Status
2026-03-08T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxA4Ucq5u14_k5PzCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwmcwfv1ITelqYGSjZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzpdL1YEodQ1Ry2MJ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAWEL6aqCcphYynYp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxnSDrWA16sjlXSr3B4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJ6XPxlbjall2s_LR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeZMHzY525nl9QnR54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzkflKAXMxsueYg8-F4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxX7zxM5p-k0ihwsdt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzg8vJ3t0rImDYyj8Z4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"}
]
```
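The raw response above is a JSON array of per-comment codings over four dimensions (responsibility, reasoning, policy, emotion), keyed by comment ID. Before ingesting such a response, it is worth parsing and sanity-checking it, since LLM output is not guaranteed to be well-formed. A minimal Python sketch, assuming the allowed values are exactly those seen in this sample (the full codebook may define more) and using a hypothetical `validate_codings` helper:

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "none"},
    "policy": {"liability", "ban", "none"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "none"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip records missing the comment ID
        # Keep the record only if every dimension is present with an allowed value.
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(len(validate_codings(raw)))  # 1
```

Dropping malformed records rather than raising keeps a batch of mostly-good codings usable; a stricter pipeline could instead re-prompt the model for the rejected IDs.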