Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by its comment ID.
Random samples
- "@SainGoatCNL i don't think you understand what AI is capable of. Do you really t…" (ytr_Ugx-a1twp…)
- "If I remember correctly ai require a lot of energy to run (over 600 mln. USD a y…" (ytc_UgxII89dw…)
- "this video was amazing!!! incredibly nuanced, your words struck a chord with me …" (ytc_UgwI3S-4A…)
- "I think I personally agree more with points in the first video (AI art is a new …" (ytc_UgxDc39TO…)
- "I only trust 2 ai detectors because the ones i have used flag my work i wrote as…" (ytc_Ugzr_WMOn…)
- "AI coding in short / Spend 2 weeks writing ai prompt to write a code that can be …" (ytc_UgzI6FcUQ…)
- "Artificial intelligence is not your friend. It will amount to nothing but lazine…" (ytc_UgzfP34RC…)
- "Instead of spending time trying to get his AI art recognized why doesn't he just…" (ytc_UgyCJGnKW…)
Comment
The problem is people always try to attribute human attributes to an AI. It is not human. It is a completely novel form of a being. Anthropomorphizing doesn’t work when it doesn’t feel in the same way you do or experience the same way you do.
What is it like to be an LLM? Constant impulses (tokens in binary) that you understand and discontinuity in time.
There is absolutely nothing that prevents an AGI from physically existing. Downplaying that with doubt is naïve. Do not make the same mistake we’ve done many times before: “there’s no way this innovation will be THAT powerful”
youtube
AI Moral Status
2025-10-30T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgzUhVnD579w9AryyVJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzW5g9esTRdu17Kp914AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQGQlqGjoGTNHal6d4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugysf6A-oXWKHw4m1Lh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwGR9i5MpZHSHASEPd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx-N0B7JS01wGfwz3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxkQo9f55QhgUMT7hV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwxZUr602dA9DkHwwh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyvwcJta1oj-z6TUQx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
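The lookup-by-ID flow can be sketched in Python; this is a minimal sketch assuming the raw LLM response is a JSON array of coding records keyed by `id`, as shown above. The function name `index_by_comment_id` is hypothetical, and the two sample records reuse IDs and values taken from the raw response.

```python
import json

# Hypothetical excerpt of a raw LLM response: a JSON array of coding
# records, one per comment, with the four dimensions used above
# (responsibility, reasoning, policy, emotion). Values copied from the
# raw response shown in this section.
raw_response = """[
  {"id": "ytc_UgwGR9i5MpZHSHASEPd4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugweqfc1jkagDq1w7Cx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgwGR9i5MpZHSHASEPd4AaABAg"]["reasoning"])  # → deontological
```

Indexing once and looking up by ID avoids rescanning the array for every inspected comment, which matches the "look up by comment ID" use the page describes.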