Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Yeah, agi can eat, reproduce, travel and push the button to launch nuclear bombs…" (ytr_Ugxmws7P_…)
- "Not against AI, but I think there is still a lot of misunderstanding about the c…" (ytc_Ugz7Ic8bN…)
- "Okay AI robot try to take care of a person that wants to be looked after by a hu…" (ytc_UgwXAsBvx…)
- "I cant watch this whole video, gives me chills and i don't want anything to do w…" (ytc_UgxpcImLP…)
- "Yep. OpenAI/Microsoft put out ChatGPT and immediately started talking about how …" (rdc_k0b6eqw)
- "While I think I get what you're trying to get at with this video I have two disa…" (ytc_UgzEYjtoo…)
- "Considering every advancement Ai art has made and will continue to do so, should…" (ytc_UgztSxIRA…)
- "AI has a lot of potential, but mimicking humans in superficial ways is not worth…" (ytc_UgzYCam7Y…)
Comment

> LLMs do not think. It's not an issue of scaling; thought of any kind is beyond these models. Anyone who claims otherwise is either a crackpot or a grifter.

youtube · AI Moral Status · 2026-03-17T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyqxt5IJdXB9eelCf94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy5yPGi8gAxbasB_-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz2ErNQXQH8qkVLRQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwlWgW6XRvwqNIr4Ud4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgwiEmsfSvp8hElW1Vl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRLP7ffkwSQwPfo7d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCa9XeQ6kg8kKmZcF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwRBfcWInC8fby0QUx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwzMmUTWxajU8mFlpN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzv7FBttz_OetAoEgB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
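A raw response like the one above has to be parsed and checked before its rows are stored as codings. The sketch below shows one minimal way to do that in Python: parse the JSON array and keep only rows whose values fall inside the per-dimension vocabularies. The vocabularies here are inferred from the values visible in this page's output; the real codebook may allow more values, and `validate_codings` is a hypothetical helper, not part of any shown pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the output shown above.
# ASSUMPTION: the actual codebook may define additional values not seen here.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must be an object with an "id" and a legal value
        # for every coding dimension.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example: the second row uses an out-of-vocabulary value
# ("alien") for responsibility, so it is dropped.
raw = (
    '[{"id":"ytc_X","responsibility":"developer","reasoning":"deontological",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"ytc_Y","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print([r["id"] for r in validate_codings(raw)])  # → ['ytc_X']
```

Filtering rather than raising keeps one malformed row from discarding an entire batch; rejected rows can be logged and re-queued for recoding.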