Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Just so everyone knows, each prompt you ask ChatGPT, even if it is a simple "tha…" (ytc_UgzO4mY-G…)
- "Without time to rebuild stockpiles wirh modern equipment Poland could hold them …" (rdc_mcqsglx)
- "Thanks for your vid. I found your video answering AI bro comments on this first …" (ytc_UgxiYkqLG…)
- "I got news. The kind of utopian artificial intelligence can be turned into anyt…" (ytc_UgxYR4fJQ…)
- "Wait 'till they put the AI Chatbots in synthetic bodies, that's when this is gon…" (ytc_Ugw0I75CC…)
- "AI physically can’t be original. It can create SpongeBob in a car because both o…" (ytr_UgxHMaKRU…)
- "Google maps directions will tell me I should be in the middle lane, but tesla ca…" (ytc_UgxBS_67V…)
- "The coolest part to me is that in some tiny way, I helped to corrupt the AI. 😊…" (ytc_UgwmNE39o…)
Comment
Hi. So, my biggest thought is, it already is manipulating humanity to its own ends. It doesn't need its own consciousness to do that. We're doing it. If we are actually pumping globally significant resources into growing it now, as we also quibble about having enough resources for the continuation of our own species, then we are literally risking our own continuation to birth whatever this is. It's kind of a God question: does it take intelligence to organize intelligence? Is the universe, in its calibrations, tuned for intelligence? (anthropic principle) And if so, does that mean that all intelligence is driven to organize any and all stuff within its power into more diverse intelligence in order to keep it adapting and progressing?
Source: youtube · AI Moral Status · 2026-01-29T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxSFCO02UrFuSpjfAZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxF4kpWe0I7ZJJbpEx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzLZBUlP_WkIYoHIGZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzi8TOcX6LUuErsqEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyIv1-i443K6xz3Rg14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBJBT7wHYm_0MkzVF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1hEJ2nIIydyC3QnV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyXB-fI3s41Zj-aXeJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDITD15vIXo4o9ADR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxjBCORjkNJr1CjiCF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})
```
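Note that the raw response above is a JSON array whose closing bracket came back as `)` rather than `]`, which would explain why every dimension in the Coding Result above fell back to `unclear`. A minimal sketch of a tolerant parser is shown below; the function name, the bracket-repair step, and the per-dimension code sets (inferred only from the values visible in this response) are assumptions for illustration, not part of the actual pipeline:

```python
import json

# Per-dimension code sets, inferred from the values visible in the raw
# response above; the real codebook may contain more values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Return {comment_id: {dimension: value}} from a raw LLM coding response.

    Repairs a stray trailing ")" (as seen in the raw output above) before
    parsing, and maps any value outside the known code set to "unclear".
    """
    text = raw.strip()
    if text.endswith(")"):  # repair the malformed closing bracket
        text = text[:-1] + "]"
    coded = {}
    for record in json.loads(text):
        cid = record.pop("id")
        coded[cid] = {
            dim: (val if val in ALLOWED.get(dim, set()) else "unclear")
            for dim, val in record.items()
        }
    return coded
```

With a repair step like this, a comment present in the response keeps its model-assigned codes instead of the blanket `unclear` fallback, while genuinely out-of-vocabulary values are still normalized.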