Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "A super intelligence would need to have a completely automated supply chain beca…" (ytc_Ugx0BtiYI…)
- "lol the industrial age made muscles not worth much, So AI must make brains not w…" (ytc_UgyGEJ-MW…)
- "ChatGPT isn't capable of evil. It doesn't think. it doesn't understand. It remix…" (ytc_UgxLoA5eV…)
- "I really dont care what people do. Make ai or dont. As long as im living under a…" (ytc_UgwxTpUfE…)
- "YOU SAID IT. He's in the business of selling cars not saving lives. Home many pe…" (ytc_Ugz87KPr8…)
- "@AnExplorer219 you're too literal. it just means that we are the initiator and A…" (ytr_Ugzr1mlka…)
- "This will be solved by AGI running at the edge. Meaning, the LLM model will be r…" (ytc_UgzRlOtcy…)
- "Please don't pull the plug on AI until we can't tell if video game NPCs are real…" (ytc_UgySYxFv5…)
Comment

> its saddens me that when picking who to talk to to know more about AI, you chose to go to the fringest edges of what very few people even consider "AI research", that deals with far fetched theoreticals. what is happening to you in regards to AI is the classical example of starting out with a slightly biased view and through curiosity choosing increasingly biased sources in an attempt to understand more, but in fact straying further and further away from what is true and provable. you should interview someone who actually knows how AI works and has literally participated in building it (not just complaining about it). like Andrej Karpathy, for example.

Platform: youtube | Video: AI Moral Status | Posted: 2025-11-01T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzV42tk9RzMUCIlPSx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0-8IOORn442PHOTR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZHWYCwaaxG5KJRBV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyN1MxzeDyN_bc8yid4AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwmO9GUr2pYKn9PQmJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVmacntCEhwlW7MMh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxMX8rJxl-gD74Tw7N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6jyWTPCZbNoj29EV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOeA4j9MJvJ_mDLv94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxu84KEN_5gy_ufcqV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
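Because each raw response is a JSON array with one object per coded comment, looking a coding up by comment ID reduces to parsing the array and indexing it. Here is a minimal sketch of that lookup with validation; the allowed category values are inferred only from the responses shown on this page, and the real codebook may define more:

```python
import json

# Allowed values per coding dimension -- inferred from the sample
# responses above; the full codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "government", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def index_codings(raw: str) -> dict:
    """Parse one raw LLM response and index valid codings by comment ID."""
    coded = {}
    for row in json.loads(raw):
        # Reject any row whose value falls outside the known vocabulary.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return coded
```

Indexing the array first makes repeated ID lookups O(1), which matters when cross-referencing many coded comments against one batch response.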