Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- This is how the AI apocalypse comes to be they'll become too annoyed by philosop… (ytc_UgzhYzWM6…)
- Sam's perspective is valid, but AICarma helps me take specific actions based on … (ytc_UgwSWw8eV…)
- Thank you for your explanation on using AI tools to enhance a song writers abili… (ytc_UgyD9OgFc…)
- I personally look forward to inevitably being a slave to an AI instead of being … (ytc_UgwkcWy-l…)
- You're way too nice to this guy. He breathes Copium instead of air. Dismissing c… (ytc_Ugweg29a4…)
- So basicly these artist are using AI generated images as concept... They are ope… (ytc_UgxsssMRB…)
- I always thought AGI would be air-gapped. Apparently we just keep it on the inte… (ytr_UgzUzSZ3C…)
- Heh, here's an idea. Lets the connect all the autonomous AI robots to the intern… (ytc_UgyBorayf…)
Comment
Consciousness and perception are subjective, meaning they are perceived from the point of view of the human body (the person). When something is considered good or bad it is because it is good or bad for the body (as an individual or a species). All human value judgments that make up consciousness are two-fold: they stem from the human body as the subject, or from the human species a group, which is a grouping of subjective minds. Morals and customs for example are local to the human species geographically as an agreed upon behavior within that group. The bottom line is human consciousness requires the subject to perceive an object. The object is perceived from the subjective mind developed in conjunction with the experiences of the human body, and you cannot get around this. Now tell me, just how in the hell is an neural network (AI) supposed to develop a human type consciousness without a human body? It's NOT possible!
youtube · AI Governance · 2023-04-18T06:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzz6HA3Ik7ZCQ7kS954AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3EO0AaZChQzuHXrl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxoqQvSLt5WPJVHhEh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMAUAKOGqPjkqchWV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzGuDh9MFsZLzPxuBp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxM6wmfMxsoeyqDXbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxukUnfj2ViZTeu-jF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw45WSK4ZqpOIiCtmR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz0lHGmKAcZo5_a0G14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwe3noAkoTYUGhgGa54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
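The look-up-by-comment-ID step above can be sketched in a few lines: parse the model's raw JSON array and select the entry whose `id` matches. This is a minimal illustration, not the tool's actual implementation; the field names come from the response shown above, while the helper name and the two-entry sample string are hypothetical.

```python
import json

# Two entries copied from the raw LLM response above (format: a JSON array
# of per-comment codings with id, responsibility, reasoning, policy, emotion).
raw_response = """
[
 {"id":"ytc_UgzGuDh9MFsZLzPxuBp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_Ugw45WSK4ZqpOIiCtmR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one comment ID,
    or None if the ID is absent."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgzGuDh9MFsZLzPxuBp4AaABAg")
print(coding["reasoning"])  # "mixed", matching the Coding Result table above
```

The first ID here is the one whose coding (`unclear` / `mixed` / `unclear` / `unclear`) appears in the Coding Result table above, so the two views of the data can be cross-checked.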