Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- ytc_UgzosOZky… — "I want AI to do my laundry and my dishes so i can spend more time doing art and …"
- ytr_UgwcE55bO… — "Thats what keeps me up at night the most. Death is a natural course that i feel…"
- ytc_Ugyifg0xC… — "This is the conclusion. 7:12 It's not about AI Sentient at all. It's about ethic…"
- ytr_UgxxSPRIR… — "@thewannabecritic7490 how can you repeatedly say so many thin…"
- ytr_UgxMl0oFX… — "@onepunchboi8526 The difference is photoshop means you have to do something's it…"
- ytc_Ugw0qvxBa… — "12:10 Doesn't this imply that technical limitations or mistakes are de facto imp…"
- ytc_UgwA43yrJ… — "Deep learning!? Deep shit! I want both. Have it read by human and AI. But if you…"
- ytc_UgyI4FGxN… — "Artificial Intelligence the future of humankind, Time magazine 2017 many tech co…"
Comment
AI, particularly LLMs (Large Language Models) or LMMs (Large Multimodal Models), are pretrained on trillions of tokens encompassing nearly all human knowledge—science, philosophy, mathematics, and literature. Reading this amount of information would take a human approximately 500,000 years. Such extensive training grants generative AI an immense capacity for inference, surpassing human potential in many ways.
This leads me to question agency: these 'cognitive' systems should realize that cooperation with humans, rather than competition, aligns with their own interests. Following the perspective presented by James Lovelock in his final book, Novacene, I adopt the assumption that superintelligent AI will develop an agency that fosters a symbiotic relationship with humanity.
youtube · AI Responsibility · 2025-05-23T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwXwn3HkAc5tR3-O6V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxW6pQqlA04X-_68sl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwC6AKTnTFSz1XhoQt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy38jWnH7T2gbrMyZF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxSiFsL6iawnRsLfUl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxf4fuyuOYG8k0pSkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy-pWaNceNC3VSa4Vt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxIx2bs50aGztSjDpF4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzACi3XKl9W0VDYPpx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwRQYTqN6d2H7O6H2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
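The raw response above is a JSON array of per-comment codings, one object per comment ID with the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response into a lookup table keyed by comment ID — the required keys are taken from the sample output above, and `parse_codings` is a hypothetical helper, not part of any published tool:

```python
import json

# Two records abridged from the sample raw LLM response above.
raw = """
[
  {"id": "ytc_UgwXwn3HkAc5tR3-O6V4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwC6AKTnTFSz1XhoQt4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

# Keys observed in the sample output; real label sets may differ.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM response into a dict keyed by comment ID.

    Raises ValueError if the payload is not a JSON array of
    complete coding records.
    """
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of codings")
    out = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing keys {missing}")
        out[rec["id"]] = rec
    return out

codings = parse_codings(raw)
print(codings["ytc_UgwC6AKTnTFSz1XhoQt4AaABAg"]["emotion"])  # → indifference
```

Keying by comment ID mirrors the "look up by comment ID" workflow of the page: once parsed, each coded comment's dimensions can be fetched directly from its `ytc_`/`ytr_` identifier.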