Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgznwlIyt…: "This won't happen for a few reasons, but mainly, people will shift towards a mod…"
- rdc_m94n13k: "maybe for Open AI, Meta and Google are pricey (particularly Meta) but not an ins…"
- rdc_j6dvudc: "The interns are going to be needed to run the AI. Pretending that management ca…"
- ytc_UgxLpLrRs…: "Thesis: AI/robotics—driven by billionaire investment—will massively enrich a few…"
- ytc_UgwWfq47L…: "Terminator. Let AI take over. Humans go to war against it snd win based on emoti…"
- ytc_UgwbnYwbi…: "Hopefully when AI gets smarter it doesnt remember the times the art community bu…"
- ytc_Ugxx83m-c…: "John Milton was completely blind by the time he wrote Paradise Lost, one of the …"
- ytc_Ugwgblf7n…: "So i am very late to this party but i just realised the "surreal flower head wom…"
Comment
Because the efficiency coefficient is always less than 1.0 (in other words, the cause is always greater than the effect), a programmer cannot create software that completely replaces him. Such software will always have lower complexity and functionality than its creator, and so-called AI systems will always be less efficient, in terms of organization and energy consumption, than their creators.
Any idea of a computer superintelligence created by humans does not correspond to the reality in which we live. Such an idea can only be another scare tactic by the globalists.
youtube · Cross-Cultural · 2025-12-19T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzNanwxKvr6SC5jjpF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdDgxfpblCdsBuixB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzxIP0B3IikW-gRrN54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxiQR75jYl3whVFNAV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRlzu5yCdkE3suKpt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzchm61pYTnKpprcvN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJdX69KOMGCTB3dix4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6YgprDhEu47ZRz3t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyjBJ8s5A3utn1vxzV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziLpUv-e2DNwqo93h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
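A batch response like the one above has to be parsed back into per-comment codes before it can populate a Coding Result table. The sketch below shows one minimal way to do that in Python; the allowed dimension values are an assumption inferred only from the codes visible on this page (the real codebook may define more categories), and `parse_batch` is a hypothetical helper name, not part of any existing tool.

```python
import json

# Allowed values per dimension, inferred from the codes that appear on this
# page (assumption: the actual codebook may include additional categories).
SCHEMA = {
    "responsibility": {"government", "company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "resignation", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes},
    silently dropping records that fail schema validation."""
    coded = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in SCHEMA}
        if all(codes[dim] in allowed for dim, allowed in SCHEMA.items()):
            coded[record["id"]] = codes
    return coded

# One record from the raw response shown above.
raw = ('[{"id":"ytc_UgxJdX69KOMGCTB3dix4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(parse_batch(raw)["ytc_UgxJdX69KOMGCTB3dix4AaABAg"]["responsibility"])  # -> developer
```

Validating against a fixed value set catches the common failure mode where the model invents an off-schema label; dropping such records (rather than coercing them) keeps the coded table trustworthy at the cost of a re-coding pass for the dropped IDs.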