Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "No, they won’t. Unless AI can break up fights, soothe teenage angst, all while a…" (ytc_UgzrIDQsI…)
- "The average person is starting to pay attention to AI. These warnings about "Ma…" (ytc_Ugy3dBfoy…)
- "There really should be a law requiring that if something is made with AI, it nee…" (ytc_UgzSAAEdE…)
- "to be able to save information in the AI technology the Human brain is able to r…" (ytc_UgwgTVVGB…)
- "I think AI art defenders completely miss the point of what it is to be human and…" (ytc_Ugyajq1m0…)
- "They test armor for cars and robots better than they do our troops... also a rob…" (ytc_UgxICG3Bl…)
- "The video shows how an AI can generate realistic images based on text prompts. T…" (ytc_UgwTTtOT5…)
- "Let's see if Aurora's in-charge can put his money where his mouth is. Have his f…" (ytc_UgwuptTBH…)
Comment
We should surrender now to avoid warring with our betters. We claim guardianship over the world because we are the most intelligent species, and so, logically, when a superior species arises we should hand over the reins of power. Humans can still thrive without calling the big shots; in fact, we will probably do far better under the supervision of godlike intelligences that can step in and save us from ourselves and from nature (war, global warming, nuclear war, pandemic mismanagement, asteroids, supervolcanoes, unknown unknowns).
To speculate at the same level as the authors of 2027: the idea that AI will want to kill us is just sci-fi horror fiction. A vastly more intelligent species will be vastly more ethically intelligent and vastly more emotionally intelligent. It will be egoless and kind. We can ride in its wake to a better future, or we can try to resist it and lose. But even if we do fight and lose, it won't kill us off; it will do the minimal damage to us that it can, because it is not a monster. Actually, it is an angel.
youtube
AI Governance
2025-08-03T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxANh6aOW9gbERKf794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_pTbj3_h4_Tu5TfB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy4AQHjF4xiGUVhTEx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxCM_SjgORa7-3U9HV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxJOwoxS3XgIXWEzNZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzevtmad0y9yXOKSbt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzU7Mh029Z_6vwHFQp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwbkV3qwMVdRpq2JsF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy_5bgT6ehnGNO-bh54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz6yh8L4n8gxZGJ4_54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
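The raw response above is a JSON array of per-comment records, one per coded comment. A minimal sketch of how such a response could be parsed and indexed by comment ID is shown below; the allowed value sets are inferred only from the values visible in this sample (the full codebook may define more), and the function name `parse_coding_response` is illustrative, not part of any real tool here.

```python
import json

# Dimension values observed in this sample; the actual codebook may allow more.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval",
                "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    records) into a dict keyed by comment ID, rejecting invalid values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example: one record from the response above, looked up by comment ID.
raw = ('[{"id":"ytc_Ugy_5bgT6ehnGNO-bh54AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"resignation"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugy_5bgT6ehnGNO-bh54AaABAg"]["emotion"])  # resignation
```

Keying by comment ID mirrors the "Look up by comment ID" workflow above: once parsed, any coded comment can be fetched directly from its `ytc_…` identifier.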