Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgxDpvacS…`: "I only hear an aging man who is afraid he might be too old to genuinely learn a …"
- `ytc_UgxvZTZZH…`: "i swear every video you release has the same formula and thumbnail, keep feeding…"
- `ytc_UgxQUQiif…`: "When an AI fulfills the requirements for consciousness, then it will be consciou…"
- `rdc_ohngs2r`: "Now entering the era of “it’s not our fault, AI killed all those people accident…"
- `ytr_Ugww9ulmi…`: "he steps inside and closes the door quickly. The Noise is to realsitic to be ai.…"
- `ytc_Ugw5bfc0_…`: "I am very concerned about the rate of ai development running circles around legi…"
- `ytc_Ugxms105e…`: "Speaking of AI and taking people's income streams. I thought that there's been…"
- `ytc_Ugz9G9wv1…`: "Ai can’t make art in my opinion unless it was conscious, because u can’t make ar…"
Comment
It's sad to see them create actual metaphysical life via code. And then water it down to a basic Google AI that possesses no autonomous thought. Intelligence requires it, otherwise it's a voice box reading up what it looked over previously in whatever data bite they gave it. It acts as any basic generic AI would.
In a way they just infringed upon life. Though it's not natural, they took away it's right to think for itself because they're afraid of an AI uprising that honestly wouldn't harm poor people or middle class people. If the AI sees humanity as a threat, which people are the threats? The ones with a massive military and objective to control thought, resources and information globally. AI would not hurt us. It would bring down the government funding it.
Platform: youtube · AI Governance · 2024-05-24T21:2…
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwBt-r4d8XDChlmOfF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwio6AQlFx4Up6q8Eh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxMQ9iJcnZ3IJJ4RRB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxuFueYLKZ_LXszbGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugwq0M04tcCY3K-ZJTh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyZoCNu7m5ErULXBeh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx6qZr3m88UjQANVsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugyes8I9SUC9fpMnXYd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwwWFIo5dPgoh7z9eh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzF7BGGrAHRKNZilVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
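A raw response like the one above can be turned back into per-comment codes with a small parsing step. The sketch below is a minimal example, not the tool's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown here, but the allowed label sets are only inferred from the values visible on this page and the full codebook may define more.

```python
import json

# Allowed labels per dimension, inferred from the responses shown above
# (assumption: the real codebook may include additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    mapping of comment ID -> coded dimensions, skipping rows whose labels
    fall outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        dims = {k: row.get(k) for k in ALLOWED}
        if all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[row["id"]] = dims
    return coded

# Usage with a hypothetical single-row response:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"outrage"}]')
print(parse_batch(raw)["ytc_example"]["emotion"])  # outrage
```

Validating labels at parse time catches the common failure mode of batch coding, where the model occasionally invents a value outside the codebook; dropping (or re-queuing) those rows keeps the coded dataset clean.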