Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "That's a regulation problem not an AI problem. I work in the data center indust…" (`rdc_ohyb0mw`)
- "I dont like how only thing that people talk about is dangers.. I think that it h…" (`ytc_Ugy8lDD6S…`)
- "OpenAI as a whole is built on stolen data + compute. Grok is simply plagiarizin…" (`rdc_kco4d98`)
- "There is still human input and human intention. It's a little more abstract, but…" (`ytc_UgzGn7mWk…`)
- "Its not going to erase actors, there is just going to be a rise of AI actors. S…" (`rdc_lub05k0`)
- "5:54 didn’t the Waymo kole CEO more or less say that people would be OK with a f…" (`ytc_Ugw6JFm5H…`)
- "The biggest issue is the use of "AI" to refer only to LLM technology. AI has bee…" (`ytc_UgzxlWpD5…`)
- "they are saying AI will make programer more productive, that means one can do wo…" (`ytc_UgytE0fGj…`)
Comment

> Humans could kill all Gorillas, but we don't do that because we have morals. It seems that intelligence and morals develop together. I think the more intelligence a being is the more moral it becomes. if AI becomes more intelligent than humans it would have very high morals, so we are safe.

youtube · AI Governance · 2025-12-05T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyYLqGdCiCDXwFe9XF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz7fLFhx-ZTY30iDhN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxEM-v269J50Zg8nVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUs9VJolAwZ9JtCyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhwYa1mJw-YQBqaUd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwZk7orM3w14Q2X7gh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxgEyJfGAm80Q7GWsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzuVBF8JpP7Ae8bqKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSxefe9LeyYNEkVhZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDILA_4Ia8orIT_Tt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
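A batch response like the one above can be parsed and validated before it is merged into the coded dataset, which also gives the lookup-by-comment-ID behavior the inspector relies on. A minimal sketch in Python; note that the allowed value sets below are an assumption reconstructed only from the values visible in this sample, and the real codebook may define more:

```python
import json

# Dimension -> allowed values. Only values observed in the sample
# response are listed; the actual codebook may include others.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "distributed", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed", "virtue"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError on a missing ID or a value outside the allowed
    sets, so a malformed batch fails loudly instead of polluting the data.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage: one record from the batch above, looked up by its comment ID.
raw = ('[{"id":"ytc_UgxSxefe9LeyYNEkVhZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"virtue","policy":"none","emotion":"approval"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgxSxefe9LeyYNEkVhZ4AaABAg"]["reasoning"])  # virtue
```

Indexing by ID rather than list position keeps the coded values joinable with the comment metadata (source, topic, timestamp) regardless of the order the model returns records in.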