Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
Random samples
- "Testing alternative paths via decommission vs killing someone will only train th…" (ytc_UgwtrUH9l…)
- "Let's be honest self driving will always killing many people . Boycott self driv…" (ytc_Ugz8MlJcC…)
- "Call me crazy but why the background noise decreases when the robot talks? I jus…" (ytc_Ugz27V72W…)
- "boring... AI is not greater anyway ( all companies who invest in it knew it alre…" (ytc_Ugy4CXZma…)
- "It seems sensible to me. I´m more interested in GPT doing creative enhancement …" (ytc_UgzstZNNs…)
- "These ai "artists" are not artist at all. They are bunch souless humans who only…" (ytc_UgzuU1o87…)
- "T yt Jerez a difference between being inspired by and down right copying. Since …" (ytc_UgwO5TpT-…)
- "Might take a million years but if Ai does take over and mankind retreats again i…" (ytc_UgyiAfNcP…)
Comment
If you follow some of the big ai people...its more likely humans will destroy each other first...basically the next decade will see the biggest job loss the world has ever seen. You are going to have a massive divide those who want to put ai in charge and those who don't. People watched one too many movies but the ai won't be persuaded by donations...it'll make decisions based on what is best for us as a human race. So the first decade will be a nightmare till we finally allow it, super intelligence will solve all our issues, the climate, cancers, cost of living...we may not even need money, won't have to worry about economic growth. It could be a world where we can have almost anything we want, no poor, no homeless, machines, robots will be able to make everything we want. Its hard to imagine as its not even a world we have seen in sci-fiction as we can't imagine what an intelligence smarter than all humans would put in place for us.
youtube · AI Governance · 2025-10-01T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
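The four coded dimensions can be validated against the category values observed on this page. This is a minimal sketch: the value sets below are inferred from the sample output shown here, and the full codebook may define additional categories.

```python
# Dimension values observed in this page's sample output; the full
# codebook may include more categories (inferred sketch, not the tool's code).
SCHEMA = {
    "responsibility": {"company", "developer", "user", "government",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def is_valid(record: dict) -> bool:
    """Check that every coded dimension holds an allowed value."""
    return all(record.get(dim) in allowed for dim, allowed in SCHEMA.items())

# The coding result from the table above passes validation.
example = {"responsibility": "distributed", "reasoning": "consequentialist",
           "policy": "liability", "emotion": "fear"}
print(is_valid(example))  # True
```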
Raw LLM Response
```json
[
{"id":"ytc_UgwQbkCSf_XoWQl3yMt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxl2FyK470AmfYcC9p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8APfoGBCNKH2AXsB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy816Mjj7dioV5wFjl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwj9LslNFI2wxxeWmh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGSlQh0G-X18QTgWF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxTs6ls9gjFs3z4rB54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwc8HSA3h0k8-RrC5B4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz1knuLb210bFp8GIx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
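The lookup-by-comment-ID view above can be sketched as follows: the raw LLM response is a JSON array, so it parses directly and can be indexed by `id`. Variable and function names here are illustrative, not the tool's actual code, and the response is abbreviated to one record.

```python
import json

# Raw LLM response: a JSON array of coded comments (abbreviated to one record).
raw_response = """
[
  {"id": "ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# Index the coded records by comment ID for constant-time lookup.
records = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment, or None if absent."""
    return records.get(comment_id)

coding = lookup("ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg")
print(coding["emotion"])  # fear
```

A dictionary keyed on `id` is enough here because comment IDs are unique per platform; a database index would serve the same role at scale.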