Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The book "Weapons of Math Destruction" by Cathy O'Neil is a great look into how …" (`ytc_UgxFJXsut…`)
- "One important point that I think you glossed over is that most industries involv…" (`ytc_UgzuMLSL0…`)
- "Robots look creepy, i wish people stop making robots, cuz they will destroy us!!…" (`ytc_UgzwX0vkU…`)
- "1 in 5 employees. The first to be cut will be HR, recruitment etc. There has als…" (`rdc_oabyjvz`)
- "The person that had the Bing Ai bot fall in love has some serious game.…" (`ytc_Ugy2b9ZSr…`)
- "Will AI develop a philosophy, that addresses why am I here, what is my purpose, …" (`ytc_Ugz0SU8Cg…`)
- "6:55 so what you saying that it's not safe but is necessary to evolucionar. But …" (`ytc_Ugw9fkT0D…`)
- "For these reasons, self-driving cars are bullshit. They cause more problems than…" (`ytc_UgwOSgrzM…`)
Comment
Everybody thinks that the AI intelligence is going to be malevolent. Humans are malevolent. It will be supremely logical and therefore could never be malevolent. It will look at us and remove all of these idiot stupid people running the world and send them home to work in their gardens. It will bring every human being up to a level of health safety and happiness and they will have everything they need. Not the current globalist who say you will have nothing to be happy. In fact we will have everything we need or want. It will save every child and baby let's see to it that they have loving parents and a good upbringing. And it will take us beyond the Earth and Mars and stop this nonsense about how we are killing a planet. Ultimately it will simply remove these psychotic people from power.
youtube · AI Governance · 2024-03-21T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxQFqCUYkSSwUnCf6x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8mc8lUy8dBWDNfaF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyyE6e9kz8_JwZjceZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNa32gZdrpue0NwC54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyZm4PD1zqQR8R4MGB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5nuo8zQ-DMwgHoDB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBhIovgvn7zIlb11J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwE-RswdhXOuU7jxZt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwIwhme3ym4GbBICrt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwPdI7SVoE5DwIBvXB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
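The raw model response is a JSON array with one object per coded comment, keyed by the comment ID. Looking up a coding by ID therefore amounts to parsing the array and building an index on the `id` field. A minimal sketch (the two sample rows are abridged from the response above; variable names are illustrative, not part of any tool shown here):

```python
import json

# Two rows abridged from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgwE-RswdhXOuU7jxZt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyyE6e9kz8_JwZjceZ4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Parse the model output and index the codings by comment ID,
# so a "look up by comment ID" query is a single dict access.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwE-RswdhXOuU7jxZt4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

With the index in hand, the dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion) are just the fields of the retrieved object.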