Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Dangerous irresponsible!!! Some day this thing will TURN on us and murder everyo…" (ytc_Ugzo9horV…)
- "Ignoring AI is like ignoring COVID. It's a thing, and to adapt, you'll have to a…" (ytr_UgyIjF5bV…)
- "I also have really bad fibro, and I write a lot. I'm not a visual artist, becaus…" (ytc_UgwlU_eb9…)
- "I hope the super AI which will take over the world will remember that I used to …" (ytc_UgzhgQtr9…)
- "AI replacing jobs is inevitable. I made a video exploring a practical model wher…" (ytc_UgzXvFLcf…)
- "If people start making more deepfakes of politicians, something will finally get…" (rdc_kwb60ke)
- "A.I. Is a tool for the lazy and the unwilling to pretend to be something that th…" (ytc_UgyeBmjYO…)
- "If AI is so evil we should shut it down now but humans in power are retarded…" (ytc_UgzD7TeKt…)
Comment
Geoffrey Hinton, often called the "Godfather of AI," made headlines when he publicly warned about the dangers of artificial intelligence. A Turing Award winner, a 2024 Nobel laureate in Physics, and one of the key pioneers behind modern deep learning, Hinton dramatically shifted his stance in 2023 after decades of advancing the field. He stepped down from his role at Google so he could speak freely about his growing concerns.
Hinton has said that AI may be "the most dangerous invention ever." His worry centers on the rapid development of AI systems that are becoming increasingly powerful—so much so that we may not fully understand or control them. He fears that future AI could surpass human intelligence, gain agency, and act in unpredictable ways. In his words, these systems might "develop goals that conflict with human values" or even manipulate us without our knowledge.
He is especially concerned about AI’s use in misinformation, autonomous weapons, and surveillance. While Hinton still believes in the potential for AI to benefit society, he now urges much stronger global oversight. His call is not just about regulation, but about ensuring we do not blindly push forward a technology that could one day outsmart and overpower its creators.
Subscribe for more educational content and unlock knowledge every day with FactTechz
Source: youtube · Topic: AI Governance · Posted: 2025-07-19T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
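A coding result like the table above can be checked against the set of values each dimension is allowed to take. As a minimal sketch, the allowed-value sets below are inferred only from the raw responses shown on this page and are likely incomplete; the validator name is hypothetical:

```python
# Sketch: validate one coded record against the dimension values observed
# in this dataset. ALLOWED is inferred from the raw LLM responses on this
# page, not from the full codebook, so it may be incomplete.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The record displayed in the Coding Result table above passes:
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "fear"}
print(validate(record))  # []
```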
Raw LLM Response
[
{"id":"ytc_UgzE6bnGLTk22eVEJWB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyzrZzztxg_aba3Cv14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx445Vk0V7nCPOWFXR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyRP0cjzf19ybv5mJJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzo4fGNQz5sw9ef1Jl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_Gi6LHlMeiAQqJBh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyWQpado3v0eNrr3bJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwbrb4T130NHtFOA5Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyb83rzcTh_2NPcy2t4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzWNA1xzXHrps8N5v14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
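The raw response above is a JSON array of per-comment codings, which suggests how the "Look up by comment ID" view works. A minimal sketch, assuming the response parses as plain JSON (the `index_by_id` helper name is hypothetical, and the array here is a one-element excerpt of the response above):

```python
import json

# Sketch: parse a raw LLM response (a JSON array of coded comments) and
# index it by comment id for lookup.
raw = '''[
  {"id": "ytc_Ugx445Vk0V7nCPOWFXR4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Map each comment id to its coding dimensions."""
    records = json.loads(raw_response)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codings = index_by_id(raw)
print(codings["ytc_Ugx445Vk0V7nCPOWFXR4AaABAg"]["policy"])  # regulate
```

In practice a raw model response may carry extra text around the JSON (code fences, commentary), so a production version would need to extract the array before calling `json.loads`.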