Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "did'nt the Dune book already predict this and banned AI. If AI doesnt solve heat…" (ytc_Ugweysmbs…)
- "@SS4KirinBolt Always the same: "but future models, tho!" How is that even supp…" (ytr_UgzCj2Nkb…)
- "I want to know if Dr. Roman predicted where AI would be in 2025 five or ten year…" (ytc_UgzZLgCJt…)
- "Just imagine something so much more smarter than humans, that the gap between AI…" (ytc_UgyrzXf25…)
- "At the same time time I have seem her explanation and she has no idea of AI, so …" (ytr_Ugxz2UT3y…)
- "Yes, because as Eliezer Yudkowsky and Nate Soares wrote in their book "If anyone…" (ytr_UgxrdUgl4…)
- "There's already development for COPD and other diseases. Soon it will be release…" (ytr_UgykvkLQa…)
- "all of AI is Israeli connected - look at who's at the helm of all the corpse rat…" (ytc_UgzW8ui9X…)
Comment

> Thank you for platforming Geoffrey Hinton sooooo much !!! His message MUST be heard, by the masses, and the decision-makers, as some of those risks are already very present, from algorytmic optimization to human obsolescence: those are not fantasized "what if" scenarios, those are very real consequences of the unchecked development and democratization of AI usage in all aspects of society.
>
> PAUSE AI are doing great work in raising awareness and lobbying toward a slowing down on the rolling out of AI so further research can be done to limit existencial risks.

youtube · AI Governance · 2025-06-17T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyUP118DsDMQIKk0VV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzF_kEPxi2jQJBD7Ql4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyVPCC7TA6p9OaGa6l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxGuheFI7Ld0QvIUg54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz1pDb1aHGQywwgYSV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhS-KVJUFxpQ-xX554AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy96S9jU9VDUuhPMgl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzpYTAD57y547QxFnt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzOGYnQ4abRC7q5ru14AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz5qJuYEOWyh9PyTOt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```