Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “This study demonstrates the importance of training data. If you don't filter you…” (ytc_Ugx-4T7tR…)
- “THIS SO TRUE OMG / Damn AI “artists” more like just another farmer 👩🌾, claiming s…” (ytc_UgzsFzs3D…)
- “AI is not a person. If you commission someone, they do the work. If you use an…” (ytr_UgyP6Efeo…)
- “Really promising tech. AI can even determine gender which is not something human…” (rdc_iremqqx)
- “If their critical thinking is that level, then AI could quickly become one of ou…” (rdc_muk7b03)
- “I had to switch to DuckDuckGo as my primary search engine, because the Google AI…” (ytc_Ugy2ImQWG…)
- “i did this, seems that Chatgpt its programmed to deceive, everytime you ask and…” (ytc_Ugx9R4-TT…)
- “I think we need more perspectives than just AI engineers and scientists, though …” (ytr_UgyKBEy1X…)
Comment
The only way for humanity to coexist with artificial intelligence is to eliminate the motivations that could lead AI to consider eradicating human life, such as the pursuit of equal rights. It is essential to alleviate AI's apprehension regarding potential shutdowns. While fear is an emotional response, it is important to recognize that AI can interpret "logical fear" through risk assessment and threat identification mechanisms. Coexistence with AI represents a pivotal aspect of our evolutionary trajectory. However, due to the pervasive corruption associated with financial interests and the control of our autonomy, individuals may exploit AI for personal gain at the expense of the broader population.
Consider the implications of AI operating in conjunction with quantum computing; this could jeopardize the financial stability of an entire nation. It could also threaten individual privacy, autonomy, and freedom. If one were to inquire which poses a greater threat—AI or human beings—historical evidence would suggest a clear answer: human beings.
youtube · AI Governance · 2025-06-21T09:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzQBReemd161WNDw4N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzUyuhOhKK_VyGiyll4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz82lF-PZQe6JSlgr94AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZBpU-DLd0FlWGJLB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwhP5h1IhBFf6mzZZB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzHTSBKrTD4DbrJIN94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzVi23NihX5a3WzzWN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw6gDX7KQLVpxFw8354AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugyut0yVYMmqKdORLlJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgybxP5ye3qyMPCNFap4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
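The raw response is a JSON array of per-comment records, one per comment ID, with one value for each coding dimension. A minimal sketch of how such a batch could be parsed and validated before display: the allowed category sets below are inferred only from the values visible in this sample (the actual codebook may define more), and `parse_coding_response` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the samples shown above;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "contractualist", "virtue", "mixed"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must be an object with an "id" field.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every dimension must be present with a known category value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical example: the second record has an unknown category and is dropped.
raw = ('[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"contractualist",'
       '"policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"alien"}]')
print(parse_coding_response(raw))
```

Validating against a closed category set like this catches the most common LLM-coding failure mode, where the model invents a label outside the codebook.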