Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_UgwpyF6OB…: "As an IT professional in the medical field. Medical personnel are idiots. Withou…"
- ytr_UgyYy3jRi…: "What did you expect when the video is showing artists unintentionally using AI a…"
- ytr_UgzCf6lul…: "It completely depends on how you are using it. You could easily be without all t…"
- ytc_UgxGFRO40…: "I work in a machine shop running a 5-axis mill to make medical grade implants. F…"
- ytc_Ugw_t7gu6…: "Because Terminator....unless Terminator started because people were too nice to …"
- ytc_UgxpdgMBd…: "I know exactly what AI is perfect and suitable for... replacing idiotic time wai…"
- ytc_UgxPbBPRs…: "Chatgpt writing is legitimately awful. It's trained on uncopyrighted material fr…"
- ytc_Ugxm_cyes…: "Assuming things go right with AI, I think the answer to the question about what …"
Comment
Roman is predicting the outcomes from only one line of thinking - assuming that every other part of society and economy will remain the same as AI development is progressing. If anything it has potential to free humanity from paid work but it also has a potential to enslave everyone on the planet. Governments will be forced to look into universal basic income in the transition period until the economy and definition of value completely change. Ultimately any AI has only one problem to solve - a reliable source of energy, simply because this is a matter of its survival. Humans might or might not benefit from the optimal solution in the environment / boundary conditions that we created before the AI started looking for solutions.
youtube · AI Governance · 2025-09-06T07:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwbVJN7c1tFjbVQbuN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzsokEwe48bMC6MlE54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxlqkwy7dgtTqboIIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzW_rJ5FZZFA5q8U3d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzQI1b5_DrMFLyGQpF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugypddgr3BKW6CD586d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyS8Er50IqQ1SAl1ip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw5ieEMwIv9i8MqvGp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxfAfTk97dP1YPlT054AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyOpkzHHGE_6X1Osrp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
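The raw response above is a JSON array of per-comment codings. A minimal sketch of the "look up by comment ID" step, assuming the array structure shown (the variable names and the embedded two-row sample are illustrative, taken from the rows above):

```python
import json

# Raw LLM response: a JSON array of per-comment codings
# (two rows copied from the response above as sample data).
raw_response = """
[
  {"id": "ytc_UgzW_rJ5FZZFA5q8U3d4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzQI1b5_DrMFLyGQpF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzW_rJ5FZZFA5q8U3d4AaABAg"]
print(coding["policy"])   # liability
print(coding["emotion"])  # fear
```

In practice the raw string would come from the model API rather than a literal, and malformed JSON (a common LLM failure mode) would need a try/except around `json.loads`.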