Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing random samples.
Comment

> Firstly ai is an existential risk and beneficence we should focus on that and reduce the other. Secondly there are a number of control systems I can imagine to help mitigate its threats. 3rd Intelligence should only be embodied to the level for it to solve a task. Ai doesn't need to build robots we already did this and anything that is electrical may be able to be influenced with a sufficiently powerful ai. There are already angels and shoggoths in the current system. We can make more angels by very clean and kind data. :) One more thing ;)

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2023-07-09T16:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJgzi4OkQ7QPapltJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugwu0fayEqNBHovgu2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxJXnv95u_j7vvt3Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzKWrogoupRqwRe8EZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVZxBgODIUen5Phwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwaIsXG6vGzkg3o0V14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxF-JUIdpiLbjc_lUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwuzAUM67Dn8MAFwxZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzZBZa5vsqXpN2YZ2t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
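A response in this shape (a JSON array of objects keyed by comment ID, each carrying the four coding dimensions) can be parsed and indexed for lookup with a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the allowed value sets below are inferred only from the labels visible in this one response, and the real coding scheme may define more.

```python
import json

# Dimension vocabularies inferred from the values visible in the response
# above; the actual coding scheme may include additional labels.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM response and index coded comments by comment ID,
    rejecting any row whose dimension values fall outside the vocabularies."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Look up the coded dimensions for the comment shown above.
raw = (
    '[{"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)
coded = index_response(raw)
print(coded["ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg"]["policy"])  # -> regulate
```

Validating against a closed vocabulary at parse time catches the common failure mode of LLM coders inventing off-schema labels, rather than letting them silently propagate into the results table.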