Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Robots will never be conscious for fucks sake. They will only be able to make us…" (ytc_UggnlcXdF…)
- "Soooo what happens when the AI voice changes something and it ends up being to t…" (ytc_Ugy2_Byza…)
- "Haha, it's always good to keep a sense of humor about the potential rise of AI! …" (ytr_Ugz5LxUMt…)
- "The only people who say AI learns the same way artists do are non-artist. We lea…" (ytc_UgxR4_bpv…)
- "My buddy needs to learn from these kids still waiting on his food truck business…" (ytc_UgzlcxFvo…)
- "This is how much contempt Satan has for humanities were actually seriously debat…" (ytc_Ugyqwe8cL…)
- "Making an automatic call center machine ✋ Making a machine that makes the angry…" (ytc_Ugw2Wk1KV…)
- "@thewannabecritic7490 Nice strawman, we need to ban AI art so we don't end up in…" (ytr_Ugwaod6_J…)
Comment
Ai/AGI will not be a major threat until it’s able to run efficiently on general device. Currently “dangerous” Ai can only run in data centers due to the amount of compute it needs.
If the Ai/AGI runs in a monitored environment, it’s easy to manage safety. Unless there are bad human actors that build the infrastructure to run Ai/AGI for destructive purposes.
We already have viruses that run on devices and we can mitigate those. Ai/AGI would be a “super” virus that could evolve over time compared to our current day dumb viruses. However, like I mentioned, our everyday devices such as smartphones and laptops are currently not fast enough to handle a “super” virus involving AGI on device neutralizing the real threat of a AGI massive takeover.
Source: youtube · 2024-06-09T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxj2bYDcZz5H-KZP2d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzfqrGn4DK3mWfzZ-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxqg-TF_5RKEQh9ZcR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzziEQx93I_L1s1RDV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwkD-qDKzHZxsAXlBx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz8ICTkLRkQL4hlReB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwd2eilb2aRg2nQuGV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwPCpbbGged-kVApXp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzKMaL0KLHA-l2RWKB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgybjRWn_iH71pclQzR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
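A batch response like the one above can be checked programmatically before the codes are stored. The sketch below parses the raw JSON and validates each record against the four dimensions shown in the coding table; note that the allowed-value sets are only the values observed in this sample, not a confirmed codebook, and `validate_batch` is a hypothetical helper, not part of the actual pipeline.

```python
import json
from collections import Counter

# Values observed in this sample -- an assumption, not the full codebook.
OBSERVED_VALUES = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "ban", "liability", "regulate"},
}

def validate_batch(raw: str) -> Counter:
    """Parse a raw LLM response and tally policy codes.

    Raises ValueError on a missing field or an unrecognized code,
    so malformed model output never reaches the results table.
    """
    records = json.loads(raw)
    tally = Counter()
    for rec in records:
        for dim, allowed in OBSERVED_VALUES.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        tally[rec["policy"]] += 1
    return tally

# Minimal usage with a one-record batch (fabricated id for illustration):
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
print(validate_batch(raw))  # Counter({'liability': 1})
```

Tallying by dimension also gives a quick sanity check on code distributions across a sample, e.g. spotting a batch where every record came back `"unclear"`.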