Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I read that there are already plans for a cyan or green (can't remember which) c… (ytr_Ugz4zhYPV…)
- I asked an AI to write loglines for some movies that will likely be blockbusters… (ytr_UgwoAaLWA…)
- "Ai will kill us because it has human qualities like... deception. Humans can de… (ytc_UgwDX-nxB…)
- I bet when AI starts replacing Congressmen the rules just might start changing..… (ytc_Ugz9mnxNa…)
- The AI race is about greed and control. The human race will destroy itself for t… (ytc_Ugz78MgPd…)
- @fsociety6983 well yeah I sorta get the point that AI is enabling corporations r… (ytr_UgzMrOcQZ…)
- Whew! I was thinking that if Werner Herzog dies we won't have a voiceover talent… (ytc_UgyeT2j2I…)
- I do both digital art and AI art. My pfp is one of my drawings.… (ytc_UgzywdJSv…)
Comment
While a rogue AI is a risk, it's not the most immediate risk. Agentic AI doesn't need to be ASI or even AGI - it just needs to be effective for businesses who are drooling at the thought of replacing $500k in salaries with a single agent.
We're already at a Star Wars droid level of tech. So far, non-AGI systems have proven to be very effective at designing things like rocket motors and ICs, so if we extrapolate that to materials science and robotics, it's not much of a stretch to imagine robotic plumbers in the very near future.
The current administration won't be implementing a UBI even if they're watching the economy collapse around them, so the economic risk is _far_ more real and dire than the still-imaginary Skynet.
youtube · AI Governance · 2025-08-27T02:2… · ♥ 15
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwykEjkQPfhqMmdTzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxCwGVWLanMfqnWwmJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyevG6T0Yv2B5Dtq1p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxQ0B5F8aWE82y9EgF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwY4BbWnnGOfgWa4ad4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
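Because the model returns one JSON array per batch, looking a comment up by its ID means parsing the array and keying each record on its `id` field. A minimal sketch of that lookup, assuming the batch format shown above (the helper name `index_by_id` and the two-record sample are illustrative, not part of the tool):

```python
import json

# Two records copied from the sample batch response above; a real batch
# would contain one record per coded comment.
raw_response = """
[
  {"id":"ytc_UgwykEjkQPfhqMmdTzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwY4BbWnnGOfgWa4ad4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgwY4BbWnnGOfgWa4ad4AaABAg"]["emotion"])  # fear
```

Indexing once and reusing the dict keeps per-comment lookups O(1), which matters when the same batch is inspected repeatedly in the UI.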