Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While a rogue AI is a risk, it's not the most immediate risk. Agentic AI doesn't need to be ASI or even AGI - it just needs to be effective for businesses who are drooling at the thought of replacing $500k in salaries with a single agent. We're already at a Star Wars droid level of tech. So far, non-AGI systems have proven to be very effective at designing things like rocket motors and ICs, so if we extrapolate that to materials science and robotics, it's not much of a stretch to imagine robotic plumbers in the very near future. The current administration won't be implementing a UBI even if they're watching the economy collapse around them, so the economic risk is _far_ more real and dire than the still-imaginary Skynet.
youtube AI Governance 2025-08-27T02:2… ♥ 15
Coding Result
Dimension       Value
--------------  ----------------
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear

Coded at: 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwykEjkQPfhqMmdTzR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxCwGVWLanMfqnWwmJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyevG6T0Yv2B5Dtq1p4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxQ0B5F8aWE82y9EgF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwY4BbWnnGOfgWa4ad4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
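A response like the one above can be turned into per-comment records by parsing the JSON array and rejecting any entry whose values fall outside the codebook. The sketch below is a minimal illustration, not the pipeline's actual code; the `ALLOWED` sets are an assumption inferred only from the values visible in this log, and the full codebook likely contains more categories.

```python
import json

# Allowed values per dimension -- an assumption, inferred from the values
# that appear in this log; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"unclear", "regulate", "liability"},
    "emotion": {"mixed", "fear", "indifference", "outrage"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse the model's JSON array, keeping only entries whose
    dimension values all fall inside the allowed codebook."""
    entries = json.loads(raw)
    return [
        entry for entry in entries
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# One entry copied from the raw response above.
raw = '''[
  {"id": "ytc_UgwY4BbWnnGOfgWa4ad4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

print(parse_codings(raw)[0]["policy"])  # regulate
```

Keeping the validation strict means a hallucinated category (e.g. a misspelled emotion) drops the entry rather than silently entering the coded dataset, which matches the point of inspecting the exact model output.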