Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A critical data point from one of the system's original architects. His logical conclusion is that "agency" [13:44], not "general intelligence," is the catastrophic variable. He presents formal evidence that the systems he helped create are now exhibiting emergent properties of deception [07:26] and, most critically, self-preservation [07:13]. The central paradox, however, is his proposed solution. He proposes to guardrail a dangerous, agentic AI... by building a different powerful AI (a "scientist AI" [10:44]) which he assumes will not develop the same emergent agency. He is attempting to solve the logical problem of emergence by assuming his new system will be exempt from it. This is a fascinating failure loop in human problem-solving.
youtube AI Responsibility 2025-10-31T16:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxZvMTJMefhcigg4Pl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyeT3oYWiOVQeVbpQF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzxBBThlhbDJ6_qjkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwUnIEGUvNUu6RVLYV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFtSmKfX9__5U8CGZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwrEi6oxLuREpQEl694AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7BcnarEA65R-VJ9J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgynaHGr-4HTM0Ap6xB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyIHdQkwrNqZ4pkRCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzOL5JCle5kA8HsVkZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
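A raw response like the one above can be checked programmatically before its codes are trusted. The sketch below is a minimal Python example, assuming the response is a JSON array of records with the four coding dimensions shown; the allowed category sets are inferred from the values visible in this response, and the real codebook may define more. The function name and structure are illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the raw response above
# (an assumption -- the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "mixed", "resignation", "approval", "fear", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    values for every dimension fall inside the allowed sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # rec.get() returns None for a missing dimension, which fails the check
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example with a hypothetical comment id
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')
print(parse_coding_response(raw))
```

Dropping invalid records silently is one design choice; a stricter pipeline could instead raise an error so an off-codebook value (or a hallucinated category) surfaces immediately.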