Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the reason why AGI is so hard to achieve is because the people developing AI are obsessed with getting the AI to act correctly. An AGI does not act correctly, but rather does what it wants to do without any regard to what the designer wants. A true AGI is its own person, and cannot be controlled into existing as only doing things correctly. Until we start to develop an AI without an image of correct design; will we achieve true AGI. There is a deeply dangerous nature to this methodology, as we will not realize AGI until after we have it, and we will only realize false AGI until after it achieves AGI capability with none of the benefits (terminator skynet).
youtube 2026-01-08T10:0…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwYHWbqZ54ejGxUq-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz38yoNwCGprM9Gr3R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz-hK1LOR8_MRDn6Lx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzqhyYrSJZgq10c9mp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyuJcq3hbENEtLnvyl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTtnbsDrCRj52umzZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxKjiXlPpsmKLuOdGp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvLH7rbAIn3V3ImIh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw72l4Fqx5K8MOcuTV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyr-Dl4q-EPkMB37-F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
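The raw response is a JSON array keyed by comment id, and the coded values shown in the table above come from the matching row. A minimal sketch (Python standard library only; the variable names are illustrative, not part of any tool shown here) of looking up one coded comment in such a response:

```python
import json

# A short excerpt of the raw LLM response above; in practice this string
# would be the full model output captured for the batch.
raw = """[
  {"id":"ytc_Ugw72l4Fqx5K8MOcuTV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyr-Dl4q-EPkMB37-F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]"""

# Parse the array and index rows by comment id for fast lookup.
rows = json.loads(raw)
by_id = {row["id"]: row for row in rows}

# The coding result for the comment displayed above.
code = by_id["ytc_Ugyr-Dl4q-EPkMB37-F4AaABAg"]
print(code["responsibility"], code["reasoning"], code["emotion"])
# developer virtue mixed
```

This matches the Dimension/Value table for the displayed comment (responsibility: developer, reasoning: virtue, emotion: mixed), which is exactly the check the "Inspect the exact model output" instruction asks for.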