Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
this is hella stupid. It is so full of holes that you could use each of the holes to the double the brain capacity of any given “LLM’s are totally just 1-3 years away from AGI” proponents. Apart from real world practical stuff, such as people not noticing a data center deciding to build (and somehow power) hundreds of new data center, there are just some theoreticians that need to go and read some neuroscience papers., Eg even if you end up building a skyscraper sized data server in three years and get it to run so that what we would recognize as intelligence manifests as an emergent property of the hardware-software, it is pretty arrogant/stupid/technohyper optimistic to think that *this* form emergent intelligence would somehow know what makes it become a sentient intelligence. There is just no reason to think that an AGI in a huge server would be able to detangle what makes it it tick, anymore than we can explain our own emergent intelligence. Even modern neural networks just run as black boxes and cannot explain their own functions, and an AGI would be millions of time more complex.
youtube AI Governance 2025-08-14T20:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyxSQz13FMxlaATM94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgydsWrUaghYf1ElErt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzoKvibx8VavjvHGsd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwguGzCfJs4KwjWZKJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC7SL1jgHJUwJMkpl4AaABAg","responsibility":"society","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzpqZfwTr4Ya5Z10hN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVHGVKkjc6axcbzI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx9VMoa4XEQAEZpGcl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzlEhDUifLS8lfcSlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
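To inspect a single coded comment, the batch response above can be parsed and keyed by comment id. A minimal sketch follows: the `id` values are taken from the response above, the four dimension names match the coding table, and the presence-check validation is an assumption, not part of the tool itself (the batch is abridged to two entries for brevity).

```python
import json

# Raw batch response as emitted by the coder (abridged to two entries).
raw = '''[
 {"id":"ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyyxSQz13FMxlaATM94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse the batch response and key each coding row by comment id."""
    rows = json.loads(raw_json)
    out = {}
    for row in rows:
        # Assumed sanity check: every row must carry all four dimensions.
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        out[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return out

codes = index_by_id(raw)
print(codes["ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg"]["emotion"])  # outrage
```

Looking up the first id reproduces the coding shown in the result table (emotion: outrage).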