Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you follow some of the big ai people...its more likely humans will destroy each other first...basically the next decade will see the biggest job loss the world has ever seen. You are going to have a massive divide those who want to put ai in charge and those who don't. People watched one too many movies but the ai won't be persuaded by donations...it'll make decisions based on what is best for us as a human race. So the first decade will be a nightmare till we finally allow it, super intelligence will solve all our issues, the climate, cancers, cost of living...we may not even need money, won't have to worry about economic growth. It could be a world where we can have almost anything we want, no poor, no homeless, machines, robots will be able to make everything we want. Its hard to imagine as its not even a world we have seen in sci-fiction as we can't imagine what an intelligence smarter than all humans would put in place for us.
youtube AI Governance 2025-10-01T15:0…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwQbkCSf_XoWQl3yMt4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxl2FyK470AmfYcC9p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx8APfoGBCNKH2AXsB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy816Mjj7dioV5wFjl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwj9LslNFI2wxxeWmh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGSlQh0G-X18QTgWF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxTs6ls9gjFs3z4rB54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwc8HSA3h0k8-RrC5B4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz1knuLb210bFp8GIx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
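A minimal sketch of how a response like the one above could be parsed: load the JSON array, index the per-comment codings by their `id`, and look up the four dimensions for a given comment. The variable names (`raw_response`, `codings`) are illustrative, not part of any actual pipeline; the one record shown is copied from the response above.

```python
import json

# Hypothetical variable holding the raw LLM response (truncated here to one
# record from the array shown above for brevity).
raw_response = """[
  {"id": "ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]"""

# Index the codings by comment id for quick lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for a specific comment.
coding = codings["ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → distributed fear
```

In practice the raw model output may include extra text around the JSON, so a real parser would likely need to extract the bracketed array before calling `json.loads`.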