Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Carbon emission? Look into financial data (cost/usefulness), and you will be shocked even more. I made AI since 1980th and can definitely tell that the most danger is from its imperfectness, not superpower as Hollywood presents it. Indeed, if you research blockbusters, this becomes clear. The most impressive (often the key) features are completely out of science. Not clear how to implement them even theoretically. Even the superpower of SkyNet in Terminator (put aside time travel) is fake. Internet is well controllable if really needed. All traffic passes just a few hubs and governments use this when they want. Since inception, AI passed several "winters". When it revives, this is accompanied by a commercial boom. What's really dangerous, using of half-workable technology to pump profit.

6:50 Sadly, these systems are black boxes, and even their creators can't say exactly why they work and the way they do.

7:02 For image generation systems, if they're used in contexts like generating a forensic sketch based on a description of a perpetrator, they take all those biases and they spit them back out for terms like dangerous criminal, terrorists or gang member, which of course is super dangerous when these tools are deployed in society.

8:47 It's really important that AI stays accessible so that we know both how it works and when it doesn't work. Frankly, this is a basic rule for any applied science and technology.
youtube · AI Responsibility · 2023-12-18T11:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
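The raw response below makes the record schema visible: a comment id plus four coding dimensions, each drawn from a small category vocabulary. A minimal sketch of that record as a Python dataclass, using only the category values observed in this particular output; the class name, set names, and validate helper are illustrative, not part of any confirmed pipeline code, and the real codebook may define additional categories.

```python
from dataclasses import dataclass

# Category vocabularies as observed in the raw LLM response below;
# the actual codebook may allow more values than appear here.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself",
                  "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "ban", "liability", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "resignation", "indifference", "mixed"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension falls outside the observed vocabulary."""
        for value, allowed, name in [
            (self.responsibility, RESPONSIBILITY, "responsibility"),
            (self.reasoning, REASONING, "reasoning"),
            (self.policy, POLICY, "policy"),
            (self.emotion, EMOTION, "emotion"),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} code: {value!r}")
```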
Raw LLM Response
[ {"id":"ytc_Ugxa2LLQ6IDu0rn6lGd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwCo_0m7BQQrSUJzZR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"}, {"id":"ytc_UgzwpegQZGJ63_exHvh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugz4xRHdTXNVHW9wALd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzcEVrNwDtjhe_Fd454AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxrijh-Bw5pqTy9khh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw4OCO2gXNgOA6LRTx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UgzNRdRz9JzJkm41eGZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxLtUrgM64mASnsAtN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwkn3yH5oz4DOSYhCJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]