Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1:12:00 So, a specific suggestion on how governments could help with AI and drug development is around clinical trials. The drug discovery part of drug development (which we've seen the most benefit from AI thus far) is not the most expensive or time consuming part of getting a new drug out to market. The expensive bit is the fact you by law _need_ to test it on live animal models and subsequently human participants to get the drug approved. That takes time, money and has ethical considerations, although it does happen for a reason (the body is a mind bogglingly complex system - often you get drugs which do what they were designed to do, but interfere with any one of the millions of reactions happening simultaneously). If we move into a world where, due to AI use in the drug discovery phase (as well as other advancements around doing things in-vitro), fewer and fewer drugs are failing early phase trials, then reducing the regulatory requirements on in-vivo clinical trials may be warranted and speed things up. More ambitiously, governments could put money into foundational models which seek to simulate human metabolome (i.e. the sum of all metabolic reactions in the human body). Incidentally, moving away from animal models would also be good because there are probably a _lot_ of drugs which fail on mice but wouldn't on humans (given the inverse is true), but we'll never know, because development ends at the mice.
youtube AI Responsibility 2026-04-22T09:5…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxfSH0sAcbCwhWXE1N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2rx5Brl7cTXHZGxd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyZV-ZWKHvt_yPWaCd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwniAl8pOEQ32_1a9l4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwYk0QqM86956CEQP94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgybrBuIv5sB4iz7bMh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyPGeE8aJ-muCOxBYh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy_548ta4EnoPE7AkN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx5o4dcJ1uq9BSsUXV4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugzi2UONWVL4XCpC4Sd4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
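A minimal Python sketch (an assumption for illustration, not part of the app itself) of how a raw response like the one above can be parsed and indexed by comment id, so each comment's coding dimensions can be looked up directly. The two rows shown are taken from the response above.

```python
import json

# The raw LLM response is a JSON array with one coding object per comment id.
# Two rows copied from the response above, for illustration.
raw = '''[
  {"id": "ytc_Ugw2rx5Brl7cTXHZGxd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzi2UONWVL4XCpC4Sd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]'''

# Index the coding objects by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_Ugzi2UONWVL4XCpC4Sd4AaABAg"]["emotion"])  # prints "indifference"
```

The same lookup pattern recovers the values shown in the Coding Result table from the raw response.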