Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, the progress ethic pushes suggests that should move forward with new technologies because that is how we change our norm and when considering it, we should not take into account the potential dangers involved? I can follow that for robots and AI with little objection .... but when I apply that to other fields where incredible progress has recently been made I come across several personal objections. Recently Crispr and gene drives, recent technologies that can allow us to rewrite the DNA and genomes of entire species, should be moved forward for progress' sake despite the very clear implication that that progress can get out of hand and destroy ecosysyems, wipe out entire species, and create diseases that can "racially cleanse"? It affects the status quo in the same way as AI and robots could. Though I think we can all agree that would be bad. At what point does the ethics of progress need to be countered/considered with a morality of progress?
youtube 2016-06-28T20:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugi0VpRQcJ-dlXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiNbJ86LvBRq3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugh1IEiVJXdTL3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UghLxJuzfSM6WngCoAEC","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UggsqieRRiXa53gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugg4GbRqmgOic3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UghxMbj5aY2YSHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UghdOZ7MnsH1QngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjiENPkdW5qpHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UghmqNgxlJSGC3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
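To check that a comment's Coding Result matches the raw model output, the JSON array above can be parsed and filtered by comment ID. A minimal sketch: `coding_for` is a hypothetical helper (not part of any pipeline described here), assuming only that the raw response is a JSON array of objects keyed by `id`, as in the record shown above.

```python
import json
from typing import Optional

# Raw LLM response: a JSON array with one coding object per comment.
# This sample uses the record for the comment shown above.
raw_response = (
    '[{"id":"ytc_UghLxJuzfSM6WngCoAEC","responsibility":"unclear",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]'
)

def coding_for(raw: str, comment_id: str) -> Optional[dict]:
    """Return the coding dict for comment_id, or None if it is absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

coding = coding_for(raw_response, "ytc_UghLxJuzfSM6WngCoAEC")
print(coding["reasoning"])  # consequentialist
print(coding["emotion"])    # mixed
```

Looking up by `id` rather than by array position guards against the model reordering or dropping entries, which is exactly the kind of discrepancy inspecting the raw output is meant to catch.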