Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
“We do have what it takes to manage the problem because we created it,” he says … this isn’t a problem like the extinction of the dinosaurs, which came from outside forces. That is illogical. We can’t always solve or “manage” problems just because we created them. There are all kinds of ways we can create a problem we can’t manage or resolve. A deadly drunk-driving collision is a problem that cannot be managed or solved once it has arisen. Maybe he means prevent when he says manage. For some problems we might not be able to reverse a poor outcome we have created, but we might be able to put safeguards in place to save others in future, e.g. all the safety features and protocols developed as a result of air accidents. We don’t have the imagination to foresee all the causes of accidents, but we can manage retroactively.

Another point: the scale and reach of AI could augment problems, making them infinitely more unmanageable. If we can’t foresee and prevent human-created problems, I don’t think we have much chance of imagining and preventing the kinds that AI could create for us.

And another challenge with AI is that it seems we may only get one chance. Most problems we humans create are contained and limited. With AI a problem might be species-wide. There isn’t another human species, so we can’t get it right next time.
youtube AI Governance 2025-07-20T18:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwFZwLtT1p2eGKtEON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzmg3Eb2I3PZAeON394AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy3Xe-Zvhu2OJoXHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy7YAZ2pX0O2Suh6mt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxhvcsItIdMOxRO-Jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz3GdOhDzXwHZQ5TSp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwTM7EwXsmg0AjdfYN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxQ_gKuIf-KUriojwF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzVOv8x8DbaRfHV6iZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwUFP2e04fE-zGe_x54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
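The raw response is a JSON array of per-comment codes keyed by comment id. A minimal Python sketch of how such a batch could be parsed and checked against the coding scheme — the `ALLOWED` sets below are inferred from the values visible in this response and may not cover the full codebook:

```python
import json

# Allowed values per dimension, inferred from the responses above (assumption:
# the real codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response, index the codes by comment id,
    and raise on any value outside the allowed coding scheme."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-item batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"outrage"}]')
print(parse_batch(raw)["ytc_example"]["policy"])  # liability
```

Validating each dimension before storing the row means a malformed or hallucinated category fails loudly at ingest time rather than silently polluting the coded dataset.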