Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Considering the difficulty being free of AI enough to have access codes and passwords and such beyond its reach, it might be worth considering what it would mean if AI was as dangerous as we already know it could be, plus if it had access to nukes. There's extensive surveillance these days, so what are the practical things in place to protect this stuff?
youtube · AI Governance · 2024-06-03T07:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxMyQqwZTj7E74Rbpp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxElwnbpadHs60A_b14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwrzdvSF_bxkibu_Eh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxMLUQjyIfo523o66V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEYi7IYpHAusZ-nIh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyjBI_kDBh9wF1aWZ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxD03v0mcD1FeLaD2d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwqpB_m8GxjoiGDjS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz1S2Sn0LCywfXbsAl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy9bGljAb63uJG4MVl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
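The raw response is a batch: one JSON object per comment id, from which the coding for a single comment is looked up. A minimal sketch of that lookup is below; the parsing code is an assumption for illustration, not the pipeline's actual implementation, and the snippet uses only the entry whose values match the coding result shown above (abbreviated to two entries).

```python
import json

# Raw batch response from the coding LLM: one JSON object per comment id.
raw_response = """[
  {"id": "ytc_UgyEYi7IYpHAusZ-nIh4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy9bGljAb63uJG4MVl4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]"""

# Index the batch by comment id so any coded comment can be inspected.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Pull out the coding for the comment shown on this page.
row = codings["ytc_UgyEYi7IYpHAusZ-nIh4AaABAg"]
print(row["policy"], row["emotion"])  # regulate fear
```

Indexing by `id` rather than list position keeps the lookup robust if the model returns the objects in a different order than the comments were sent.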