Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What is technology? Learning how to use fire was a form of technology. It allowed humans to cook their food, freeing energy that would otherwise have gone into digesting food to instead grow our brains, increasing intelligence. The printing press was another technological advancement. It allowed us to document what otherwise had to be passed on from elders to younger people. Since then it became normal to learn from reading and other media. It did not end there: humans invented the atomic bomb. Harnessing the atom was a major advance in science. While it could release huge amounts of energy, it could not pick whom it would bomb; humans were in control. AI is a technological advancement. It can already solve complex math problems better than most humans. The main difference between all the other examples mentioned and AI is that this invention could possibly cause harm and replace humans.
youtube AI Governance 2025-10-22T16:1…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwYkedyhCe7oXAmC_p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwcMxOF5Kt1tpuAXFZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEkY-kuCJyfT7U_px4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyC0sAMR77HA8z0RpV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0BBWFvV_FmgL7kNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw4b7YT9_8lmZkFEGd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFivlgwc2s0s2IV4h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwzhZhR2WmkZ4nvG-x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxWXcjm5NVA10Iy8KB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzFzmHtKehzKDW_NJl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}
]
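The raw response above is a JSON array of coding records, one per comment, each with four dimensions. A minimal Python sketch of how such a response could be parsed and validated is shown below; the allowed value sets are inferred only from the values visible in this page and are an assumption, since the actual codebook may define others.

```python
import json

# Allowed values per dimension, inferred from the coded examples above.
# ASSUMPTION: the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    dimension values all fall inside the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example with one valid record (hypothetical id).
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
print(parse_coding_response(raw))
```

Filtering rather than raising keeps one malformed record from discarding the whole batch; rejected ids could also be logged for manual re-coding.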