Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What of the risks to human knowledge of factual information given that AI is seen to generate factually inaccurate responses to questions some 60% of the time? Is AI yet capable of actual reasoning using rules of science and logic, or only a word string generator creating grammatically correct statements without regard to factual accuracy?
YouTube · AI Governance · 2025-06-20T17:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugz03PBoKb7o5e-DQc94AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyKLpdVEwsfCy9_ub94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy5056FSX-HpHfjUzZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzvYu-WWi6NHALNZj54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzi1YhUl4GaXWyxPvp4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgytYiZrq8QuOoCUzaR4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgztnGrDzOrH6imQ9nJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxlUrWkXMcAClDQmTR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxnL2W-Qw6uOXZwJ4d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzAguT1KzfC57ez8Lh4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"}
]
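Because the model returns one JSON array covering a whole batch of comments, the coding for a single comment has to be looked up by its id. A minimal sketch of that lookup, using an abridged copy of the response above (the field names and ids come from the source; the parsing approach itself is illustrative, not the pipeline's actual code):

```python
import json

# Abridged raw LLM response: a JSON array of per-comment codes.
raw = """
[
  {"id": "ytc_UgxnL2W-Qw6uOXZwJ4d4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz03PBoKb7o5e-DQc94AaABAg", "responsibility": "government",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the batch by comment id so each comment's coding is retrievable.
codes = {row["id"]: row for row in json.loads(raw)}

# The entry matching the Coding Result shown above.
entry = codes["ytc_UgxnL2W-Qw6uOXZwJ4d4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
# -> developer deontological liability fear
```

Indexing by id also makes it easy to detect comments the model skipped or coded as "unclear" before the results are displayed.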