Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No Inherent Understanding: Because AI lacks consciousness or belief, it can produce plausible-sounding but entirely false information, a phenomenon known as "hallucination". The AI doesn't have an internal "source of truth" to verify against unless one is explicitly provided during the process.....provided by WHO???
Source: youtube · AI Moral Status · 2025-11-12T10:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
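
A minimal sketch of validating a coded record like the one above against the coding schema. The allowed values per dimension are inferred from the outputs shown on this page and are assumptions; the real codebook may define additional categories.

```python
# Sketch: check that every dimension of a coded record carries an allowed value.
# Allowed values are inferred from the records on this page (an assumption).
SCHEMA = {
    "responsibility": {"developer", "user", "company", "government",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "resignation", "mixed"},
}

def invalid_dimensions(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The coding result shown above passes validation.
coded = {"responsibility": "developer", "reasoning": "deontological",
         "policy": "liability", "emotion": "fear"}
assert invalid_dimensions(coded) == []
```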
Raw LLM Response
[ {"id":"ytc_Ugz7oLyvtb6jtFUxq2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}, {"id":"ytc_UgxDd1QMabXcl1iOXlp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyI2CdGrmKush_pDXx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzRSNWIbg0tFhECTuJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwjYrmvwnN6Tzck-LZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxEXf04263f-vGGRK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy1ZWTzzdBK2oO82Z94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_Ugy1NcJ1h4cYIi4mKNl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwApxVUk_DAYewRb3x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugxp6ZoNmkhtfSIQ5wN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]