Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As a professional experienced in building risk prediction algorithms, a simple questionnaire seems to me to offer too little information, and information too unreliable, to confirm future behavior:
- Too little information: it cannot capture a person's future mentality, the conditions that led to the first crime, or whether those conditions will recur.
- Unreliable: in a questionnaire the person can simply lie in every answer.
Even assuming we had a highly reliable and accurate algorithm (which I personally think is impossible), this type of algorithm is generally designed to use past statistical information to predict future behavior, which is extremely unfair. Just because people with similar behavior have committed crimes again does not imply that a specific person will do the same. Likewise, people who grew up in neighborhoods with higher crime rates, belong to ethnic groups more often convicted of crimes, or hold different political or philosophical views will receive worse sentences. But most important of all, it is not morally ethical to use this kind of tool to dictate people's fates, especially when it can increase someone's sentence based not on the crime committed but on the possibility of recidivism; that is basically punishing the person before they commit the crime. These judicial decisions must always be made by people who understand the entire moral and ethical framework, as well as the impact of their decisions.
youtube 2022-07-26T01:0… ♥ 4
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxGLjPhbv7L5DIQvJB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyUT2ve0yW5k8YrR654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy-aveiVnwA4amrust4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzRvpAAnZnlQG7lVsp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyQglA8BqAtm21JaeZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyx54w0jVvm3e_kP8p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgwHUzQ-UNWEXF-Z6yN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx5JfyOgqMmDf4ya8J4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz6kNrE6viSmd0_jax4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxjf8jwbfTdZ3IiWy54AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
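A minimal sketch of how a coding result like the table above could be recovered from this raw response: parse the JSON array and index the records by comment id. The raw string below is truncated to two records from the array for brevity, and the id used in the lookup is assumed to be the one for this comment, since its values match the Coding Result table; this is an illustration, not the tool's actual pipeline.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id":"ytc_UgzRvpAAnZnlQG7lVsp4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugyx54w0jVvm3e_kP8p4AaABAg","responsibility":"distributed",
   "reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Assumed id for this comment (its values match the Coding Result table).
coding = codings["ytc_UgzRvpAAnZnlQG7lVsp4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer liability
```

Indexing by id (rather than scanning the list) keeps the lookup constant-time when a batch response covers many comments.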