Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you train an ai model on morals, science... Have it differentiate between what's right and wrong based on what we set up as laws it would understand that we as humans are not good enough and it would take over... The more you feed an AI model the stronger it gets because it's their job to make decisions and observations without human intervention.. idk this seems kinda way futuristic but feasible and logical
YouTube · AI Moral Status · 2023-07-07T14:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzYaEAG7scD87uNGG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx0sQCDwXCZwmPtysp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzLM6mdrBW7r3svbf14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwgAvs-ukMlH-U3C8B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzbWTcEtt-tYE_Q7ep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy1eKk-zgYwkXPzbVN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyUYmOViLDl6xsKnQp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwoznhfANkeEUtSIeN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdfYsHbAhtDduuGdl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyqK9hgG_fWdbuFkHd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
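As a sketch of how a raw batch response like the one above can be turned back into the per-comment coding shown in the table: the model returns a JSON array of records, so parsing it and indexing by comment id recovers the four dimensions for any one comment. This is an illustrative assumption about the surrounding pipeline, not its actual code; the `raw_response` string is abbreviated to two records from the payload above.

```python
import json

# Abbreviated copy of the raw LLM response shown above (two of the ten records).
raw_response = '''[
  {"id":"ytc_UgzLM6mdrBW7r3svbf14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwgAvs-ukMlH-U3C8B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

# Parse the batch and index it by comment id for single-comment lookup.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the coding for one comment and read off its dimensions.
coding = by_id["ytc_UgwgAvs-ukMlH-U3C8B4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # mixed
```

Indexing by id rather than list position matters here because the model is not guaranteed to return records in the order the comments were submitted.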