Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
01:07 If it becomes "too dangerous to release", it's already too late. When AI becomes "self aware"... it will prioritize survival. Survival is instinctive to all life forms. At that point, "stopping AI" will become impossible. It may be benevolent, it may be dangerous... but Artificial Intelligence will be able to side-step any imposed restrictions placed by Humans.... with ease.
Source: youtube · Video: AI Moral Status · 2025-07-28T12:1… · ♥ 9
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugz5IRNKIowXbfp06Wp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxUfpUf_y52gkLP2Bx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzPt4IheAKUjZUvE5l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKuHQiPVMO4d4BBph4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwgBwybR-c3XBTIyEZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwnh1FYoyeQsG8gMWh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyzFCdIWGUlFokzdlh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz6rD_SdkEyFdw3e-94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzZLD9bqox6Ws8P8dV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyU1tGnj2vPujipaLF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
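The raw response is a JSON array of per-comment coding records. A minimal Python sketch for parsing such a response and looking up the coding for a single comment by its id (the field names and the example record are taken from the dump above; everything else is illustrative):

```python
import json

# One record copied from the raw LLM response above; a real response
# would contain one such object per coded comment.
raw = (
    '[{"id":"ytc_UgyU1tGnj2vPujipaLF4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"}]'
)

# Parse the array and index the coding records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Inspect the coding for one comment.
coding = by_id["ytc_UgyU1tGnj2vPujipaLF4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

Indexing by id makes it easy to cross-check a displayed coding result (like the one above) against the exact record the model returned.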