Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So by Harari's reasoning an AI told not to do anything stupid by humans would observe whether humans do stupid things and if it judged they did, would copy them and do stupid things. Its first thought might be how stupid it appears that intelligent life creates machines capable of destroying itself. Then it might copy that example too...
youtube AI Governance 2025-07-22T09:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwBmIud28l_qtSRqT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyJ-0xOEbWLoLhSZFV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzL9HzXDKb9ha0i6Rl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgztrpcDLVlnizzWtht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7vzxbFJBL1M3BlBl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx6M_WEJeFM6BYGLfh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxdfryuQSOLLmxPnfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgznzNf6vSFBqlh3F4R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxFPPFg-dMfhv7Z8cd4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwoE1RjmODEQJwnjZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
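A minimal sketch of how the raw batch response above could be parsed to recover the coded dimensions for one comment. The helper `codes_for` and the `DIMENSIONS` tuple are illustrative assumptions, not part of the actual coding pipeline; the sample data is a subset of the response shown above.

```python
import json

# Two records copied from the raw LLM response above (subset for brevity).
RAW_RESPONSE = (
    '[{"id":"ytc_UgwBmIud28l_qtSRqT94AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    '{"id":"ytc_UgztrpcDLVlnizzWtht4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)

# The four coding dimensions shown in the result table (assumed fixed schema).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, defaulting
    missing dimensions to "unclear"; raise KeyError if the id is absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(comment_id)

print(codes_for(RAW_RESPONSE, "ytc_UgztrpcDLVlnizzWtht4AaABAg"))
# → {'responsibility': 'ai_itself', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'fear'}
```

Looking up by id rather than by position keeps the display robust if the model returns records in a different order than the comments were submitted.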