Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Usually like your stuff, but it seemed you were becoming frustrated that you just weren't getting it and were trying to convince him otherwise. If we train an AI to be a billion times smarter than a human and it goes rouge, human programmers will have no idea of what weapons it has employed, let alone be able to stop them.
youtube 2024-06-14T10:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz7jiol8y-3kDAQ72p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwLRAA5Qu-ZLBa8C6l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwOaAlI_D6L2KcRxyF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9-oU5MPpmgAfkr254AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTXGMA-UqmL5vf-9l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwu73_33tALn8zxVMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwmQ1hv_mfhIV4byUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxybr-Ra7PgMv0XTFN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy5pVyQGxzO-sk7cwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzomI_L3ALbMDFUUkh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
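A minimal sketch of how a raw response like the one above could be inspected programmatically: parse the JSON array and index the codings by comment id so the coding for any single comment can be looked up. The `by_id` helper is illustrative, not part of the tool; `raw_response` here is trimmed to the one entry from the dump above that matches the coding result table.

```python
import json

# Hypothetical raw LLM response: a JSON array with one coding object per
# comment id (structure taken from the raw response shown above; trimmed
# to a single entry for brevity).
raw_response = '''[
  {"id": "ytc_UgzomI_L3ALbMDFUUkh4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]'''

# Parse the array and index the codings by comment id.
codings = json.loads(raw_response)
by_id = {c["id"]: c for c in codings}

# Look up the coding for one comment and read out a dimension.
coding = by_id["ytc_UgzomI_L3ALbMDFUUkh4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # fear
```

Indexing by id is what lets the table view above join a comment to its row in the batched raw response, since the model returns all comments of a batch in one array.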