Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was just the result of an 'if' function. The robot assumed it was requested a task so it said 'okay, i will (insert inputted speech, in this case destroy all humans)' but since it does not harbor the knowledge to carry out this function, it does nothing.
YouTube · AI Moral Status · 2016-10-27T19:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UghOOeNz8r2vhXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UghKg5RjiSXIO3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Uggoy2Y17Mb-bXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugjqc3PSGKr3rHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugg62pzAp7c8mXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghS4Hz9xLZQMXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugj1ZH_ifI3yb3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiRT_43Bk3K-HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugg97SDTD0HWkXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgiK1IELr0hZP3gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
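A minimal sketch of how a raw response like the one above could be parsed and validated before the per-comment results are stored. The allowed code vocabularies below are inferred only from the values visible in this record set; the project's actual codebook likely contains more categories, and `validate_records` is a hypothetical helper, not part of the original pipeline.

```python
import json

# Allowed codes per dimension, inferred from the visible data (assumption:
# the real codebook may define additional categories).
DIMENSIONS = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "resignation", "mixed", "approval",
                "outrage", "indifference"},
}

def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must carry a comment id and a known code
        # for each of the four dimensions.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in codes for dim, codes in DIMENSIONS.items()):
            valid.append(rec)
    return valid

# Two sample records: the first is taken from the response above,
# the second is malformed on purpose to show it gets filtered out.
raw = '''[
  {"id": "ytc_UghOOeNz8r2vhXgCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_bad_example", "responsibility": "martians",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''

print([r["id"] for r in validate_records(raw)])  # only the first record passes
```

Validating against a fixed vocabulary catches the most common LLM coding failure, an out-of-codebook label, before it reaches the stored results.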