Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As Asimov showed in his short story [Reason](https://en.wikipedia.org/wiki/Reason_(short_story)), humans could well become obsolete once they are no longer as well suited to a task as an AI is. "Cutie knew, on some level, that it'd be more suited to operating the controls than Powell or Donavan, so, lest it endanger humans and break the First Law by obeying their orders, it subconsciously orchestrated a scenario where it would be in control of the beam." We will be treated like children in the best-case scenario for humanity.
reddit · AI Bias · 1438016180.0 · ♥ 8
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | ai_itself                  |
| Reasoning      | consequentialist           |
| Policy         | none                       |
| Emotion        | fear                       |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_cti1yju", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthnoeb", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthxc0i", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_cthtjt1", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cthrpzb", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
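The raw response is one JSON array covering a whole batch of comments, so recovering the codes for a single comment means indexing the array by `id`. A minimal sketch of that lookup, using the exact response shown above (the field names and ids come from the source; the variable names are illustrative):

```python
import json

# The raw LLM response as returned for the batch (copied verbatim from above).
raw = (
    '[{"id":"rdc_cti1yju","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},'
    '{"id":"rdc_cthnoeb","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"rdc_cthxc0i","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"rdc_cthtjt1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},'
    '{"id":"rdc_cthrpzb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)

# Index the batch by comment id for O(1) per-comment lookup.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw)}

# The comment coded above has id rdc_cthxc0i; its entry matches the table.
code = codes_by_id["rdc_cthxc0i"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → ai_itself consequentialist none fear
```

This mirrors how the coding-result table is derived from the batch response: one array entry per comment, matched on `id`.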