Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Generational" learning is simply fascinating. There's an example of a generational algorithm programing a FPGA to do a specific task with as few components as possible, the end result looked "on paper" as though it should be non-functional, including having no internal connection between certain parts but when the FPGA was programed that way it worked. It turns out the algorithm "discovered" certain unique properties of that specific FPGA by random permutation, and since using those "features" resulted in a functional circuit that used less components, doing so became a favored output. IIRC, it was so tuned to that specific FPGA that substituting an "identical" (same type) one resulting in a non functioning circuit.
reddit · Cross-Cultural · 1539202780.0 (2018-10-10 UTC) · ♥ 3
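The selection dynamic the comment describes (keep whatever works, and among working candidates prefer the one using fewer components) can be sketched as a minimal generational algorithm. This is an illustrative toy, not the actual FPGA experiment: the genome, fitness function, and all parameter values below are assumptions chosen only to show the "functional first, then fewest components" preference.

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=60, seed=0):
    """Minimal generational loop: keep the fittest half each generation,
    refill the population with single-bit-flip mutants of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # random permutation: flip one bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for "functional circuit with the fewest components":
# bit 0 set = "functional" (dominant reward); each set bit = one component used.
def fitness(genome):
    functional = genome[0] == 1
    return (100 if functional else 0) - sum(genome)

best = evolve(fitness)
```

Because functionality dominates the score, the population first locks in working candidates and only then prunes "components", which is how quirky but efficient solutions end up favored.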
Coding Result
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: approval
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_e7j0k83","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_e7izd32","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},{"id":"rdc_e7j8owp","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"rdc_e7j43jz","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"rdc_e7j52vu","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"}]