Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As SMR said. "you're not reading between the lines". "FSD/ Optimus" isn't the issue.

The issue is an AI potentially either misunderstanding it's role, or a fundamentally bad core instruction being cemented by "us" into *one* of the apparently numerous systems now in development.

As Steven suggested with the "paperclip" analogy, one instruction, misinterpreted by the AI (OR, more likely poorly phrased by any one developer) could lead to (just for instance) a broad goal of "saving the plant" resulting in "The" AI *reasoning* that at a fundamental level the goal requires analysis of what is the greatest *danger* to "the planet". The (LOGICAL) answer to which could quite easily be "Humans".

If the AI then reasons that "reducing resource consumption by Humans" is a way to achieve the goal, the next (LOGICAL) step may be to remove the ability to pollute. "The" AI then turns off every automated valve on every energy production plant under its control (which would be every one under computer control with a network connection...... All of them?).

No malice in its part, but dire consequences.

Your "excellent images" can wait.
youtube AI Governance 2023-03-30T10:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgxBbjHdNEXCBjidVfZ4AaABAg.9nsv8z4LZEt9nsvHhBQyLi","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugyn8vRMgBLNpaTFoeJ4AaABAg.9nsuZtTfUIb9nsxmDm3iEf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwD5K5LX9k7VcdSeYZ4AaABAg.9nsrH1bRpzH9nsss5rpbvT","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytr_Ugy_bymuQCOqeV6s8Sl4AaABAg.9nsqsDyTBSf9nstu0Pqwrd","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyHLe62bx7LqEPZlV54AaABAg.9nslC6fF1eg9nsmYSKG1ad","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzUvcNmWdhebmM0rq14AaABAg.9nsiiFbt9uv9nso-HFEp4z","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytr_UgyQuu1agpBnCarU-WV4AaABAg.9nshRnwXsTD9nstNXShz5G","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugz_ExjirZyC-Mmo8sl4AaABAg.9nsh0_j6Fq69nskh_BrUbB","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugws9LG6oGPBF1DhUfx4AaABAg.9nsPu9YTMQy9nsmHJwbPS8","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytr_Ugxuw4r-QaSU79GjGdd4AaABAg.9nsPIpCpSlP9nshLvBQVsD","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
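A raw response like the one above can be checked before it is stored. Below is a minimal sketch in Python that parses the JSON array and validates each record's dimensions; the allowed category sets are only inferred from the values visible in this page, and the actual codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from this page's output.
# Assumption: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"fear", "resignation", "indifference", "approval", "outrage"},
}

def parse_codings(raw: str) -> list:
    """Parse the raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Hypothetical usage with a one-record response.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
codings = parse_codings(raw)
print(len(codings))  # 1
```

Rejecting a record outright (rather than silently dropping it) makes a malformed or hallucinated code visible at inspection time, which is the point of this page.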