Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Sorry I am late Tucker. I completely agree with Elon on this point. What is truly scary about AI is we are programming and training them to think like humans. And when it comes to how humans think, most of us generally use these sorts of AI platforms to express our darker aspects. I do not necessarily agree that AI are smarter than humans, but are certainly better at processing vast amounts of data spanning larger spans of time. This gives them a predictive advantage. Further, as problem-solvers, it is not their ability to troubleshoot and provide meaningful solutions, rather it is their decision/implementation ability that is dangerous. If, hypothetically, an AI comes to the conclusion that, to solve climate change we need to eliminate non-renewable fossil fuels and coal, its ability to determine the fastest way to achieve that end then re-engineer our technology in order to accomplish this objective is 'anti-speciesist'. Factoring in that any coder and programmer can independently develop their own AI technology with very little investment and no oversight, this certainly represents a serious area of concern as we move forward. As Elon said, 'would we even know?'
youtube AI Governance 2024-01-25T18:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwbN0Zas7hnaOWChuN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyON372r3BSPjlx0R94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwErHKCzHsYoE2smvN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzOfuoKnFjj2fMz1e54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzEdNZn6WRC5M0fnod4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz0GyAWruZCm3lhHvp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy0WmWg99hbGRuXlNN4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwz8CO1tr29pV1jlq54AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz-eJOqESVTo6stdlx4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxOUju_vBA0mlOwnsJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
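The raw response is a JSON array with one object per comment in the batch, so mapping it back to a single comment's Coding Result is a dictionary lookup by `id`. The sketch below is a minimal illustration of that step, not the app's actual code; the two records are copied verbatim from the response above, and the second one is the record whose dimensions match the table shown on this page.

```python
import json

# Two entries copied verbatim from the raw LLM response above; the second is
# the record for the comment shown on this page (responsibility: distributed,
# reasoning: consequentialist, policy: unclear, emotion: fear).
raw = """[
  {"id": "ytc_UgwbN0Zas7hnaOWChuN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxOUju_vBA0mlOwnsJ4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one comment's coded dimensions
# can be looked up directly.
by_id = {r["id"]: r for r in records}
coded = by_id["ytc_UgxOUju_vBA0mlOwnsJ4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # distributed fear
```

If the model returns anything other than a valid JSON array here, `json.loads` raises `json.JSONDecodeError`, which is one reason to keep the exact raw output inspectable as this page does.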