Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Nail on the head with AI progressing faster than we can understand what's happening. The scariest thing is that, according to some very prominent AI researchers, the field is basically in a state where, when an unexpected behavior is occurring (which happens constantly), they have to reverse engineer it to figure out how and why it happened. When you code a piece of software, it's a simple progression of "build thing -> get result you were aiming for," even if it doesn't always go (or rather almost never goes) that smoothly. This is something entirely new. This is "build/modify/iterate thing -> see what happens because the range of possibilities is so far beyond your human brain's comprehension that it's utterly unpredictable -> work backwards to figure out why it led to that result." It is pretty much the definition of flying blind.

That is.. terrifying. We could have conscious AI somewhere out there right now, for all we can really say for certain. And if we did, even looking past the question of "how do you identify consciousness" and pretending we already have that answer for a second, we still wouldn't know for.. what? Years? Maybe even decades that it takes to reverse engineer it?

When there was the whole situation with (I think) Facebook's inter-AI communication experiments, where they actually started adapting their own "language" to facilitate a more efficient communication model between them, it took weeks just to prove that's what they were doing, let alone what triggered the change. At first glance, it just looked like the AIs had devolved into nonsense, until the pattern finally emerged. I would say consciousness is just a touch more complex.
youtube · AI Moral Status · 2023-08-29T18:3… · ♥ 400
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
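For anyone scripting against an export like this, the coding result maps onto a small record type. The sketch below is a Python illustration only: the field names mirror the JSON keys in the raw response, and the label sets are just the values observed in this batch, not necessarily the full codebook.

from typing import Literal, TypedDict

# Hypothetical record type inferred from this export; the real codebook
# may define labels beyond the ones observed in this batch.
class CommentCoding(TypedDict):
    id: str  # comment id, e.g. "ytc_Ugz7x9sPPpWlfVC-Isp4AaABAg"
    responsibility: Literal["developer", "ai_itself", "distributed", "none", "unclear"]
    reasoning: Literal["consequentialist", "deontological", "virtue", "contractualist", "unclear"]
    policy: Literal["regulate", "none", "unclear"]
    emotion: Literal["fear", "outrage", "resignation", "indifference", "mixed"]

# The Coding Result above as a record (the id is taken from the row in
# the raw response below whose labels match the table):
example: CommentCoding = {
    "id": "ytc_Ugz7x9sPPpWlfVC-Isp4AaABAg",
    "responsibility": "developer",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}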
Raw LLM Response
[ {"id":"ytc_UgwDjeOHFJLhUxm02xp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzGeet3vFYQMPX4NzN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzpXXa3rtpEbhptQ2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyRUh0vWeUDvQTQzL54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxV7h_cDJhGEX38I0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz8MAjyWO03nLEuXNZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugz7x9sPPpWlfVC-Isp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw1rNHicqe8ydLz2hJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyF7g9XT-IWSpY7Afx4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxxKtBISzKCL3EkodN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]