Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do use Gemini sometimes, and indeed it is helpful. But sometimes, as a programmer, typing out and thinking through what I want it to generate or solve means I have already figured out the question -- often finding the right question leads to the right answer. Recently I created a fairly complex algorithm that generates waves of enemies for my game. It has a pattern for choosing the level of each enemy, which defines the average and the spread of levels via a dome function, a category that defines what type of units to send (scouts, combat troops, guards, elite, etc.), and simple integer parameters like "wave level", "wave strength", and "quantity mult". So in essence the overarching AI can decide to send its ships at strength 5 (2 waves, lower quantity), level 4 (lower-level ships), pattern: Small (sends many small ones), and, because there are 3 players, use a ~3.0x quantity modifier (results in more units). That group of players will then receive 2 waves of a lot of small ships, with a delay between each wave. I like my algorithm; I managed to get exactly what I wanted. Anyway, writing down the details for the AI and refining it with a LOT of queries would have taken more time than writing the code and figuring out all of its details myself. LLMs are indeed a great tool, but they are a tool. A hammer doesn't build a house; people do. LLMs aren't even close to replacing humans, and it would be better for humanity if they never did.
youtube AI Responsibility 2025-10-01T13:2… ♥ 1
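The commenter's wave-generation idea can be sketched roughly as follows. This is a hypothetical reconstruction, not the commenter's actual code: the function names, the wave-count formula, and the choice of a sum-of-two-uniforms "dome" for the level distribution are all assumptions made for illustration.

```python
import random

# Hypothetical sketch of the wave-generation algorithm described in the
# comment. All names and formulas here are assumptions for illustration.

CATEGORIES = ["scout", "combat", "guard", "elite"]

def dome_level(avg: float, spread: float) -> int:
    # Averaging two uniform draws yields a triangular ("dome"-shaped)
    # distribution peaked at the average level.
    a = random.uniform(avg - spread, avg + spread)
    b = random.uniform(avg - spread, avg + spread)
    return max(1, round((a + b) / 2))

def make_waves(wave_level: int, wave_strength: int, pattern: str,
               quantity_mult: float) -> list[list[dict]]:
    # Assumed mapping: higher strength -> more waves (strength 5 -> 2 waves),
    # and the "small" pattern trades unit size for quantity.
    n_waves = max(1, wave_strength // 3 + 1)
    base_quantity = 4 if pattern == "small" else 2
    waves = []
    for _ in range(n_waves):
        count = round(base_quantity * quantity_mult)  # e.g. 3 players -> ~3.0x
        category = random.choice(CATEGORIES)
        wave = [{"level": dome_level(wave_level, spread=1.5),
                 "category": category}
                for _ in range(count)]
        waves.append(wave)
    return waves

# The comment's example: strength 5, level 4, pattern Small, ~3.0x quantity.
waves = make_waves(wave_level=4, wave_strength=5,
                   pattern="small", quantity_mult=3.0)
```

With these assumed parameters the call produces 2 waves of 12 small units each, with levels clustered around 4, matching the scenario the comment describes.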
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxKokwkdWwoDk9uvwd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyobSBcPctQdhrxrqd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFOEGXtvQf5UQqd-R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxU2qaMis4nmOCQiux4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy9VZFxo3Ndd75HY2F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx56XTcBLhT-jMHvSF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxfcE9gOocvvnqwg2d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJ34QILEtccp3sYh14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKuhg0TZTd1axBRIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxZGpV_0JHwS5n3UTB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]