Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "wow. awesome. I can't imagine that this dystopian nightmare is being implemented…" (ytc_UgyMYnB3D…)
- "First...What is the Intelligence ? After, you will understand their AI are not a…" (ytc_Ugwz1ABRC…)
- "I can't really be positive about that. History teaches us that what bilionaires …" (rdc_jd6vnci)
- "Just use “all-knowing” Gr0k. Why study anything at all? Useless PhD. Gr0k has th…" (ytc_UgzGotAiA…)
- "Automation absolutely destroyed the soul of the working classes - not just the l…" (ytc_UgyNaCN4K…)
- "@jethheriee ai can create pictures, not art / art is a combination of skill and em…" (ytr_UgwV-fOeP…)
- "When I got my bachelors degree in 2018 one dude yelled after he got his degree “…" (ytc_UgxRomQ1O…)
- "I wish more people would conduct these sorts of private experiments with LLMs. T…" (ytr_Ugx6vZjGS…)
Comment
Yeah... Probably going to be a while before an AI/LLM is going to be able to debug and optimize the code-generation output of a C compiler for a custom CPU architecture or similar...
Though, for small-scale things, I had before used genetic algorithms for some tasks, mostly "fine tuning". An example would be, say, feeding ASCII strings through a hash function, and then finding a hash multiplier that produced a low rate of collisions (as opposed to the traditional "pick the largest prime number less than 2^n" strategy).
The usual outputs had a higher density of 1 bits near the high end, becoming sparser towards the low end, but were otherwise pretty random; so, say, something like 0xFEDBA621 or similar, with an algorithm something like this (from memory, but similar idea):
```c
hi = 1;
while (*s)
    hi = ((uint32_t)((hi >> 32) + (*s++))) * 0xFEDBA621ULL;
h = (hi >> 32) & MASK;
```
Versus, say, the typical "human" solution:
```c
hi = 0;
while (*s)
    hi = (hi * 65521) + (*s++);
h = (hi >> 16) & MASK;
```
Here, say, one is optimizing for a target that has 64-bit registers, where a 32-bit widening multiply is significantly faster than a full 64-bit multiply (and a 32-bit multiplier has a lower collision rate than a 16-bit multiplier, ...).
For a genetic algorithm, one might merely tune the hash function or similar, whereas genetic programming might represent the program as a sequence of RISC-like instructions inside a very specialized VM. In my case this was often expressed as a mix of gray-coded immediate values and ECC, which tends to make it easier for the algorithm to converge on a useful answer; each logical 32-bit instruction word is represented as a 64-bit ECC'ed value, typically with invalid instructions decoded as NOPs, etc. It also tends to work better with a smaller register set (say, 8 or 16 registers, vs 32 or 64), and only for "very simple" tasks (otherwise it will often get stuck and not find anything useful). One then needs to manually turn the output into something actually usable (like a C function).
Granted, a genetic algorithm / genetic-programming is very different tech than an LLM...
It is also infrequent that a genetic algorithm comes up with something that I couldn't have just guessed in a small fraction of the time, and typically with less effort...
youtube · AI Jobs · 2024-01-14T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzNMfJ3LBTBLiJDdsV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwaucqwbM5OCnaQTXV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyzIKW0hoavYy5pyzd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNSe6KwQOIIj_XVVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzM0fuhJORJIopTsep4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwhBUOOljNwsarnIA54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxYYaLWvq6ktyq6-c54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx2A7E-dH-z3o-nElR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzrIWkyDIuNFOzsAnZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzlf-1PD6zmnNAbjz14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```