
# jllgp2

An experiment in linear genetic programming.

Actually my third such experiment.

## History

  1. Some time around 2009 I wrote a simple implementation in Vala. It was a virtual machine with 256 bytes of memory, some of which was memory-mapped to flow control, arithmetic, and input and output. Every instruction was a move, so no opcode was necessary: an instruction was just the source and destination addresses for the move (a rough sketch of such a machine appears after this list). The initial contents of memory were a copy of the virtual organism's genome; between the memory-mapped facilities and self-modification it had a tiny approximation of Turing-completeness. With a fairly standard genetic algorithm it was able to learn the difference between spam and email that I wanted to receive, but I didn't have the skill at the time to integrate it into an email client.
  2. In early 2012 I wrote a more complex implementation in C. The virtual machine used a more complex instruction set loosely inspired by MIPS. One of the qualms I'd had with the results of the first experiment was that I couldn't figure out how the evolved programs worked, and I suspected that having separate address spaces for code and data would alleviate this by avoiding self-modifying code. I also took some inspiration from how, in real biochemistry, there are a few processes which rearrange RNA, and made the location where the instructions from a given "gene" appear in the program address space independent of its position in the genome. I had some trouble finding a fitness function that would prefer smaller programs but not at the expense of learning the relationship between example inputs and outputs. I used an algorithm based on Pareto efficiency rather than a single fitness function, but it still created too much selection pressure on genome size to evolve anything complex.
  3. (This one.) A reimplementation of the virtual machine from experiment 2 in Rust with a new selection algorithm. This one will take inspiration from the evolution of mimicry in real life: the overall model will be a bit like an oversimplified ecosystem in which the example inputs and outputs are some poisonous species, some of the evolving organisms are predators which need to sustain themselves without eating poison, and the rest are prey which need to avoid getting eaten (a sketch of one feeding round appears below).
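
To make the move-only design from experiment 1 concrete, here is a minimal Rust sketch of that kind of machine. It is a reconstruction, not the original Vala code: the memory-mapped addresses, their meanings, and the handling of missing input are assumptions made for illustration, and the arithmetic mappings are omitted.

```rust
const MEM_SIZE: usize = 256;

// Illustrative memory-mapped addresses (assumed, not from the original).
const MMAP_JUMP: u8 = 0xFD;   // writing here sets the instruction pointer
const MMAP_INPUT: u8 = 0xFE;  // reading here consumes the next input byte
const MMAP_OUTPUT: u8 = 0xFF; // writing here emits an output byte

struct MoveMachine {
    mem: [u8; MEM_SIZE],
    ip: u8, // instruction pointer, wraps around the 256-byte space
}

impl MoveMachine {
    /// The genome is copied directly into memory at start-up.
    fn new(genome: &[u8]) -> Self {
        let mut mem = [0u8; MEM_SIZE];
        for (slot, byte) in mem.iter_mut().zip(genome) {
            *slot = *byte;
        }
        MoveMachine { mem, ip: 0 }
    }

    /// Execute one instruction: read a (source, destination) address pair
    /// and perform the move, honouring the memory-mapped addresses.
    fn step(&mut self, input: &mut impl Iterator<Item = u8>, output: &mut Vec<u8>) {
        let src = self.mem[self.ip as usize];
        let dst = self.mem[self.ip.wrapping_add(1) as usize];
        self.ip = self.ip.wrapping_add(2);

        let value = match src {
            MMAP_INPUT => input.next().unwrap_or(0),
            _ => self.mem[src as usize],
        };
        match dst {
            MMAP_OUTPUT => output.push(value),
            MMAP_JUMP => self.ip = value,
            _ => self.mem[dst as usize] = value,
        }
    }
}
```

Because the genome, the working data, and the executing code all live in the same 256 bytes, a mutation anywhere in the genome can change behaviour, which is what made the evolved programs so hard to read and motivated the split address spaces in experiment 2.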
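Below is a rough Rust sketch of one feeding round in the mimicry-inspired scheme from experiment 3. The type names, the energy bookkeeping, and the stubbed `appearance_of` and `wants_to_eat` functions are assumptions for illustration; in the actual project the decisions would come from running the evolved programs on the virtual machine.

```rust
/// What a predator perceives about a potential meal: here just a byte string.
type Appearance = Vec<u8>;

struct Organism {
    genome: Vec<u8>,
    energy: i32,
    alive: bool,
}

/// The appearance an organism presents, e.g. the output of its evolved
/// program for a probe input (stubbed here as its raw genome).
fn appearance_of(org: &Organism) -> Appearance {
    org.genome.clone()
}

/// Placeholder for running a predator's evolved program to decide whether
/// to eat what it is looking at.
fn wants_to_eat(_predator: &Organism, _seen: &Appearance) -> bool {
    true
}

/// One feeding round. `poison` holds the training examples: anything that
/// looks like them is harmful to eat. Predators gain energy from safe meals
/// and lose it on poison; prey "succeed" by still being alive afterwards.
fn feeding_round(predators: &mut [Organism], prey: &mut [Organism], poison: &[Appearance]) {
    if prey.is_empty() || poison.is_empty() {
        return;
    }
    for (i, predator) in predators.iter_mut().enumerate() {
        // Alternate between presenting a genuinely poisonous item and a prey.
        if i % 2 == 0 {
            let item = &poison[(i / 2) % poison.len()];
            if wants_to_eat(predator, item) {
                predator.energy -= 5; // ate poison
            }
        } else {
            let target = &mut prey[(i / 2) % prey.len()];
            if target.alive && wants_to_eat(predator, &appearance_of(target)) {
                predator.energy += 3; // a safe meal for the predator...
                target.alive = false; // ...at the prey's expense
            }
        }
    }
    // Afterwards, predators below an energy threshold and eaten prey drop out;
    // survivors reproduce with mutation (not shown here).
}
```

The hope is that this replaces a single hand-tuned fitness function with co-evolutionary pressure: predators are rewarded for telling the training examples apart from everything else, while prey are rewarded for fooling them.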