## Monday, June 22, 2015

### Evolution in a Computer

As I already mentioned in The First post, my main interest is evolutionary computation, so I decided to write about some of its basics.

## Optimisation

Since evolution is in fact a sort of optimisation, what is optimisation, actually? Generally, optimisation is the process of developing or tuning a solution to some problem so that the solution is as good as we can get.

Example: suppose we want to find the minimum (e.g. because it is the money we have to spend) of the following function:

$$f(x) = \mathrm{e}^x\left(x^3 - 3x^2 + 5x - 5\right) + 6$$
In this case it is super-easy - we can just compute the derivative
$$f'(x) = \mathrm{e}^x x \left(x + 1\right) \left(x - 1\right)$$
and then find the points where it equals zero
$$x \in \left\lbrace -1, 0, 1 \right\rbrace$$
and from them pick the one with the lowest value
$$f(-1) \approx 0.849688 \quad f(0) = 1 \quad f(1) \approx 0.563436$$
... and the winner is... $$x = 1$$!
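
As a quick sanity check, the candidate points can also be evaluated numerically. A throwaway sketch in Python (the names are mine, not anything standard):

```python
import math

def f(x):
    """The example objective function from above."""
    return math.exp(x) * (x**3 - 3*x**2 + 5*x - 5) + 6

# Evaluate f at the three stationary points found from f'(x) = 0
candidates = [-1, 0, 1]
values = {x: f(x) for x in candidates}
best = min(candidates, key=f)  # the candidate with the lowest f-value
```
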

### Black-Box Optimisation

The previous example was nice, because we had the analytical description of the function (and it was very "user-friendly"). But what if $$f(x)$$ is not differentiable (e.g. it's discrete)? Or what if $$f(x)$$ is a black-box and we know nothing about it?

We might want to systematically try some points, e.g. on a grid, or pick them randomly and try our luck... but what if the process of getting $$f(x)$$ is expensive? What if $$x$$ is a set of parameters of an aerodynamic body and evaluating $$f(x)$$ means physically manufacturing the body and testing it in a wind tunnel? It is clear that we have to find some clever way of exploring the space of $$x$$.
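
The naive random baseline can be sketched in a few lines of Python. Treat `black_box` below as a stand-in for the expensive, opaque evaluation (it is a made-up example, of course):

```python
import random

def black_box(x):
    # Hypothetical stand-in for an expensive, opaque evaluation
    return (x - 0.3) ** 2

def random_search(f, low, high, budget, seed=0):
    """Try `budget` random points in [low, high] and keep the best seen."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = rng.uniform(low, high)
        fx = f(x)  # each call is assumed to be costly
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x, fx = random_search(black_box, -1.0, 1.0, budget=100)
```

With an expensive $$f$$, every one of those 100 evaluations hurts, which is exactly why smarter exploration strategies are worth the trouble.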

## Evolutionary Algorithms

Believe it or not, natural evolution is a process of optimisation. What are the $$x$$ and $$f(x)$$ then? Well, in nature, $$x$$ is the DNA that determines the organism (very simplified) and $$f(x)$$ is how successful the particular organism is at surviving and reproducing in the given environment. So natural evolution optimises organisms to survive and reproduce well in the environment; it seeks the "solution" to life in that environment.

Nature is, generally, quite successful in finding good solutions. We mimic nature all the time (insect-like robots, bat-like sonar...) and evolutionary algorithms (EAs) are no exception.

The general structure of an EA is as follows:

```
initialise a population of solutions P
while the stopping condition is not met:
    compute f(x) for each x in P
    select parents from P
    generate children from the parents
    mutate the children
    put the children back into P
    possibly remove (kill) some individuals from P
return the individual with the best f(x) encountered
```


### Genetic Algorithm

The Genetic Algorithm (GA) is one particular type of EA. There are others, but the GA is probably the most famous and the one most often extended and built upon. So how does the GA fit into the EA structure described above?

In a GA, the genotype of a solution, or $$x$$, is a string of 0s and 1s of a predefined fixed length (which depends on the problem we are solving).

The population is a fixed-size collection of solutions (the precise word would be genotypes, but most of the time the difference is not important, so I'll say solution or individual).

Selection is the crucial part of the algorithm; it's the part "where it all happens". Selection's job is to pick individuals from $$P$$ that are better than the others, but in a non-greedy way. One of the most widely used selection strategies is tournament selection.
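
Tournament selection fits into a few lines of Python. A sketch (the function name is mine; `fitness` is assumed to map an individual to its quality, with higher being better here):

```python
import random

def tournament_select(population, fitness, k=2, rng=random):
    """Pick k individuals uniformly at random, return the best of them.

    Better individuals win more often, but weaker ones still get a
    chance whenever no strong contestant is drawn, so the selection
    stays non-greedy.
    """
    contestants = [rng.choice(population) for _ in range(k)]
    return max(contestants, key=fitness)
```

The tournament size `k` controls the selection pressure: the larger the tournament, the harder it is for weak individuals to slip through.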

Generating children from parents is done by crossover. In general, crossover is a process of combining the genotypes of two (or more) parents to create the genotype of one (or more) children. One of the simplest (and also most widely used) crossover strategies is one-point (or single-point) crossover.
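
One-point crossover, sketched in Python (the helper name and the list-of-bits genotype representation are just illustrative choices):

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    """Cut both parents at the same random point and swap the tails."""
    assert len(parent1) == len(parent2)
    point = rng.randint(1, len(parent1) - 1)  # cut strictly inside the string
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```
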

Mutation is the second crucial part of the algorithm (together with crossover). It's the part where new genetic material is created. Mutation consists of a small random change in the genotype. The simplest mutation is to randomly pick one (or more) bits and flip their values.
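
A per-bit flip mutation can be sketched like so (`p_flip` is the per-bit mutation probability; the name is mine):

```python
import random

def bit_flip_mutation(genotype, p_flip=0.01, rng=random):
    """Flip each bit independently with probability p_flip."""
    return [bit ^ 1 if rng.random() < p_flip else bit for bit in genotype]
```
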

The last two steps, putting the children back into $$P$$ and killing some individuals in $$P$$, are usually bound together. In a so-called generational GA, the process of selecting parents, crossing them over and mutating the resulting children is repeated until the number of children equals the size of $$P$$; then $$P$$ is discarded and replaced with the generated children. In a so-called steady-state GA, on the other hand, only a few parents are selected (usually just enough for crossover to be performed), producing only a few children (how many is determined by the crossover method). These few children are mutated and then put back into $$P$$ such that the size of $$P$$ remains the same, which means some individuals must be killed. The strategy for choosing those individuals is called the replacement strategy and has to be chosen too. A very simple strategy is to just kill the worst individual(s).

Both crossover and mutation are usually applied probabilistically, i.e. there is a predefined probability (chosen by the user) that two parents undergo crossover (if they do not, they are usually just copied, so the children are identical to their parents), and another probability (also chosen by the user) that a child undergoes mutation. The crossover probability is usually rather high (0.8 is a common value), while the mutation probability is usually very low (e.g. 0.01 per bit).
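
Putting the pieces together, here is a toy generational GA on the classic OneMax problem (maximise the number of 1-bits). Everything in it, including the parameter values, is just an illustrative choice, not a recipe:

```python
import random

def onemax(bits):
    """Toy fitness: the number of 1-bits in the genotype (higher is better)."""
    return sum(bits)

def generational_ga(n_bits=20, pop_size=30, generations=50,
                    p_cross=0.8, p_mut=0.01, seed=0):
    rng = random.Random(seed)

    def select(pop):  # binary tournament selection
        a, b = rng.choice(pop), rng.choice(pop)
        return a if onemax(a) >= onemax(b) else b

    # initialise a random population of bit strings
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=onemax)
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = select(pop), select(pop)
            if rng.random() < p_cross:  # one-point crossover
                cut = rng.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:  # no crossover: children are copies of the parents
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):  # per-bit flip mutation
                children.append([b ^ 1 if rng.random() < p_mut else b
                                 for b in child])
        pop = children[:pop_size]  # the old generation is discarded
        best = max(pop + [best], key=onemax)  # remember the best ever seen
    return best

best = generational_ga()
```

OneMax is trivial, of course, but the loop above is exactly the generational scheme described in this section.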

### Final words

The GA is definitely not the only EA in town, and I have only scratched the surface. Evolution in a Computer is probably going to become a little mini-series covering some aspects of evolutionary computation. In the next post I'll write about other EAs that are better suited to continuous problems. And also await a post about evolutionary programming! But it is really too early for that one.

## Thursday, June 11, 2015

### The First

So this is the very first post...

... so I'll start with a very brief introduction of myself. My name is Jan Žegklitz. I'm a PhD student at Czech Technical University in Prague. I study artificial intelligence, evolutionary algorithms and programming in particular. I also like programming in general, trying new languages and stuff...

In this blog I'm going to write about various things from my academic/hobby work. The nearest big one is the GECCO conference in Madrid, Spain, which starts in exactly one month and where I'm presenting two papers:

• Model Selection and Overfitting in Genetic Programming: Empirical Study (as a poster at the main conference)
• Symbolic Regression by Grammar-based Multi-Gene Genetic Programming (at the Student Workshop)
I'll write more about these later...