## Introduction

I have spent this summer designing and writing a virtual machine that no one will ever use. I have written roughly 6000 lines in an assembly language that no one else will ever learn. And it has been absolutely awesome.
This machine has been named FML for reasons described in the background section. The machine has been designed to be easy to implement and easy to use with a simple but powerful instruction set.
As of now the machine has been implemented in Java and I have also written a surprisingly usable assembler for the machine.

This post only gives a brief explanation of how the machine works.

And here is a little demo I wrote for it:

I hate this demo. I’m so ashamed of it. The music is not played by the VM! This is the first demo I have ever written and I know it’s not that good. I also decided to have the demo done by the start of the semester, so I had to scratch some plans for cooler effects. If you want to, you can go to its pouet site and down vote it 🙂

## Background story

Well. Once upon a time, not that long ago, I decided to write a virtual machine as the final project in my CS 101 class. So me and two of my friends set out on the ridiculous quest to write a working VM from scratch in SML.
Now this project was slightly overkill, and my friends were utterly confused and had little to no idea what I was talking about. We had to write the whole thing in SML, which is a slightly pointless and annoying functional programming language, and thus the name (for obvious phun-based reasons). And if any of you are one of those “Everything looks better in a functional language” people, you can go sod yourself. Anyhow, by the time the due date was up we had a fully functional assembler, a slightly working VM implementation and a ridiculously long report.

I was still quite happy with the general design of the VM, and I had this crazy notion that I was going to write my own programming language as a neat little summer project. But I needed some platform that the language was going to compile to. So I decided to redo the project in Java and “pimp” the general design of the VM and the ISA in order to make it slightly less sucky.

When I first tried to do this I started out by just writing a Java implementation of the VM and using the old SML assembler. I wrote the VM in accordance with the documentation we had written as part of the project. This turned out to be a bad idea, since the ISA we had specified in the documentation only vaguely corresponded to the actual output of the assembler (luckily for us, no one bothered to read the entire 37-page report). So I decided to fix those issues in the assembler. I opened up the SML code for the assembler and immediately closed it again, since the code was completely unreadable. So I decided to build the VM using the output of the assembler as the specification instead. Halfway through I gave up on this idea, since I realized that it was stupid, the assembler was crap, and the ISA had some shortcomings. So I decided to redesign the ISA from scratch and write a complete Java implementation of both the VM and the assembler.

And that is the story of how the world’s most pointless VM came to be.

## General

The entire FML machine has a retro feel to it and is fairly minimalistic. This is partially due to the fact that the only real experience I have with writing assembly code is for the C64, and I also feel that modern CPU architectures are just not as pretty as they used to be. They literally just try to squeeze as much crap as possible onto the chip. This implementation of the FML machine works with 32-bit signed integers (mostly due to Java not supporting unsigned integers).

Another reason is that the main goal of this VM is that it should be fun to work with. This is why I have tried to keep the instruction set to a minimum, while at the same time trying not to sacrifice usability for minimalism. I want people who try to do stuff with the FML machine (and by people I mean only me) to be able to just start off writing stuff as quickly as possible, without having to read through a bunch of documentation and other annoying crap.

### The architecture

The architecture is very straight forward. FML only consists of a few basic components:

• Two general purpose registers (referred to as X and Y)
• One general purpose stack (referred to as S)
• One program counter (containing a jump stack)
• And a RAM memory (wittily referred to as the RAM)

A very general flowcharty picture of the VM

The FML has 16 IRQ lines, each of which has a corresponding memory address (the addresses are implementation dependent). When the machine receives an interrupt request it will do a subroutine jump to the address stored in the memory location corresponding to the IRQ line being used.
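The IRQ dispatch described above is simple enough to sketch in a few lines of Java. This is just my illustration; the vector table base address, field names and RAM size are assumptions, not the actual implementation’s values.

```java
// Hypothetical sketch of FML's IRQ dispatch. The vector table location
// (IRQ_VECTOR_BASE) and the RAM size are my own assumptions.
public class IrqDemo {
    static final int IRQ_VECTOR_BASE = 0xFF00; // assumed address of the 16 IRQ vectors

    int[] ram = new int[0x10000];
    int pc = 0;
    java.util.Deque<Integer> jumpStack = new java.util.ArrayDeque<>();

    // On IRQ line n (0..15): subroutine-jump to the address stored in the
    // memory location corresponding to that line.
    void interrupt(int line) {
        jumpStack.push(pc);               // save the return address on the jump stack
        pc = ram[IRQ_VECTOR_BASE + line]; // jump to the handler address
    }
}
```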

As of now the machine uses no virtual addressing, simply because I have not had any need for it. But it could easily be implemented.

### ISA

The ISA is, as previously mentioned, designed to be easy to use and to understand.

Each instruction word is 16 bits. The total instruction length can be either 16, 48 or 80 bits, since each instruction may use up to two 32-bit constants. The memory is not word aligned.

In each instruction, the bits of the instruction word correspond to the following:

• bits 0-3: The first argument
• bits 4-7: The second argument
• bits 8-11: Specifies whether or not this is an Operation instruction and in that case which operation.
• bits 12-15: Specifies whether or not this is an Action type instruction and which sort of action it is.

The four bits specifying each of the two arguments are decoded as follows:

• bits 0-1:
00: The Stack
01: X register
10: Y register
11: Numeric constant
• bit 2:
0: Use argument as value
1: Use argument as an address (dereference it)
• bit 3:
0: This argument is used
1: This argument is not used.

Whether or not the argument is to be read from or written to depends upon the instruction itself.
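To make the layout concrete, here is a hedged Java sketch of how such a 16-bit instruction word could be decoded. The class and method names are mine; only the bit positions come from the lists above.

```java
// Sketch of decoding the 16-bit FML instruction word. Names are my own;
// the bit layout follows the description in the post.
public class Decode {
    public static int firstArg(int word)    { return word & 0xF; }         // bits 0-3
    public static int secondArg(int word)   { return (word >> 4) & 0xF; }  // bits 4-7
    public static int opField(int word)     { return (word >> 8) & 0xF; }  // bits 8-11
    public static int actionField(int word) { return (word >> 12) & 0xF; } // bits 12-15

    // Decoding one 4-bit argument field:
    public static int target(int arg)      { return arg & 0b11; }         // 00=S, 01=X, 10=Y, 11=constant
    public static boolean asValue(int arg) { return (arg & 0b100) == 0; } // bit 2
    public static boolean used(int arg)    { return (arg & 0b1000) == 0; }// bit 3
}
```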

A slightly more detailed description of the ISA can be found here.

#### Operation type instructions

The operation type instructions are the instructions that do stuff with things, such as adding, subtracting and comparisons. All operations apart from increment (INC) and decrement (DEC) push their result onto the stack. All operations take at least one argument.

#### Action type instructions

These are the operations that do stuff to the VM, such as moving data, jumping and conditional branches, and that is actually pretty much all they do. FML has two different types of conditional branches: the skip operations and the jump-on-stack operations. The skip operations are a bit weird and I don’t really know why I decided to implement conditional branching like that, but it has turned out quite nice. A skip operation takes two arguments, and if the condition (which depends on the instruction) holds, the machine will skip the next instruction. This conditional skip method of doing branches has actually been a fairly good design choice. Skip branching works great for branches which only require one condition to be tested, which has been the majority of the branches in the programs I have written. There is also another way of doing conditional branching: jump on zero, jump on one, subroutine jump on one and subroutine jump on zero. They execute the jump if the top element of the stack is one or zero, depending on the instruction. These are slightly neater looking, especially when the condition is more complex.

## The Assembler

In conjunction with the actual virtual machine I also wrote an assembler, which is fairly trivial but quite usable. The assembler has a small number of nice features, such as handling file inclusions in a way that avoids conflicts, the ability to declare arrays, and giving somewhat sensible error messages.

Below follows a short example of assembly code for a subroutine which puts a pixel on the screen and does bounds checking for the coordinates.

// x

Hopefully you can understand what the code does. Arguments starting with $ are references to a memory address. So for example, MOV $std.screen.color $s
moves what is at the memory address std.screen.color (which will be resolved to a number by the assembler) to the address which is at the top of the stack.

Lines starting with # are labels; they will be resolved into the address of the operation immediately following them. Lines starting with @ are pointers/variables; they will be assigned an arbitrary address. A declaration of a pointer can be augmented with a plus sign and a number, such as @name+100, which tells the assembler not to use the 100 addresses after the address assigned to name. This is used to create arrays.
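As an illustration of the @name+100 mechanism, here is a hypothetical sketch of how such declarations could be resolved. The post does not describe how the real assembler picks addresses, so the naive bump allocator below is purely my assumption.

```java
// Hypothetical sketch of resolving "@name+N" pointer declarations: each
// pointer gets an address and +N reserves the N addresses after it.
// The bump-allocation strategy is my own guess, not the real assembler's.
import java.util.HashMap;
import java.util.Map;

public class PointerTable {
    private final Map<String, Integer> symbols = new HashMap<>();
    private int nextFree = 0; // naive bump allocator

    // declare("name", 100) behaves like "@name+100": reserve 1 + 100 cells.
    public int declare(String name, int extra) {
        int addr = nextFree;
        symbols.put(name, addr);
        nextFree += 1 + extra;
        return addr;
    }

    public int resolve(String name) { return symbols.get(name); }
}
```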

I will not go into any more detail on the operation of the assembler. You can spam me if you want more info on how it works.

## Conclusion

In contrast to most of my other projects, this endeavor has been a great success. Both the assembler and the VM work really well, and I have had no need to make any major changes to the original design. But the most important thing is that the FML machine is actually easy to write code for. The code is easy to read, and the journey from idea to code is not that long at all. It has been really interesting to learn to program a language of my own design. No naming or indentation standards. No one to ask for help. No libraries. As of now I feel that the FML machine and its assembler are definitely in a good enough state for me to start writing a language for it, without having to worry about any shortcomings of the target machine. There are some minor issues with the Java implementation of FML, mostly concerning its performance. The only real thing left to do before starting work on my own language is to write a good disassembler and runtime monitor.

Before you all start complaining about the flaws and obvious stupid design choices, I just want to make it clear that I have no real experience with any of this kind of thing.

If you have any questions just fire away! All the code for the project is available at https://github.com/TheGrandmother/jFML.

I have for the last month or so been working on a neat little self-learning asteroids game. The game itself is very basic: there are a few ships flying around in 2D trying to shoot each other. Which in and of itself is not super exciting, but the neat part is that the ships learn how to behave without the need of any human intervention. This is accomplished through the use of a genetic algorithm.

All the source code(Java) for this project can be found here.

Let’s start with the basics.
First of all, the space in which everything happens is a closed rectangle, so when things collide with the walls they either die or bounce. There are only three kinds of objects: Ships (or Crafts as they are referred to in the code), Missiles and Asteroids. Since the ships are the objects where the cool things happen, I will focus mostly on them. I have also found that including asteroids yields less cool results, so we won’t discuss them or use them in any examples.

### The Ships

Let’s go into some detail about how the ships work.

#### Input

The only input that the ship receives is what it can see, which is three cones: one looking straight forward, one to the left and one to the right. In each of these cones only one thing can be chosen as input. Even though several objects can reside in a cone, only the object with the highest priority will be chosen. In practice this is fairly boring, since we only have ships, and we let the missiles have 0 priority so they get ignored. So the ship can at any one time see at most three different objects.
For each of the objects seen, only three attributes are recorded: the type of the object, its heading (relative to the observer) and whether or not it is within firing distance.

The game in debug mode.

#### The AI

The AI is very simple in its construction. It simply maps an input to a decision. Each decision is just a set of actions such as move, accelerate, fire, etc. This is represented as a hash-map using the inputs as keys and the decisions as values. When the ship encounters an input it has not previously seen, a random decision will be mapped to that input. Each ship has its own AI.
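The decision map described above can be sketched in a few lines of Java. This is my own minimal reconstruction, not the code from the repository; the input type and the number of possible decisions are placeholders.

```java
// Sketch of the ship AI: a hash-map from inputs to decisions, where an
// unseen input gets a random decision assigned on first lookup.
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class ShipAI {
    private final Map<String, Integer> decisions = new HashMap<>();
    private final Random rng = new Random();
    private final int numDecisions;

    public ShipAI(int numDecisions) { this.numDecisions = numDecisions; }

    // Returns the decision for this input, inventing a random one if unseen.
    public int decide(String input) {
        return decisions.computeIfAbsent(input, k -> rng.nextInt(numDecisions));
    }
}
```

Once an input has been seen, the ship always makes the same decision for it; only crossover (below) changes the map.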

### The Genetic Algorithm

So, on to the fun part. The genetic algorithm here is the thingamagoop which makes the ships’ AI get better (or it’s at least supposed to do that). This is done in a fairly trivial way. The algorithm chosen for this project uses tournament selection, random crossover and a fitness function defined as the total score of the ship (the score accumulated through its entire life) divided by the age of the ship. The age of the ship is the number of times it has been in a tournament.

Let’s go through the algorithm step by step.

1. First a set of ships is created. This set will be referred to as the phenotypes.
2. Then a series of “warm-up rounds” are played. These are played in order to fill the decision hash-map with some entries before the actual tournaments get started.
3. A tournament is started.
4. Choose N distinct ships from the set of phenotypes.
5. Have these ships battle each other in several battles of epic proportions, with each of these N ships partaking in only one battle.
6. From these N ships, find the two ships with the highest fitness and the one ship with the lowest fitness.
7. Let the two best ships get on with some sweet sweet love making. The resulting offspring will have a combination of the two parents’ decision maps. This is accomplished through a random crossover.
8. Sacrifice the crappiest ship in public to set an example for the other ships and let the new offspring take its place.
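Steps 6–8 of the algorithm above can be sketched roughly as follows in Java. The types are simplified stand-ins for the real code; fitness is total score divided by age, as described, and the crossover picks each entry of the child’s decision map from one of the two parents at random.

```java
// Simplified sketch of tournament steps 6-8: select the two fittest and the
// least fit, breed the best two by random crossover, replace the worst.
import java.util.*;

public class Tournament {
    public static class Ship {
        public Map<String, Integer> decisions = new HashMap<>();
        public double totalScore, age;
        public double fitness() { return totalScore / age; }
    }

    static final Random RNG = new Random();

    // Random crossover: each entry of the child map comes from either parent.
    public static Ship breed(Ship a, Ship b) {
        Ship child = new Ship();
        Set<String> keys = new HashSet<>(a.decisions.keySet());
        keys.addAll(b.decisions.keySet());
        for (String k : keys) {
            Map<String, Integer> src =
                a.decisions.containsKey(k) && (!b.decisions.containsKey(k) || RNG.nextBoolean())
                    ? a.decisions : b.decisions;
            child.decisions.put(k, src.get(k));
        }
        child.age = 1; // so fitness is defined from the start
        return child;
    }

    // Replace the least fit contestant with the offspring of the two fittest.
    public static void evolve(List<Ship> contestants) {
        contestants.sort(Comparator.comparingDouble(Ship::fitness).reversed());
        Ship child = breed(contestants.get(0), contestants.get(1));
        contestants.set(contestants.size() - 1, child);
    }
}
```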

### Conclusion

Well. This project has turned out pretty crappy, for several reasons. First of all, I suck at coding. This project has been full of serious bugs, especially really dumb things in the code which make me wonder why my parents didn’t just say “No, he’s going to be shit at coding. Throw it away” when I was born. Also, the genetic algorithm has been really annoying. I have had a lot of problems getting it right, both due to bugs in the code and other stupid stuff such as poor choices of fitness functions.

#### Lessons learned:

• I suck at programming.
This was something I already knew, but this project has really shown me the extent of my incompetence.
• Be careful when you copy-paste your own code!!
Several serious bugs in the program have been due to me copy-pasting some loops and forgetting to change all of the details.
• Don’t do humpty-dumpty half-assed refactoring.
A lot of bugs have been introduced through me saying “ouu, I should change all of this to do things better”, changing only some of it and ending up breaking things.
• Fix ugly code when you realize that it’s ugly, not later when you have forgotten how that ugly code actually works.
• Never underestimate how annoying genetic algorithms can be. You might have to tweak seven thousand different parameters to get good results.

### And here is a little video.

It looks cooler than it is.

ALL WHICH IS WRITTEN HERE IS CRAP!
FOR A SLIGHTLY LESS CRAPPY DESCRIPTION
https://thegrandmotherblogg.files.wordpress.com/2014/05/document.pdf

In this post I will introduce a little method I figured out in order to “measure complexity”. I know that this method is most likely nothing new or useful by any stretch of the imagination, but I had a lot of fun playing with it and trying to make it work. I came up with this method after having written a cellular automaton. The problem was that there were 2^64 different rules, and some of them produced really cool and complex patterns, but most were just random and boring. So I set out on the futile quest to find a way to identify these cool patterns automatically. The fact that the patterns I was trying to find were the patterns that made me go “ouu, that is awesome!” when I looked at them did not really aid in finding an algorithm to find them. So essentially I was trying to find a needle in a haystack when I had no real idea of what a needle was.

This project has been an epic one, filled with elation and despair (mostly despair). Although I have been aware that the measure was “decent” ever since I first tested it, I have run into so many roadblocks, and been so wrong so many times during this project, that I frequently thought about just giving up and banishing it to the crowded folder of failed ideas.

I will not publish any code used in this post, due to it being so ugly that if you were to wake up next to it after a night of heavy drinking you would spend the day crying in the shower, vowing never again to drink alcohol. But if any of you desperately want the code, just message me and I might consider it.

# Overview

With no reason whatsoever, I thought that the best way to start was to try to find a good measure of the “complexity”, and that one way of doing that was with the following assumption/guess:

The complexity of a string is in some weird way proportional to the number of unique symbols in the run-length encoding of the string.

Okay, so what do I mean by this?
Let’s pretend that we have the string 10110001101. The run-length encoding of that string would then be (1,1) (1,0) (2,1) (3,0) (2,1) (1,0) (1,1), where (a,b) is to be read as a number of b’s. Then we consider how many unique (a,b) tuples there are. In this case there are 4 of them, namely (1,1) (1,0) (2,1) (3,0). So the string 10110001101 has a complexity of 4 in my measure. One interesting way to view this is that the string 10110001101 can be expressed in a language with an alphabet consisting of only (1,1) (1,0) (2,1) (3,0). This method can obviously be generalized to strings consisting of arbitrarily many symbols.
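As I read it, the measure is: run-length encode the string and count the distinct (length, symbol) pairs. A minimal Java sketch (names are mine, not the original code’s):

```java
// Run-length complexity: RLE the string and count distinct (count, symbol) tuples.
import java.util.HashSet;
import java.util.Set;

public class Complexity {
    public static int measure(String s) {
        Set<String> alphabet = new HashSet<>(); // distinct (count, symbol) tuples
        int i = 0;
        while (i < s.length()) {
            int run = 1;
            while (i + run < s.length() && s.charAt(i + run) == s.charAt(i)) run++;
            alphabet.add(run + "," + s.charAt(i)); // one RLE tuple
            i += run;
        }
        return alphabet.size();
    }
}
```

For the worked example, `Complexity.measure("10110001101")` gives 4.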

I have also made some extensive but utterly failed attempts at a theoretical justification for using this method as a measure of complexity. I have tried several different approaches, but they have all been futile. Maybe I will give it another go after having taken a course in automata theory.

The actual implementation of this method is very straightforward, but for patterns which are constructed of several strings it becomes more difficult, since one has to find a way to combine the complexities of the individual strings into one measure for the whole pattern. How this was to be done was not obvious, but after some extensive guessing I found some methods which worked.

# Application

So maybe we would like to actually try to use this method to find something interesting about some automata. Let’s start with the elementary automaton, which we are all familiar with. Let’s just go out on a limb here and say that we want to compare the patterns generated by the different elementary cellular automata using my measure of complexity, in a vain attempt to identify interesting behavior of the underlying set of rules. The problem we are faced with now is determining how to use the measure in order to compare the different patterns.

## Elementary CA

In all of these examples I will use patterns of size 100*100 cells with an initial random configuration. The initial configuration will not be considered when we do the actual comparison. Each comparison is performed 50 times and then averaged. I will only compare the 88 unique rules.

Don’t care about the actual values of the complexity score; we are only interested in comparisons between different patterns at this time.

Randall Munroe once wrote “I could never love someone who does not label their graphs”. Well… I say “I could never love someone who could not infer information from a context”.

Rule 110 is highlighted in the plots due to it being capable of universal computation and thus embodying the essence of a “cool” pattern.

Even though only the top 20 patterns are shown, each method puts the lame (class one behavior) patterns at the bottom of the list.

### Total Complexity Method

Let’s start with a pretty direct approach where we just add the complexities of all the rows and all the columns of the pattern. We then end up with this list:

legend.
rank : rule : complexity

We can see that it picks out some interesting patterns at the top, but otherwise there are a lot of bland and boring patterns in the top 20. The reason for this is that a pattern where all the rows/columns are very similar but individually have high complexity values will still get a high overall value.

A plot of the sorted values of the complexity measures.
The red dot is rule 110

Looking at a plot of the sorted measures shows us another thing. Even though it might appear nice that the values seem to be distributed in a nice linear fashion, we have no clear distinction between “cool” and “uncool” patterns.

### Maximum Complexity Method

In a vain attempt to avoid the pitfalls of the above method, let’s instead sum the maximum complexity value of the rows and that of the columns. Then we get:

legend.
rank : rule : complexity

We can see a slight improvement in the top patterns, but there are still many boring patterns in the top 20. The same problem as with just taking the total still appears here, so we are not there yet.

### Recursive Method

Well, in a desperate attempt to prevent boring patterns, where the individual rows/columns have great complexity, from making their way to the top, let’s try incorporating the complexity of the list of complexities of each row/column, to penalize patterns that have a repetitive nature. We do this as follows: take the complexity of the string of complexity values and multiply it by the maximum value in the list of complexity values. Did anyone follow that? Probably not. But let’s see what happens.
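My reading of the recursive method, sketched for one direction (say the rows): treat the list of per-row complexity values as a string of symbols, take its complexity, and multiply by the list’s maximum. I assume here that the measure generalizes to integer sequences exactly like the binary-string version.

```java
// Sketch of the recursive method for one direction: the per-row complexity
// values are themselves run-length encoded and scored, then scaled by the max.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RecursiveScore {
    // Run-length complexity generalized to an int sequence.
    public static int measure(List<Integer> xs) {
        Set<String> alphabet = new HashSet<>();
        int i = 0;
        while (i < xs.size()) {
            int run = 1;
            while (i + run < xs.size() && xs.get(i + run).equals(xs.get(i))) run++;
            alphabet.add(run + "," + xs.get(i));
            i += run;
        }
        return alphabet.size();
    }

    // Complexity of the complexity list, times the list's maximum value.
    public static int score(List<Integer> rowComplexities) {
        int max = rowComplexities.stream().mapToInt(Integer::intValue).max().orElse(0);
        return measure(rowComplexities) * max;
    }
}
```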

legend.
rank : rule : complexity

Well, now we start to see some nice results. Let’s just note that it picks rule 110 as the coolest rule. Which is awesome, but let’s be a bit skeptical still, since it could just be a one-time thing, and just consider the fact that it seems to group all of the interesting patterns at the top.

Another nice feature is that the lamer patterns also seem to be somewhat grouped together by coolness.

Plot of the sorted complexity measures.
The red dot shows rule 110

Well, this is nice. With some vivid imagination we can see distinct jumps which seem to correlate with the coolness of the patterns.

### Variance Method

Another approach that can be used to keep repetitive patterns from getting a high score is to make use of the variance of the complexities of the different rows and that of the different columns. This method works in an even weirder way than the previous one, and don’t ask me how I came up with it, since I just guessed. We get the measure M as follows:

$M = (\mathrm{Mean}(horizontal) + \mathrm{Mean}(vertical)) \cdot (\mathrm{Variance}(horizontal) \cdot \mathrm{Variance}(vertical))$
where horizontal and vertical correspond to the lists of horizontal and vertical complexity values.
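The formula translates to Java directly. Note the post does not say whether the population or sample variance is used; the sketch below assumes the population variance.

```java
// Direct sketch of the variance-method score from the formula above.
public class VarianceScore {
    static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    static double variance(double[] xs) {
        double m = mean(xs), s = 0;
        for (double x : xs) s += (x - m) * (x - m);
        return s / xs.length; // population variance (assumed)
    }

    // horizontal and vertical are the lists of row and column complexities.
    public static double score(double[] horizontal, double[] vertical) {
        return (mean(horizontal) + mean(vertical))
             * (variance(horizontal) * variance(vertical));
    }
}
```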

Here we can see the results of using this method:

Legend
rank : rule : score

This method also picks out rule 110 as the top rule, but after that it appears to do a little worse, since it tends to mix patterns of different “coolness”, e.g. rule 13 coming in at place 13 even though it is a very boring rule.

Plot of the sorted complexity measures.
The red dot shows rule 110

We can see in this plot that the second best rule scores only half as much as the best one. One question now becomes whether this is a good or a bad thing. It is very hard to tell in this case, since the dataset is so tiny.

### Look-Back/Second-Order CA

Well, as of now we have seen that this method is not useless, and we have some support for believing that it is not complete and utter crap. But let’s put it to another test! We will now apply both the Variance Method and the Recursive Method to the Look-Back Cellular Automaton.

In these simulations I have used 5362 different rules for each of the methods tested. I have then plotted the sorted values of the measures obtained through the two methods. I have also normalized the values of the measures for a clearer comparison.

The blue line is the distribution of the rules evaluated with the Recursive Method and the red is that of the rules evaluated with the Variance Method. The dashed line is the mean and the dotted lines are the mean ± the standard deviation.

As we can see, the Recursive Method is way better distributed, although on closer inspection we find that both methods have the same percentage of rules scoring above one standard deviation from the mean. From the previous examples with elementary automata we saw that both methods gave similar results for the same rules in most cases. This suggests the Variance Method might be preferred, since it is way faster than the Recursive Method.

So lets see a few examples of some of the better rules generated by each method from the previous test.

Well, here we have a somewhat disappointing selection of some of the rules picked by the Variance Method. As we can see, some really lame rules have jumped up to the top, and if one continues to search through the top ranking rules one finds many, many lame rules. Why this is I have no idea, but it is clear that the Variance Method is not as good, which is a pain since it is way faster than the Recursive Method. The loss of speed in the Recursive Method might just be a problem with my implementation of the method for getting the complexity of strings of an arbitrary number of symbols.

So let’s see whether the Recursive Method can perform any better.

Well, well, now it looks like something didn’t go totally wrong. Even though the #1 pattern might look quite lame, if one says hmm…, scratches one’s complete lack of a beard and looks at it for a moment, one will see that there is actually an underlying structure, so it is not completely pointless. We can also see that all the others except number 500 are really, really nice. All in all, it looks like my method actually manages to find somewhat funny rules in the space of the rules for a second order cellular automaton.

## Details

Don’t trust anything you see here. It’s mostly crap.

Now it would be really nice to somehow normalize our measure, so that we can compare the complexities of two strings of unequal length. Why this is interesting is really not obvious, since one might find it obvious that a longer string can have a greater complexity than a shorter one. It turns out that when we view complexity in this fashion, the relationship between length and maximum complexity is not so obvious.

We must start by finding a way to construct the string having the highest complexity by our measure. This turns out to be very easy actually: we simply follow the pattern 1011001110001111000011111…, which has the RLE (1,1)(1,0)(2,1)(2,0)(3,1)(3,0)(4,1)(4,0)(5,1)… It is quite easy to see how this method generates the string with the highest complexity. So let’s now ask another question: how long is the shortest string that has a complexity of 10 in our measure? The answer is 30. Let’s take a moment to consider why that is.
Let’s just construct that string by taking the 10 shortest possible symbols in an RLE, in this case (1,1)(1,0)(2,1)(2,0)(3,1)(3,0)(4,1)(4,0)(5,1)(5,0), which gives us the string 101100111000111100001111100000, which is 30 characters long. This is not the only string of length 30 with a complexity of 10, but there is no string of length 30 with a complexity greater than 10.
If one writes down some more examples and then stares at them in silent contemplation for a while, one finds that the minimum length of a string having a maximum complexity of C is given by the formula:

$l(C) = \sum_{k=0}^{C} \frac{k - \operatorname{mod}(k+1, m) + 1}{m}$
where m is the number of different letters in the string.

One neat thing to observe is that if we want a maximum complexity of 100 we need a string of length 2550, but to have a maximum complexity of 101 we need a length of 2601. So how do we solve the issue of finding the maximum complexity of a string of a given length? It turns out that there is an answer in closed form if the string only contains two different symbols:

$C(l) = \lfloor{\sqrt{4 l +1}-1} \rfloor$

But for the case of a string consisting of more than two letters, one can analytically find an interval in which C lies for any given l, and then it is just a matter of brute forcing, although finding a function for C given both a length and an arbitrary number of letters seems to be very non-trivial.
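As a sanity check, the two formulas above can be coded up and tested against the worked numbers in the binary case m = 2 (30 for complexity 10, 2550 for 100, 2601 for 101). This is just a verification sketch:

```java
// Numeric check of l(C) (the sum formula) and the closed form C(l) for m = 2.
public class LengthComplexity {
    // Minimum length of a string reaching complexity C, for m symbols.
    public static int minLength(int c, int m) {
        int sum = 0;
        for (int k = 0; k <= c; k++) sum += (k - ((k + 1) % m) + 1) / m;
        return sum;
    }

    // Maximum complexity of a binary string of length l.
    public static int maxComplexity(int l) {
        return (int) Math.floor(Math.sqrt(4.0 * l + 1.0) - 1.0);
    }
}
```

The two agree on the worked examples: `minLength(10, 2)` is 30 and `maxComplexity(30)` is 10.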

One major thing which I have as of yet not been able to solve satisfactorily is the mean complexity of a random string. This is a very important function which would enable some nicer comparisons than one can do without a good measure for it. Some guessing and general button mashing gives us an approximation for large lengths as roughly:

$\langle C_l \rangle \approx \sqrt{3}\, \log_{2}\!\left( \frac{l}{2} \right)$

## Conclusion

Oh my God, I’m so flippin’ tired of writing this post. I told myself that I should stop writing such long posts, and here I sit with 2500 words of pure mediocreness. I doubt that anyone has bothered reading through this entire text. I’m not even sure that I would.

But enough with the self pity. All in all, this project has been a great personal success. I have actually had an idea that kinda works. There is a lot of work that can still be done, but whether I ever get around to that depends on the response I get here.
I have learned a lot about how to structure a larger project, and most of all the importance of proper and continuous testing.

Peace brothers!

This gallery contains 22 photos.

I’m sorry that wordpress rescales images as if they were hamsters. These are some of the interesting rules which I have found for the look-back automaton. All of these rules are nearest neighbor, one step look-back rules. You can find a proper description of the automaton here.

## Introduction

In this post I’ll give a short introduction to a new little cellular automaton I made up. I am fully aware that this is most likely nothing new at all (EDIT: I have been informed that this type of automaton is what is called a second order automaton). But I have had a lot of fun developing the model, implementing it and studying the results. This model, which I call a look-back automaton, is a variation of a simple one dimensional cellular automaton. I have found some of the patterns that I observed to be very fascinating: a lot of them exhibit very intricate structure, in some of them it is easy to identify specific parts corresponding to certain reactions, and I have observed one rule that spontaneously gave rise to the pattern you get from binary addition. But most importantly, some of the rules generate truly beautiful patterns that one could easily sell as art to some really stupid person with no taste at all.

The Java program can be found here

## Overview

This model differs from the traditional one dimensional CA (from here on we will abbreviate cellular automaton as CA) in one key aspect. The next state of a cell in a traditional CA depends only on the cell’s current state and that of its neighbors, but in the look-back automaton the new state of the cell depends on both the current configuration of the neighborhood and the configuration of the neighborhood in the previous step/steps.

Nothing is stated regarding the size of the neighborhood, how many steps into the past one looks, or even whether any previous states are “skipped”, although from here on I will only study the case of a nearest neighbor (r = 1) and a look-back length of one. The number of possible rules is, as one might expect, 2^(2^((2r+1)·l)), where l is the number of time steps the rule sees, so for the case of the most trivial look-back automaton the number of rules is 2^64, which is significantly larger than the 256 types of elementary CA that exist.

As you will see later on in the examples section, most of the rules generated by the look back automaton are similar in behavior to those generated by your standard one dimensional CA, although some of the patterns differ greatly in construction and appearance.

## Implementation

I have written an implementation of this automaton in Java using a very straightforward approach. Since there exists only a finite number of different configurations of the neighborhood, only a finite number of transformation rules is possible. The easiest and most straightforward case is the one where each cell has only two states. The rule can then be represented by a binary number that corresponds to a unique lookup table in a fairly trivial way. This is easily extendable to systems with arbitrary configurations, but the implementation will be somewhat trickier.

I have chosen to let the field that holds the cells be cyclic. The system then just needs to be given a startup configuration for the field, after which one applies the transformation rules to each cell in the last row of the field to obtain the configuration of the next row.
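The two paragraphs above can be sketched in a few lines of Java. This is my own minimal reconstruction, not the actual code from the program: the class and method names are invented, but it follows the same idea, a 64-bit number acting as the lookup table for the 6-bit neighborhood (3 cells in the previous row, 3 in the current row) on a cyclic field.

```java
import java.util.Arrays;

// A minimal sketch of one update step for the two-state, r = 1,
// look-back-1 automaton. Names are my own, not those of the real program.
public class LookBackStep {

    // 'rule' is a 64-bit number: bit k gives the new state for
    // neighborhood configuration k (previous row bits above current row bits).
    static int[] step(long rule, int[] prev, int[] curr) {
        int n = curr.length;
        int[] next = new int[n];
        for (int i = 0; i < n; i++) {
            // cyclic field: indices wrap around at the edges
            int l = (i + n - 1) % n, r = (i + 1) % n;
            int config = (prev[l] << 5) | (prev[i] << 4) | (prev[r] << 3)
                       | (curr[l] << 2) | (curr[i] << 1) | curr[r];
            next[i] = (int) ((rule >>> config) & 1L);
        }
        return next;
    }

    public static void main(String[] args) {
        int[] prev = {0, 0, 0, 0, 0};
        int[] curr = {0, 0, 1, 0, 0};
        // with prev all zero only the low 8 rule bits matter for this input
        int[] next = step(0xFEL, prev, curr);
        System.out.println(Arrays.toString(next)); // [0, 1, 1, 1, 0]
    }
}
```

Iterating this, with each `next` becoming the new `curr`, paints the rows of the picture one after another.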

## Examples

This one is just very aesthetically pleasing and also quite intricate.

This rule shows several different interesting patterns.

This rule spontaneously gives rise to a pattern corresponding to addition.

You can find more examples here.

## Further development

Since there exist 2^64 rules for even the most trivial look back configuration, it would be very nice to have some reliable way to search through them: an algorithm that can somewhat reliably classify the different rules according to awesomeness. I have actually made some progress devising such an algorithm, but I still have some work left to do, and if it turns out to be useful it might warrant a dedicated post.

Ernst-Hugo Järegård being awesome.

I have been spending the entire weekend writing an assembler in Java for my automaton. I felt that I needed to give it a name while I worked on it, so I have decided to call it the Ernst-Hugo Automaton after the Swedish actor and awesome person Ernst-Hugo Järegård. I have used the Automaton package that I wrote earlier with some changes; I had to change the scoping of some of the classes to make them work with the new Assembler package.

## The code

The Assembler class is a very ugly class containing a constructor which acts as a main loop where the basic input parsing occurs. Most of the methods in the Assembler class handle the parsing of the input, but the class also decides what is sent off to be assembled and handles the running of the program. Valid assembly code is sent from the Assembler class to the Program class when the assemble method in Program is called. The Program class represents the program that you are working on: it contains both the assembly code for the program and the assembled program (given that it has been assembled). The Program class also contains the assemble method and the other methods needed for assembling the program. I know I should put this in a separate class… but I didn’t.

I have tried to write good looking code, but I know that the Assembler class in particular is mighty ugly. I have also tried to implement some form of exception handling. There is also somewhat extensive documentation of the classes used. I know that the documentation is rather incoherent and does not really adhere to any specific formatting convention, but I guess it’s better than nothing.

As of now the assembler does not implement that many features apart from some very basic editing of the input. It would be very nice to add things such as the ability to load and save programs. Right now the assembler can’t handle any functions except for one to add Jump lines.

The Automaton class contains some rudimentary unit testing methods. I really only wrote those for educational purposes and they can pretty much be disregarded, but they are included just for fun.

The documentation can be found here.

The JAR containing the actual program and the source code can be found here. To run the program you will need an ANSI compatible command line interface, or the program will look really stupid. It would not be that difficult to refactor it to use normal output without ANSI formatting: just rewrite the AnsiDrawer class. It shouldn’t be that hard.

## Operation

The operation of the assembler is pretty basic. The code is simply entered using the symbols
( 0 , 1 , . , ! , + , x , & , < , > ), which are the same as specified in previous posts although they have been changed to ASCII equivalents. The index (row) where the code will be added is shown before the “:” on each line. The index can be set by the commands, which will be specified in detail later.
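For the curious, the symbol-to-operator mapping can be sketched as a simple lookup table. This is my own reconstruction based only on what the example session later in this post shows; the operators for 0, 1 and ! do not appear there, so I have left them out rather than guess.

```java
import java.util.Map;

// A hedged sketch of the assembler's symbol table; only the symbols
// whose mnemonics are confirmed by the example session are included.
public class Symbols {
    static final Map<Character, String> OPS = Map.of(
        '.', "DROP",
        '<', "CLONE_RIGHT",  // note: '<' assembles to CLONE_RIGHT in the session
        '>', "CLONE_LEFT",
        'x', "XOR",
        '&', "AND",
        '+', "OR"
    );

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (char c : ".<.>.".toCharArray()) sb.append(OPS.get(c)).append(' ');
        System.out.println(sb.toString().trim());
        // DROP CLONE_RIGHT DROP CLONE_LEFT DROP
    }
}
```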

The jump command is implemented in this assembler and is typed as “#jump:pos:target”. Only one Jump will be added on a line, and the rest of the line will be padded with Drop. You can add a Jump command with a target lying outside of the program, but when you assemble the program all Jump operators have to point to a line in the assembly code. Remember that all of the jump targets are absolute, so inserting lines in your program can screw up your Jumps.
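Parsing the “#jump:pos:target” form is a natural fit for a regular expression. A quick sketch of how that could look, with names of my own choosing rather than the ones in the actual assembler:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A hypothetical parser for the "#jump:pos:target" command form.
public class JumpParser {
    static final Pattern JUMP = Pattern.compile("#jump:(\\d+):(\\d+)");

    // Returns {pos, target}, or null if the line is not a jump command.
    static int[] parse(String line) {
        Matcher m = JUMP.matcher(line.trim());
        if (!m.matches()) return null;
        return new int[]{Integer.parseInt(m.group(1)), Integer.parseInt(m.group(2))};
    }

    public static void main(String[] args) {
        int[] j = parse("#jump:2:7");
        System.out.println(j[0] + " -> " + j[1]); // 2 -> 7
    }
}
```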

### The commands

All of the commands are preceded with a “§” symbol and are as follows:

new
Prompts the user for the memory size of the program and then creates a new Program instance. This has to be done before any code can be written. Calling new again will override any existing code or assembled program.

run
Runs the program given that it has been assembled. Prompts the user for a starting configuration of the memory. The program will run for a maximum of 200 steps in order to avoid infinite loops.

print
Prints out the assembly code.

assemble
Assembles the program. Will fail if any Jump target lies outside of the program length. Prints the assembled code when done.

clear
Clears the screen.

help
Prints a help text.

set n
Will set the index to n. The index will then increment from n as lines of code are entered. All lines after n will be pushed down. Using set will screw up your Jumps if you are not careful.

replace n
Replaces line n with the next line of code entered. The index then returns to the end of the assembly code.

reset
Resets the index to the end of the assembly code.

quit
Quits the program.

## Example

Here follows a little usage example. It is the 1-bit adder that I showed in the first post about this automaton. This example should give you a slightly better understanding of how the assembler works.

 0:§new
Enter the memory size of the program:
5
You may now begin coding!
0:.....
1:.<.>.
2:<>..x
3:...<.
4:.<>..
5:.&<>.
6:...&x
7:..>..
8:...+.
9:§print
0: .....
1: .<.>.
2: <>..x
3: ...<.
4: .<>..
5: .&<>.
6: ...&x
7: ..>..
8: ...+.
9:§compile
The command is silly! Use a propper one!
9:§assemble
Compiling: default
0: DROP DROP DROP DROP DROP
1: DROP CLONE_RIGHT DROP CLONE_LEFT DROP
2: CLONE_RIGHT CLONE_LEFT DROP DROP XOR
3: DROP DROP DROP CLONE_RIGHT DROP
4: DROP CLONE_RIGHT CLONE_LEFT DROP DROP
5: DROP AND CLONE_RIGHT CLONE_LEFT DROP
6: DROP DROP DROP AND XOR
7: DROP DROP CLONE_LEFT DROP DROP
8: DROP DROP DROP OR DROP
It would appear as if the program has compiled without any major trouble!
9:§run
00101
00101
0 00101
1 00101
2 01111
3 10110
4 10100
5 11000
6 11000
7 11000
8 11100
G 11110
Done!
9:

## Further development

I have been thinking a bit about writing some form of language for the system, capable of some more complicated operations. I have figured out a way of creating an addressable memory space with the Ernst-Hugo automaton, which would enable more advanced programming using a fixed but arbitrary word length. Creating a new language would require a lot of effort, but it would be really fun to be able to write more complicated programs for the system. That depends on the interest from you folks: if you really want me to write one I would definitely do it, and if any of you would like to do it yourselves you are more than welcome! Just email me if you have any questions about the code or want to start a collaborative project with the Ernst-Hugo Automaton!

I have been spending the weekend rewriting the automaton in Java and watching Beavis and Butt-Head. I must say that MTV was fucking awesome back in the day. Seriously: Nitzer Ebb, The Cult, WALL OF VOODOO! and tons of other great bands, and nowadays the only thing they show is fucking Snooki. Well, back to the geeking.

### The JUMP operator

Apart from the rewrite I have made some minor changes. I have changed the names of the “take left” and “take right” operators to “Clone Left” & “Clone Right”, which makes much more sense as cloning what’s on the left is actually what the operators do. I have also implemented the Jump operator. The Jump operator is very special since it requires a Target to be defined for each Jump. Each row of the maze can only contain one Jump operator, and any operators to the right of the Jump operator will not be evaluated in my Java implementation. The Jump operator will only issue a jump to its specific target if the symbol directly above it is a 1; otherwise the Jump operator is equivalent to the Drop operator. It’s wise to put only one Jump operator on each row of the maze and no other operators.
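The conditional behavior described above boils down to a one-line check. A tiny sketch, under my own assumed representation (a row of cells as an `int[]`, a return value of -1 for fall-through); the real implementation surely differs:

```java
// A minimal sketch of the Jump semantics: fire only if the cell
// directly above the Jump operator holds a 1, else act like Drop.
public class JumpDemo {
    // Returns the target row if the jump fires, or -1 for fall-through.
    static int evalJump(int[] stateAbove, int jumpCol, int target) {
        return stateAbove[jumpCol] == 1 ? target : -1;
    }

    public static void main(String[] args) {
        System.out.println(evalJump(new int[]{0, 1, 0}, 1, 5)); // 5 (jump fires)
        System.out.println(evalJump(new int[]{0, 0, 0}, 1, 5)); // -1 (acts like Drop)
    }
}
```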

The Java implementation is pretty straightforward. You can find the code here; just compile the Automaton.java file. Oh, and you might want to have an ANSI capable terminal.

The next update will be about the system I figured out for creating addressable memory with arbitrary word size, and hopefully some form of assembler for the system. I have to read up on regular expressions in Java.