Iteration 1 Experience


Making the scanner for iteration 1 was the first step in building our language translator, and the process went smoothly in some parts and was painful in others.

First, let me tell you what worked out well for my partner and me. Creating the tokens was quite straightforward, since we basically had to copy the regex-matching function and check the number of matched characters for each token type. Writing the test cases also went smoothly: for both the helper functions used by the scan function and the scan function itself, we knew what results a given input should produce, so we could easily test for the expected output using the cxxtest framework.

Our "pain" started when we began testing for the correct list of tokens. The first cause of needless head-banging was a problem with strings. When we made a token, we had the matched string and its token type ready to be put into a Token object. The token held the correct string and the correct type, yet for a long time we could not understand why the tests that were given to us kept failing with output like "abc != abc". This made no sense to me, and after about an hour of having a staring contest with the computer monitor, a friend suggested using a string type instead of a char* for the lexeme. I tried it, and no more "abc != abc": the comparison had been checking char* pointers rather than the characters they point to. That is when I realized I should pay more attention to what the tests were actually checking in the testing framework.

The second "pain", which almost made me want to jump off a bridge, was a problem with the list's add function. We wrote an add function for appending tokens to a list, initially with an iterative approach that walked from the first element to the last, so that the returned list could be a plain Token* rather than some other structure of our own making. When we tested the returned list of tokens, segmentation faults kept occurring, and after some probing we realized the list was shorter than it should have been, so when the testing framework looked further, the pointer to the next element was null. We fixed this by rewriting the add function recursively.

In both of the major problems we encountered in the first iteration, one thing that would have sped up the fixing process was testing each function immediately after it was written. When we ran into these bugs, we tried to pinpoint exactly where they occurred, but we could not do so easily because other functions and code already depended on the broken ones, which forced us to check everything that had been added. That is why in the next iteration I plan to verify that every function I write is valid right after I finish writing it.

Besides the code problems, there was also the challenge of getting used to the new coding style. Most of the time I forgot to follow it, and my partner, who was more used to it, kept spotting my slips. So I decided to check for style problems after I was done typing, and that seemed to work for me, so I think I will continue doing that in the next iterations.

Working with a partner was not exactly a spiritual experience, but it was definitely better than working alone. It worked for me because my partner would spot errors in the code that I could not, and because working as a team significantly reduced the total work time. Although we did occasionally quarrel about coding style or what constitutes a cool name for a variable, working with a partner was overall a good experience.
