The first challenge for the translator was to create a scanner. To do this I wrote up the scan method on paper, calling several helper functions that my partner and I had not yet written. These functions were:
1. to build the regular expressions
2. to recognize the regular expressions
3. to eat whitespace
4. to remove the first x characters of a string and return them as a string
5. to append a node to the end of a list, given the list's head
Then we wrote and tested each function, and once these were done, I wrote and tested the scan method.
I wrote functions 1-3. This was not especially hard because I reused most of the code from lab 3. I stored the regular expressions in global variables. I know globals are poor practice, but I didn't see another way to avoid passing 50+ variables between functions. Another problem was distinguishing keywords from identifiers. At first I tried matching word boundaries, but the tests didn't pass and I never found out why. I ended up sidestepping the issue and relying on longest match: "mainbar" matches more characters as an identifier than "main" does as a keyword, so the identifier wins.
To test, I used the testing structure that the course website provided. Every time I implemented a small feature, I wrote a test for it. I added checks at three points: the first lexeme, a few lexemes in, and the state after all lexemes were consumed.
When the time came to integrate the functions, some of the tests for scan did not pass. I opened the program in gdb and stepped through until something strange happened. It turned out that my whitespace function was wrong and I hadn't tested it thoroughly enough.
Working with a partner was slightly challenging because he is new to programming and needed some assistance building his functions.
Overall there were no major snags, and we plan to use a similar iterative design process for Iteration 2.