My name is Michael Burling and I'm an undergraduate at the University of Minnesota in the Linguistics Department. I hope to be graduating this semester with a B.A. in Linguistics and a minor in Computer Science. My partner for the long-term project in 3081W is Patrick Farrell, an undergraduate in the Computer Science department with some experience with software development in the IT industry.
For this iteration, we found the beginning particularly challenging, but as soon as we learned to stop blaming the professor and TAs for the difficulties in interpreting the requirements for the scan method, we found our way into creating some decent code. We spent quite a bit of time poring over the notes for the requirements in order to better predict some of the pitfalls we might encounter. We put off writing any substantial code until we felt that we fully understood the requirements as they stood. We sought the hints and clues within the formal requirements, but most importantly, when we didn't understand something in front of us, we asked questions.
It was an interesting departure from the pervasive hand-holding that I've become accustomed to in many of the computer science courses I've taken thus far in my limited experience. Yes, the requirements were provided, and the labs leading up to iteration one were intended to prepare us for it, but building something that seemed so simple from a description of what it was supposed to do, rather than how it was supposed to do it, was tough. We actually found some benefit in sticking to some of the design and planning principles discussed in the McConnell text.
Planning and Design
From the beginning of our 3081 lectures, the notion of test-driven development has been pounded into our heads. But testing can be really tricky. When thinking about it, it's somewhat of a chicken-and-egg scenario -- just what does come first? While the tests are to be written so as to quickly discern the accuracy of our implementation, our implementation is to be written in order to satisfy the requirements that our test cases model. Without fully understanding how the various functions to be tested will work, I don't think anyone can just start writing test cases. I don't believe either can come first, exactly. Instead, I believe the use of high-level pseudocode, even while design questions remain open, is probably the very first step in implementation and testing. We can write things like "do stuff" for portions of the functions that we're not sure of, so we can at least determine the kind of data that a function is to take and the intended results of the function. Determining input/output is crucial. And this is precisely what we did. On a whiteboard, we determined the inputs and outputs of all of the major supporting functions, with many remaining hollow "do stuff" functions until the tests were explored.
Pair Programming and Implementation
After the initial design phase, we started thinking about the implementation and how we might divide the labor in our code. It's somewhat of a sticking point of this class, in that the instructors would like to see an equitable arrangement, which is really difficult. Patrick and I are busy people, after all, and it's hard to get together and put precisely the same amount of work in on any given day of the week. I think we decided to just do whatever we could without much regard for who was doing what. It just so happened that I believe we put an equal share of work in without formally assigning any particular duties.
For instance, Patrick created the Makefile entirely, but being such a short task, that isn't particularly hard to believe. At first, I was wondering how we would get around something like this... as if I needed to come up with some sort of Makefile-related task to make it seem as though I was involved in writing it. In the interest of making the assignment more realistic, we focused on the actual problems, rather than engineer problems for ourselves to solve in order to have an absolutely fair share of the work. It's just not possible.
As the iteration continued, we found ourselves working in two different aspects: testing and implementation. I don't remember what the exact division was, but let's say it was 25%/75%, where one partner worked on 25% of the testing and 75% of the implementation, while the other worked on 75% of the testing and 25% of the implementation.
In Defense of Our Division of Labor
Now, common wisdom suggests that dividing along the lines of implementation and testing is a generally bad choice. As Professor Van Wyk has noted, whoever implements a particular piece of code should be responsible for its functionality, including the testing. However, the McConnell text extols the virtues of different types of testing, particularly black-box and white-box testing. Because our team was limited to two members, myself and Patrick, we didn't necessarily have the resources to commit to the various kinds of testing. So in a way, without the division as it was, we ended up covering both black-box and white-box testing. The person largely responsible for the testing, with less emphasis on the implementation, could effectively write tests without specific domain knowledge of how a function worked -- just what the function should return -- while the other person could focus on the implementation and respond to how well or how poorly tests ran, with full knowledge of the implementation. I think it worked well, but I can understand how this might be viewed as a lazy sort of practice, wherein the stronger programmer would strictly write the implementation while the weaker one would only write tests. Without even thinking about such an arrangement, it turns out the more experienced programmer, Patrick, ended up writing the majority of the test cases, while I, the weaker of the two by far, wrote the majority of the implementation. In my mind: perfect.
Especially toward the end of writing this thing, the majority of the implementation was written together. That is, Patrick sat beside me at his workstation, and I sat in front of my own. By virtue of the fact that we were sitting right beside one another, it sort of cheapened the whole SVN experience. Further, because we were effectively working on two different pieces of code, there weren't really any discrepancies that needed to be resolved. So in the end, I don't think we were able to use SVN to its fullest capacity. I was aware of every time Patrick was getting ready to commit, and he was aware of every one of my commits. Almost entirely without exception, we were able to update mere seconds after the other committed, and we both tested each other's work as soon as our working copies were up to date.
To that end, SVN worked really well. Rather than sending each other files back and forth to see what was going on, we were able to have the centralized version control handle all of that for us. But used this way, it seems like only a marginally more convenient way to handle things, not really taking full advantage of SVN's capabilities for two people working on the same code, committing and updating and collaborating in that way. We continued in this fashion until the implementation was complete. It was quick and efficient, just not very stylish, I suppose.
In the end, I think we did a fine job. We had something on the order of 124 test cases, all of which passed. We spent some time after implementing the various functions and tests to add good commenting and consistent formatting for readability. In the future, I hope we can take full advantage of what SVN has to offer, and reexamine our division of labor -- maybe we ought to split it along different lines next time.