Spring 2012, Computer Science 3081W Iteration 1 Blog Entry
Star Date: 65650.9 (http://www.trekguide.com/Stardates.htm#Today)
Gregorian Date: 3/8/2012 5PM
Author: Roger Smith, 1276103, Team 34
Audience: CSCI3081W Students
Professor Eric Van Wyk chose the development of a language translator as the class project. Completing the translator will exercise all of the software development habits presented in the class material. As with any software project, many tools, techniques, and skill sets must come together with due diligence to meet the project goals and timelines. But just as a carpenter builds a frame level by level, we will build the language translator iteration by iteration.
This blog entry presents the challenges and successes of completing the first of four project iterations. The first iteration resulted in the development of the Scanner portion of the project. The purpose of the Scanner is to read the input file, recognize each word within the input file as a piece of a language instruction, and then organize those pieces into data classes for later use by the Parser, which is scheduled for development in Iteration 2.
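To make the Scanner's role concrete, here is a minimal sketch of the idea in C++. The names Token, TokenType, classify, and scanWords are hypothetical illustrations for this blog only, not the actual project classes:

```cpp
#include <cctype>
#include <string>
#include <vector>

// Hypothetical sketch of a scanner's output: each recognized word in the
// input becomes a Token that the Parser (Iteration 2) can consume later.
enum class TokenType { Keyword, Identifier, Number, Unknown };

struct Token {
    TokenType type;
    std::string lexeme;  // the exact text matched in the input
};

// Classify a single word; a real scanner would use regular expressions
// rather than this toy first-character test.
TokenType classify(const std::string& word) {
    if (word == "if" || word == "while") return TokenType::Keyword;
    if (!word.empty() && std::isdigit(static_cast<unsigned char>(word[0])))
        return TokenType::Number;
    if (!word.empty() && std::isalpha(static_cast<unsigned char>(word[0])))
        return TokenType::Identifier;
    return TokenType::Unknown;
}

// Organize the recognized words into data objects for later use.
std::vector<Token> scanWords(const std::vector<std::string>& words) {
    std::vector<Token> tokens;
    for (const auto& w : words) tokens.push_back({classify(w), w});
    return tokens;
}
```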
2) Development Journey
This project is really about the journey, not the destination. This section walks through the techniques and tools applied during our journey to completing Iteration 1.
a) SVN Teamwork
The language translator will include many different types of files. These files can include C++ source code files, compiled files, compile instruction files, and documentation files, just to name a few. In order to retain and properly exchange the latest copies of these files among team members, teaching assistants and our instructor, we must use a reliable file repository system. Our version control system of choice is Subversion.
Installation and first-use instructions for Subversion can be found within the Lab 1 instructions located at url:
Team programming instructions and exercises are communicated in the Lab 4 instructions, located at url:
Many students claimed to have trouble committing to the Subversion repository. Our experience with Subversion was excellent; we always received the expected results with every command. Here's hoping that you had the same positive experience.
b) Scanner Beginnings
Our first foray into producing the Scanner was slow going. Our instructions were to take the provided Iteration 1 code and add a scanner.cpp file with a respective scanner.h file, all resulting in a compiled and executable application. This in itself turned out to be a simple matter, but from the outset, we felt the instructions were a bit vague. Nevertheless, with the assistance readily available from the instructor, the teaching assistants, and Piazza, we plowed forward with our best attempts at producing the expected results.
The trials that we experienced boiled down to two challenges. First, neither of us has C++ skills to brag about; although we have both successfully completed nearly all of the computer science classes required for graduation, we didn't understand the subtle nuances necessary to create the class and its respective header code. Second, we found that the compiler errors tended to be misleading or too cryptic to interpret.
Yet, in the end, with the aid and guidance of the many sources of assistance, we finally succeeded in producing a functional skeleton of the Scanner requirement. As we discovered, the minute detail that tripped us up was that we were missing the curly brackets on the class constructor definition.
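A minimal sketch of the mistake and its fix, using a hypothetical class layout (this is not our actual project code):

```cpp
#include <string>

// scanner.h (sketch): the constructor is only declared here.
class Scanner {
public:
    Scanner();            // declaration -- no body in the header
private:
    std::string source;
};

// scanner.cpp (sketch): the definition needs a body. Writing
//   Scanner::Scanner();    // wrong: this is a declaration, not a definition
// was the kind of mistake that tripped us up; the empty curly
// brackets below are what make it a definition:
Scanner::Scanner() {}
```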
c) Testing Principles
The idea of automating application testing was not new to me, but planning and integrating automated, reusable test functions within a project was a new exercise for me personally. To put that in terms of experience: I have worked on dozens of projects over the years, at more than two dozen companies spanning a dozen industries, and only one of those projects included automated testing. Although I wasn't involved in that project's effort to produce and exercise tests, the project reaped the benefits daily, since we published a daily revision of the entire application. That published version was not released until all of the tests were successful. The team called this "smoke testing," meaning the application had passed all of the automated regression tests. The end result was that a functionally reliable build could be released daily within a few minutes of completing the compile. That one experience was enough to convince me that automated testing is hands down the most important factor in consistently producing a functionally accurate and reliable product.
For more reading on how to integrate automated testing, I refer you back to the pertinent slides found here...
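To illustrate the smoke-testing idea in miniature, here is a hypothetical assert-style test gate in C++. The test names and the runner structure are my own invention for this sketch, not the class testing framework:

```cpp
#include <iostream>
#include <string>

// Sketch: every automated test must pass before a build is published.
// runAllTests reports each result and returns false if any test failed,
// so a release script can be gated on its return value.
bool testAddition() { return 2 + 2 == 4; }
bool testStringLength() { return std::string("scan").size() == 4; }

bool runAllTests() {
    struct NamedTest { const char* name; bool (*fn)(); };
    const NamedTest tests[] = {
        {"testAddition", testAddition},
        {"testStringLength", testStringLength},
    };
    bool allPassed = true;
    for (const auto& t : tests) {
        bool ok = t.fn();
        std::cout << t.name << (ok ? ": pass" : ": FAIL") << "\n";
        allPassed = allPassed && ok;
    }
    return allPassed;  // publish the build only when this is true
}
```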
d) Tagging Our Work
Our next challenge was to produce a bash script file to validate that the work committed to the Subversion repository would actually compile when graded. Again, the instructions seemed simple enough, except that near the end of lab, we discovered that those weren't the instructions at all.
Our misinterpretation of the instructions was discovered after we completed a script that would fetch the latest revision from the source control repository into a new folder location. After the script fetched the latest code, it called the Makefile (discussed in the next section) to compile the application, and finally it executed the application. This all worked great, so we thought we were making terrific progress. But then we needed to determine how to add tagging commands to our "tagit" script file. That was when the TA explained that the tagit file had nothing to do with validating that the application had been successfully committed. It was then that we quickly understood why clowns are always on the verge of laughter: they must understand all too well how students can take simple instructions in so many different directions.
Once we understood the intent of the tagit script file, it all came together quite readily for us.
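For the curious, the fetch-and-build validation script we had (mistakenly) produced looked roughly like the sketch below. The repository URL and the executable name are placeholders, not the actual course values:

```shell
#!/bin/bash
# Sketch of a checkout-and-build validation script. REPO_URL and the
# "translator" executable name are hypothetical placeholders.
REPO_URL="$1"            # e.g. https://example.edu/svn/team34/trunk
WORK_DIR=$(mktemp -d)    # fresh directory: proves a clean checkout builds

if [ -z "$REPO_URL" ]; then
    echo "usage: tagit-check.sh <repository-url>"
else
    # Fetch the latest committed revision, compile it, then run it.
    svn checkout "$REPO_URL" "$WORK_DIR/checkout" \
        && cd "$WORK_DIR/checkout" \
        && make \
        && ./translator
fi
```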
e) Make File
By now, we were getting pretty excited about reaching the end of Iteration 1 development. The last thing we needed to do was amend the Makefile to compile the i1_assessment_tests files in place of the scanner_tests files. Here again, we found that we both had limitations in our understanding of Makefile instructions.
Our first attempt was to copy the two scanner_tests instruction sets as two new i1_assessment_tests instruction sets. But this simple effort did not work for us. Close scrutiny of the Makefile did yield additional commands that needed replicating, yet no matter what adjustments we made, our efforts still failed to yield a successful compile. But have no fear, Dan the TA is here. Dan noticed the little nuance we had forgotten and aptly added it to the Makefile instructions, and one could almost hear a little Makefile crowd cheering when make run-tests succeeded.
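As a rough sketch of the kind of duplication involved (the target names follow the lab, but the commands, dependencies, and object files here are illustrative, not our actual Makefile):

```make
# Sketch only: the new target needs its own link rule, its own
# object-file rule, and a matching entry under run-tests -- missing
# any one of these is the sort of nuance that broke our build.
i1_assessment_tests: i1_assessment_tests.o scanner.o
	g++ -o i1_assessment_tests i1_assessment_tests.o scanner.o

i1_assessment_tests.o: i1_assessment_tests.cpp scanner.h
	g++ -c i1_assessment_tests.cpp

run-tests: i1_assessment_tests
	./i1_assessment_tests
```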
3) Iteration Assessment
We weren't wholly successful in completing Iteration 1 on time. We believe we placed enough concentration on the rubric factors assessed by humans, with the understanding that the automated validations could be completed and graded in later iterations. Still, we enjoyed enough success to be comfortable with the level of work completed.
Our challenges ended up being mostly due to a lack of experience in developing a C++ application. We were first challenged by the Scanner class definition. Then we found the Makefile changes also required a bit of learning. And let us not forget the misinterpretation of the tagit instructions.
I believe our greatest success was in the area of teamwork. We were both challenged, yet supported each other, and in the end each contributed a vital piece to the final puzzle. And we felt comfortable with our ability to successfully complete Iteration 1, as each of us showed great commitment to meeting and overcoming each challenge.
For our next iteration, I believe we will continue to carry out development without change. We found that we work together very well. Without much discussion at all, each task required of us was divided almost evenly, as if orchestrated. Each of us contributed equally to any discussions needed to resolve challenges, and we both acknowledged simultaneously when assistance was needed. All in all, the challenges we faced were not nearly as daunting as they could have been had we been flying solo.
I firmly believe that we already have an advantage toward success in future iterations, due to the much-needed basic experience that we both gained in C++ development and Makefile instructions. One thing I am hoping for, which would easily aid us in meeting the upcoming challenges, is better and more easily understood written explanations of each upcoming lab assignment. In addition, we both found that having the basic coded structure of the scanner thrown at us without any introduction was a bit daunting. Our recommendation for future classes is to include a brief walkthrough of each provided file of the scanner. That would have given us a much-needed edge at the beginning of the scanner development.
4) Closing Comments
I was going to add a hasty, and hopefully funny, summary of the additional outside forces that I personally experienced as huge detractors from completing Iteration 1 in its entirety. Those forces included being a significant contributor to two highly visible projects at work, preparing my house for market, and the less-than-enjoyable experience of an impacted and infected tooth. But all of that would only be shared to indicate that challenges will arise at the worst possible time, so we can do nothing but lower our shoulders and plow ahead.
Best wishes for a successful and beneficial semester.