This is the first of a series of posts I’m planning to write, which I call “Being Serious about Software Development”. In this series, I’d like to share my perspective on various aspects of software development.
In this first entry, I’d like to focus on the well-known metric called Lines of Code (LOC), and its relationship to quality.
Lines of Code vs. Quality:
Let’s say you’re very proud of the project you’ve been undertaking for the last few months. What metrics would you expose to your audience to brag about what you’ve been doing? A common metric is lines of code (LOC).
Over the past two years I’ve invested my time in improving a development organization, so I needed to identify the quality metrics that would enable me to measure progress and make good decisions.
I personally never found the LOC metric to be particularly useful; at best, it brags about how fast we can churn out code.
What does it mean? A team was sitting at their keyboards and producing tons of code. Is it good? Is it bad? We can’t really tell.
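To see just how little the metric tells us, here is a minimal sketch of how LOC is typically counted. The function name, file extensions, and the choice to skip blank lines are my own assumptions; real tools make many more (and equally arbitrary) choices about comments, generated code, and so on.

```python
import os

def count_loc(root, exts=(".java", ".py")):
    """Naively count non-blank lines in source files under root.

    Note everything this ignores: comments, generated code,
    copy-paste duplication, and whether any of it actually works.
    """
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total += sum(1 for line in f if line.strip())
    return total
```

A number comes out, but nothing about it distinguishes a well-designed component from a pile of duplicated boilerplate.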
I realized that the LOC metric is too simplistic to be taken seriously. We may have a lot of code in our project, but we have to make sure it’s of good quality: code that is easily maintainable in both the short and the long term.
Obviously, simply measuring the amount of code produced in a project is not enough to attest to its functionality, design, or quality.
For example, it is very important to measure the number of tests covering the production code. We also need to take into account the types of tests that are created and their effect. Obviously, in complex products and projects, unit tests alone cannot guarantee quality. Unit tests, when written well, verify functionality at the component level. But it is also very important to make sure other aspects of the code base are sufficiently covered. I would add integration, concurrency, load, soak, and durability tests as the minimal set beyond unit tests.
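One simple way to start tracking this is to check which of these suite categories have any tests at all. This is a hypothetical sketch; the suite names follow the list above, and `missing_suites` and its input mapping are names I made up for illustration.

```python
# The minimal set of test categories argued for above.
REQUIRED_SUITES = {"unit", "integration", "concurrency",
                   "load", "soak", "durability"}

def missing_suites(test_counts):
    """Given a mapping of suite name -> number of tests,
    report the required suites that have no tests at all."""
    return sorted(s for s in REQUIRED_SUITES
                  if test_counts.get(s, 0) == 0)
```

A report like this won’t tell you the tests are good, but it immediately shows where a category of coverage is missing entirely.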
I know that it takes a lot for an organization to be able to track all these metrics. We all understand that building a picture that reflects all of these aspects is very difficult.
Another requirement that is mandatory for products is to make sure all tests are run equally on all supported platforms: hardware, operating systems, JVM vendors, JRE versions, and so on. Obviously, to achieve this level of quality and coverage, a serious development organization needs to invest heavily in quality control tools, methodology, and discipline.
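Concretely, "run equally on all supported platforms" means the full suite runs on the cross product of every platform dimension. A small sketch of building such a matrix (the dimension values are hypothetical examples, not an actual support list):

```python
import itertools

def build_matrix(oses, jvm_vendors, jre_versions):
    """Enumerate every platform combination the full
    test suite should run on: one entry per (OS, JVM
    vendor, JRE version) triple."""
    return list(itertools.product(oses, jvm_vendors, jre_versions))
```

The point of spelling it out is that the matrix grows multiplicatively: two operating systems, two JVM vendors, and two JRE versions already mean eight full test runs, which is exactly why the tooling investment is unavoidable.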
Now to the last step, which is equally important. A product is never complete until its “real” users provide their feedback on the functionality and the level of maturity. Large enterprises tend to do this through beta programs. This is a very good approach; it enables companies to keep close to the chest, through contracts, any knowledge of bugs that have not been discovered through the internal QA efforts.
Another approach, which we have been taking at GigaSpaces (starting with release 6.5), is to constantly release product milestones and get feedback from all types of sources. Although nasty bugs that were not identified by the team sometimes appear, I can attest that great feedback is coming in and the product is getting better faster. In addition, when a nasty bug is found by an early access user, it is obviously fixed immediately. More importantly, such a bug exposes a blind spot in the dev/QA process; a blind spot that can immediately be identified and fixed. The result: these types of bugs are eliminated from the release process.
Next in this series – Counting Bugs…
Looking forward to your comments,