Chapter 1. An introduction to SonarQube


This chapter covers

  • Why SonarQube
  • Running your first analysis
  • The Seven Axes of Quality
  • Languages SonarQube covers
  • Interface conventions

For as long as software developers have been writing code, we’ve been asking ourselves and our teammates, “Did we do it right?” Until fairly recently, there weren’t a lot of good answers.

Unless you worked for NASA, the answer was “Well, it compiles.” Or, “Um, it seems to work.” And then there’s the perennial favorite: “The users aren’t complaining.”

Sometimes that was enough. Until the users did start complaining. Or until we had to add new features. Which is when we realized just how “not right” we had done it.

More recently, people have tried to answer these questions with automated test suites. But how do you know you’ve written enough tests? What about the things tests can’t cover?

As much as developers have struggled to understand when they’ve “done it right,” their bosses have struggled even more. It’s easy enough to evaluate salesmen (product sold), and lawyers (cases won), and factory workers (whatzits produced with acceptable quality). But how do you evaluate a coder?

In the past, people have been so stuck for an answer that they’ve resorted to the factory worker model. Only instead of whatzits, lines of code were counted. Not even “lines of code with acceptable quality,” just “lines of code.” Because measuring quality was hard.

Now it’s not. Welcome to SonarQube.

1.1. Why SonarQube

1.2. Running your first analysis

1.3. The Seven Axes of Quality

1.4. The languages SonarQube covers

1.5. Interface conventions

1.6. Related plugins

1.7. Summary
