In these days of tight budgets and overstretched resources, testing the performance of an ecommerce website often falls through the cracks. A common scenario in a web development project is a phase towards the end of the schedule labelled something like “load testing” or “performance testing”. When that timeframe arrives, the team often discovers that the website does not actually perform well, and must scramble to optimize it, possibly affecting other tasks and even the go-live deadline.

Websites, if left alone, do not naturally improve their performance. Effort must be invested not only in optimizing performance, but in keeping it acceptable over time. Here at SiriusWay we believe in iterative development and automation. In this article I will stress the reasons you should consider introducing performance tests early in the development cycle, and automating their execution periodically so that you can react quickly if a performance issue shows up on your website.

Strategies for performance testing

Deciding when to do performance testing
There are several strategies for allocating time in a project for performance testing. I summarize some of them below, based on my experience:

Do it at the end

Plan a single performance optimization task towards the end of the project. This strategy is mostly found in waterfall-style projects, and it is the one that probably makes the most sense for them. The main risk with this approach is that, because it comes at the end of the project, other considerations gain priority and the final result is an under-performing site.

Do it all the time

The team is instructed to always code with performance in mind. The main drawback of this strategy is that it leads to over-engineered code. Premature optimization means losing focus on what is most important: functionality, maintainability, simplicity. It is always better to make a clean design and follow the best practices of your programming language and platform, which are constantly being improved; this will yield a robust site and make your code easier to maintain. Write clean code, test the performance iteratively, and tune only when you have evidence proving that you need to.

Assume it will be done

Assume the team will do it right. This is more common than you might think, and it usually translates to “never do it”. Many projects consider performance testing a hardcore technical task and simply assume that the team will deliver optimized code, in the same sense that the team is assumed to deliver well-written code according to the specs. It turns out that performance depends on many factors other than code. It is fine to make good assumptions about your team, but you cannot forget about the rest of the factors. Just as well-written code must still pass a unit test suite, it must also be checked against performance SLAs.
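
To make that analogy concrete, a performance SLA can be encoded as an automated test that runs alongside your unit suite. Here is a minimal sketch in Python using pytest; the URL and the 2-second threshold are placeholders for your own pages and goals:

```python
# test_performance.py -- a hypothetical performance check; run with: pytest test_performance.py
import time
import urllib.request

SITE_URL = "https://www.example-shop.com/"  # placeholder: your home page
MAX_LOAD_SECONDS = 2.0                      # placeholder: your SLA

def measure_load_time(url):
    """Fetch the full response body and return the elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()  # include transfer time, not just time-to-first-byte
    return time.perf_counter() - start

def test_home_page_meets_sla():
    assert measure_load_time(SITE_URL) <= MAX_LOAD_SECONDS
```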

Assume it will never break

When a project is finished, deployed to production and running with acceptable performance, maintenance projects often do not include performance tuning in their schedules. Instead it is only taken into account when a user or a stakeholder complains, and then everyone must scramble to fix it. It turns out that publishing fixes or new functionality, growing the number of users, and many other factors can start degrading your website's performance. Never fail to check your performance goals periodically.

The SiriusWay proposal: do it iteratively from the beginning, with goals

There is a great advantage in developing a project iteratively, releasing versions often so that end users and stakeholders can give you feedback and you can make sure the website is aligned with their expectations and requirements. Normally those expectations relate to the functionality of your site, but there are other “non-functional requirements” that should not be forgotten, and performance is one of them. By including performance in your frequent release cycle you acknowledge its importance, you can watch how it evolves with your project, and you can raise an alarm when it fails to meet the requirements.

In the same manner as there are functional specs, there must be performance specs or SLAs in place at the beginning of the project. The team needs performance goals in order to decide whether their optimization actions are successful. You can start with some basic goals like a 2-second load time or 400 requests/minute, or “better than my current website”, or “20% better than the average of this list of my competitors' websites”. But at the start of each cycle, you need to assess the results against your goals. You can then evolve those goals and make them more precise as you gain insight.
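
Even a relative goal like “20% better than my competitors” is easy to turn into a concrete number. A minimal sketch, reusing the hypothetical measure_load_time helper from the test above, with placeholder competitor URLs:

```python
# Derive a concrete load-time goal from competitor measurements.
COMPETITORS = [
    "https://competitor-one.example/",
    "https://competitor-two.example/",
    "https://competitor-three.example/",
]

baseline = sum(measure_load_time(url) for url in COMPETITORS) / len(COMPETITORS)
goal = baseline * 0.8  # "20% better than the competitor average"
print(f"competitor average: {baseline:.2f}s -> our goal: {goal:.2f}s")
```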

In our experience the best way to achieve those SLAs is to include a simple methodology in the development cycle. We have been using the Test-Driven Development mantra (red – green – refactor) with great success for years now, and it adapts very well to performance tuning as the following sequence:

  • Measure: Use tools to get performance metrics.
  • Analyze: Use tools to find where and how to improve the metrics.
  • Tune: Change code or configuration. Focus on one change at a time.

Repeat those steps until you meet your SLAs. It is quite possible that a given change delivers no improvement, or even makes things worse; in that case it is better to discard it, especially if it adds complexity.
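
As an illustration of the Analyze step, here is a minimal sketch using Python's built-in cProfile to locate hotspots in a suspect server-side code path; render_product_page is a placeholder for your own code:

```python
# Analyze step: profile a suspect code path to find its hotspots.
import cProfile
import pstats

def render_product_page():
    """Placeholder for the server-side code path under suspicion."""
    ...

profiler = cProfile.Profile()
profiler.enable()
render_product_page()
profiler.disable()

# Show the ten entries with the highest cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```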

There are many tools on the market, commercial and open source, free or paid, that will provide you different levels of detail about performance-related metrics: response time, request rate, CPU and memory usage, slow queries, content distribution over time, type or size, image analysis, load generation, code analysis, and many more. See our follow-up posts on performance tools and metrics for details on the range of tools available and the most useful quantities to gauge on your site.
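
To give a taste of what a load generation tool does, here is a toy sketch that fires concurrent requests and reports sustained throughput, which you could compare against a requests-per-minute goal. Real tools give you far more control; the URL and the numbers are placeholders:

```python
# A toy load generator: fire TOTAL_REQUESTS requests with CONCURRENCY workers.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://www.example-shop.com/"  # placeholder
TOTAL_REQUESTS = 200
CONCURRENCY = 10

def fetch(_):
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    list(pool.map(fetch, range(TOTAL_REQUESTS)))  # drain the iterator
elapsed = time.perf_counter() - start

print(f"{TOTAL_REQUESTS / elapsed * 60:.0f} requests/minute sustained")
```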

The idea behind this simple methodology is to take advantage of these tools to find the hotspots in your site, whether they are in the code, configuration, design, architecture, topology, or anywhere else, so that you can efficiently tune your site until you meet your SLAs. If a project allocates time for some Measure/Analyze/Tune at each iteration during development, and periodically during maintenance, performance will rarely fail to meet the SLAs, and user satisfaction will translate into more sales.
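
To keep checking periodically during maintenance, the same measurements can be scheduled. A minimal sketch of a hypothetical check_slas.py that cron or a CI server could run, with placeholder pages and goals:

```python
# check_slas.py -- a hypothetical scheduled check; run it from cron, e.g.:
#   0 6 * * * /usr/bin/python3 /opt/perf/check_slas.py
import sys
import time
import urllib.request

SLAS = {  # placeholder pages and goals in seconds
    "https://www.example-shop.com/": 2.0,
    "https://www.example-shop.com/checkout": 3.0,
}

def measure(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

failures = [url for url, goal in SLAS.items() if measure(url) > goal]
if failures:
    print("SLA missed:", ", ".join(failures))
    sys.exit(1)  # a non-zero exit lets cron or your CI server raise the alarm
```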

Conclusion

We have summarized different approaches to performance tuning that we have seen in practice on the software development projects we have been involved in. For an ecommerce site, performance is critical to increasing revenue and attracting new customers. We have come up with a simple methodology that you can apply to achieve those goals, based on repeating a simple three-step scheme: Measure, Analyze, Tune.

The range of skills needed to do this successfully is very broad; it transcends plain web development and design, reaching fields like systems architecture and scientific analysis. SiriusWay has those skills and offers affordable services to help you improve the performance of your ecommerce site. Contact us for more details.

Do you have any tips or success stories about performance tuning methodology? You are welcome to share them in the comments.

Did you like our Measure / Analyze / Tune approach? Share it using the buttons below.

If you would like to be notified when new content about performance appears on our blog, you can subscribe to the SiriusWay RSS feed.