By Asif Lala
WebSphere Commerce performance tuning and testing is an iterative process that involves tuning the various components of a WebSphere Commerce environment. At the core level, these components are the Web Server, Application Server, and Database Server.
For any performance tuning and load testing project, it is always recommended to start with the baseline values defined by IBM and followed by most WebSphere Commerce customers.
Once the baselines are defined, monitor the performance metrics to see whether any of the values need tuning or modification based on the performance benchmark you want to achieve. Most of the time, tuning of these baseline values depends on the target numbers you set for the parameters below while performing load testing. However, every WebSphere Commerce environment varies, so baseline values need to be tuned accordingly.
JDBC Connection Pool
Solr Search Caching
Solr Search Tuning
Time to First Byte. E.g. 0.4 seconds.
Web Page Load Time. E.g. between 3 to 4 seconds.
Web Server Response Time. E.g. under 1 second.
Average throughput, i.e. number of requests per minute. E.g. 10k requests per minute.
Number of Page Views.
Time To First Byte
TTFB is defined as the time from when the browser issues an HTTP request (e.g. to www.google.com) until it receives the first byte of the response from the web server.
On average, if your WebSphere Commerce site's home page TTFB is in the range of 200 to 400 milliseconds, it is considered to be in the best range. Anything higher than this means some work is needed to find what is causing the delay.
From the end-user perspective, TTFB is important because it is what makes a user feel that the page has started loading. With a high TTFB, users sometimes think the page has become unresponsive and resubmit the same URL, which puts more load on the server and consumes more server resources.
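As a rough illustration, TTFB can be measured with nothing but Python's standard library. The helper below is a simplified sketch, not a production tool: it uses plain HTTP (a real site adds TLS handshake time), and the host, port, and path are placeholders.

```python
import time
from http.client import HTTPConnection

def measure_ttfb(host, port=80, path="/"):
    """Seconds from sending an HTTP GET until the first response
    data arrives (plain HTTP; real measurements include DNS/TLS)."""
    conn = HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    response = conn.getresponse()   # blocks until status line + headers arrive
    response.read(1)                # ensure at least one body byte was received
    ttfb = time.perf_counter() - start
    conn.close()
    return ttfb

# Hypothetical usage against your storefront's home page:
# print(f"TTFB: {measure_ttfb('www.example.com'):.3f} s")
```

In practice browser developer tools or a monitoring service report TTFB directly; a script like this is mainly useful for quick, repeated checks from a fixed network location.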
Web Server Response Time
Web server response time is the amount of time it takes for a web server to respond to a request from a browser. Response time can be recorded, in seconds or microseconds, in the access log file configured by either the TransferLog or CustomLog directive in the IBM HTTP Server configuration file (httpd.conf), as long as the LogFormat directive is changed to include either the %T (seconds) or %D (microseconds) format parameter.
For example:
LogFormat "%h %l %u %t TIME: %T \"%r\" %>s %b" common
LogFormat "%h %l %u %t TIME: %D \"%r\" %>s %b" common
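To turn those log entries into a metric, a short script can average the recorded times. This is only a sketch: it assumes the %D (microseconds) variant of the custom format above, and the sample log lines are made up for illustration.

```python
import re

# Matches the "TIME: <value>" field written by the custom LogFormat above
# (assumes the %D variant, which logs microseconds).
TIME_RE = re.compile(r"TIME: (\d+)")

def average_response_ms(log_lines):
    """Average web-server response time in milliseconds across
    access-log lines that carry a TIME: field."""
    times = [int(m.group(1)) for line in log_lines
             if (m := TIME_RE.search(line))]
    return sum(times) / len(times) / 1000 if times else 0.0

# Illustrative sample lines (not real traffic):
sample = [
    '10.0.0.1 - - [01/Jan/2024:00:00:00 +0000] TIME: 250000 "GET / HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2024:00:00:01 +0000] TIME: 750000 "GET /shop HTTP/1.1" 200 2048',
]
# average_response_ms(sample) -> 500.0 (milliseconds)
```

Averaging (250000 + 750000) / 2 microseconds gives 500 ms, which is within the sub-1-second target mentioned above.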
Throughput
Throughput is the number of transactions processed over time during a load test. It is also expressed as the amount of traffic a website or application can handle. Before starting a performance test, it is quite common to have a throughput target: the number of requests per hour, minute, or second the application should be able to handle.
Tools used to measure throughput during a load test include New Relic, JMeter, and others.
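The arithmetic behind a throughput target is simple enough to sanity-check by hand. The helper below is an illustrative sketch (not part of any tool) that converts a request count and test window into requests per minute:

```python
def throughput_per_minute(requests_completed, duration_seconds):
    """Average throughput expressed as requests per minute."""
    return requests_completed * 60 / duration_seconds

# e.g. 30,000 requests completed over a 3-minute (180 s) window
# meets the 10k requests-per-minute target mentioned earlier:
# throughput_per_minute(30_000, 180) -> 10000.0
```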
Page Views Per User
This represents the number of page views per user per minute; in other words, the number of pages one user navigates during a single load-test cycle.
Ramp-Up Time
Ramp-up time controls how quickly virtual users join the test. For example, if you are running a load test of 50 users and you set a ramp-up interval of 5 seconds, then every 5 seconds one user from the pool of 50 logs on to the site.
Think Time
Think time is the pause a user takes while navigating between pages on the site. For example, if your load test comprises 12 page views per user and you set a think time of 5 seconds, one user views all the pages in 1 minute (12 × 5 = 60 seconds).
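Combining the definitions above, the expected pacing of a test can be sketched in a few lines. This is a deliberately simplified model: server response time is ignored, and the per-user ramp-up interpretation from the example above is assumed.

```python
def rampup_start_times(users, seconds_per_user):
    """Start offsets (s) for each virtual user when one new user
    joins every `seconds_per_user` seconds."""
    return [i * seconds_per_user for i in range(users)]

def cycle_seconds(page_views_per_user, think_time):
    """Duration of one load-test cycle: each page view is followed
    by a think-time pause (server time ignored for simplicity)."""
    return page_views_per_user * think_time

def steady_pages_per_minute(users, page_views_per_user, think_time):
    """Approximate page requests per minute once all users are active."""
    cycle = cycle_seconds(page_views_per_user, think_time)
    return users * page_views_per_user * 60 / cycle

# 50 users, 12 page views each, 5 s think time:
# cycle_seconds(12, 5)               -> 60   (the 1-minute cycle above)
# rampup_start_times(50, 5)[-1]      -> 245  (last user joins ~4 min in)
# steady_pages_per_minute(50, 12, 5) -> 600.0 pages per minute
```

Back-of-the-envelope numbers like these are worth computing before a run, so an observed throughput far below the model points to a bottleneck rather than a scripting mistake.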
When it comes to performance testing any application, the usual norm is that the objectives are discovered during the course of execution, and much of the emphasis falls on the tool the organization chooses for the activity.
This is what we experienced during a performance testing engagement for one of our clients in the retail industry. As Royal Cyber is a premium IBM partner with deep expertise in the IBM Rational Test Workbench suite, IBM Rational Performance Tester was the default choice for this activity, and it served us well.
Initially, the activity started with no clear objective other than maintaining the average response time for 100, 500, and 1,000 users. The application was a retail store built on IBM Commerce, with its backend running entirely on Commerce on Cloud. Maintaining an average response time was a herculean task: all the images were downloaded from third-party services with caching enabled, which was bound to increase response time, especially on the home page, where all the banners, categories, and new offers were displayed, as well as on the product display page.
Another challenge was achieving the target throughput on the New Relic monitoring server, based on the calculations done for user interactions with each web service. Because our IBM RPT server resides in-house on our local network, which hits bandwidth limits beyond a certain number of virtual users, we were not able to reach the required throughput calculated from the number of users and their direct interactions with the application.
However, we managed to identify and optimize the web services and jQuery calls used on the home page and the product display page to reach a viable average response time.
Moving to cloud-based solutions like BlazeMeter helped us achieve the required throughput in New Relic and allowed us to generate load from dedicated geographic locations, which let us simulate more realistic scenarios.
Objective (To maintain a good response time or benchmark the application).
Calculation of ramp-up time and think time (the behavior of virtual users is directly governed by the think time and ramp-up time; only realistic calculations will get you the desired results).
Realistic scenarios (not all users should be doing one activity; schedules should include test cases based on multiple real-world scenarios, such as browsing products, checking out, or continuously adding products to the cart).
Server monitoring during the execution.
Dedicated performance team, including a server-side admin (a WAS specialist in our case), a senior performance tester, and a developer (to read logs and interpret exceptions).
Check out our case study here.
Ready to get started with Performance Tuning and Testing? Look no further. Royal Cyber has a strong domain exposure and profound experience in retail performance tuning and testing. We have the industry knowledge and technology competencies to deliver premium service in retail software tuning and testing. For more information email us at firstname.lastname@example.org or visit www.royalcyber.com.