Before you deploy an application built over Web NMS, it has to be tested thoroughly in the lab by simulating the deployment scenario. The main objective of this testing is to ensure that the package works well once it is deployed. Hence, the system's performance has to be measured under a load comparable to the one it will need to support in deployment. This document looks into the various aspects of measuring the performance of the application in the deployment scenario.
Objectives of Deployment Testing
The objectives of testing the deployment scenario are as follows:
To measure the scalability and stability of the system.
To measure the performance rates, such as discovery rate, trap processing rate, and data collection rate.
To proactively identify issues that could occur at the deployment site.
Measuring the Server's Performance
The server performance of an application depends on the number of network notifications it can handle, the amount of data it can collect, the rate at which devices in the network are discovered, and so on. Following are some of the ways of measuring the server's performance:
Measuring Data Collection Rates
To measure the data collection rates, create PolledData objects and store them in the database using a standalone program. Simulate the agent and perform get operations on the agent. Write the collected data into another database table and measure the time taken for the data collection.
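The timing part of this measurement can be sketched as below. This is a minimal illustration, not Web NMS API code: the in-memory OID table stands in for the simulated agent, and the get method stands in for the real SNMP get operation.

```java
// Sketch: time a batch of simulated get operations and compute the
// data collection rate. The "agent" here is a hypothetical in-memory
// OID -> value table standing in for a simulated SNMP agent.
import java.util.HashMap;
import java.util.Map;

public class PollRate {

    // Simulated agent: a simple OID-to-value table.
    static final Map<String, Long> agent = new HashMap<>();

    // Stand-in for an SNMP get operation against the simulated agent.
    static long get(String oid) {
        return agent.getOrDefault(oid, 0L);
    }

    // Perform 'polls' collections and return the rate in polls per second.
    static double measureRate(int polls) {
        for (int i = 0; i < polls; i++) {
            agent.put("oid." + i, (long) i);        // seed the simulated agent
        }
        long start = System.nanoTime();
        long collected = 0;
        for (int i = 0; i < polls; i++) {
            collected += get("oid." + i);           // the collection step being timed
        }
        double elapsedSec = (System.nanoTime() - start) / 1e9;
        return polls / elapsedSec;
    }

    public static void main(String[] args) {
        System.out.printf("%.0f polls/sec%n", measureRate(100_000));
    }
}
```

In a real test, the timed step would be the actual poll against the simulated agent and the write into the results table, so the measured rate reflects the full collection path.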
Measuring Processing Rates of Network Notification
Processing rates of network notifications are measured by sending messages from an external agent to the appropriate port on the BE server. Programs that generate the network notifications at a given rate can be used for this purpose. The time taken for these notifications to get converted into an Event is recorded, from which you can derive the number of notifications processed per second.
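The shape of such a measurement can be sketched with a producer/consumer pair. The notification strings and the conversion step below are hypothetical stand-ins for real traps and Event objects, not Web NMS classes.

```java
// Sketch: measure notification-to-event processing rate. A sender thread
// role is played by the main thread; a consumer thread performs the
// (stand-in) conversion of each notification into an "event".
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class NotificationRate {

    // Push 'count' notifications through a converter thread and return
    // the processing rate in notifications per second.
    static double measure(int count) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        long start = System.nanoTime();

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < count; i++) {
                    String notification = queue.take();
                    // Stand-in for the notification-to-Event conversion step.
                    String event = "Event[" + notification + "]";
                }
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();

        for (int i = 0; i < count; i++) {
            queue.put("trap-" + i);                 // the "external agent" sending
        }
        consumer.join();

        double elapsedSec = (System.nanoTime() - start) / 1e9;
        return count / elapsedSec;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("%.0f notifications/sec%n", measure(50_000));
    }
}
```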
Measuring the Discovery Rate
Simulate the devices to be discovered and configure Web NMS to discover them. The time taken by Web NMS to discover these devices and store the ManagedObjects in the database is measured; this gives the discovery rate.
Measuring the Failover Time
When the "High Availability" deployment model is chosen, you need to measure the time taken by the stand-by server to take over the primary server. When Web NMS is deployed in mission-critical environments, this factor gains more importance. The Failover time can be defined as follows:
Failover time = (Heart Beat Interval Time - Time elapsed since last health check) + BE server startup time
For example, assume you have set the heart beat interval to 60 seconds. Once the health of the BE server is checked at 10.00 AM, it will be checked again at 10.01 AM. If the BE server fails at 10.00.40 AM, the BE failover time will be equal to "20 seconds + BE server startup time".
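The formula and the example above can be expressed as a small calculation. The 90-second startup time used here is an assumed example figure, not a Web NMS value.

```java
// Sketch: compute the failover time from the formula
//   failover = (heartbeat interval - time since last health check)
//              + BE server startup time
// with all values in seconds.
public class FailoverTime {

    static int failoverSeconds(int heartbeatInterval, int sinceLastCheck, int startupTime) {
        return (heartbeatInterval - sinceLastCheck) + startupTime;
    }

    public static void main(String[] args) {
        // Heartbeat 60 s, failure 40 s after the 10.00 AM health check,
        // with an assumed 90 s BE server startup time.
        System.out.println(failoverSeconds(60, 40, 90) + " seconds"); // prints "110 seconds"
    }
}
```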
Measuring the Client's Performance
Following are some of the measurements to be made to know the performance of the client:
Client Loading Time
Client loading time is the time taken by the client for its startup.
Latency Period
Latency period is the time taken for the information present in the server to reach the client. For example, you can measure the latency period for Alerts by measuring the difference between the time at which an Alert is created in the server and the time at which it reaches the client.
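The latency calculation reduces to a timestamp difference, as sketched below. The Alert class here is a hypothetical stand-in carrying only a creation timestamp, not the Web NMS Alert type.

```java
// Sketch: alert latency = time the alert reached the client
//                         - time it was created on the server.
public class AlertLatency {

    // Hypothetical stand-in for a server-side alert with a creation time.
    static class Alert {
        final long createdAtMillis;
        Alert(long createdAtMillis) { this.createdAtMillis = createdAtMillis; }
    }

    static long latencyMillis(Alert alert, long receivedAtMillis) {
        return receivedAtMillis - alert.createdAtMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        Alert alert = new Alert(System.currentTimeMillis()); // created on the "server"
        Thread.sleep(50);                                    // simulated network delay
        long latency = latencyMillis(alert, System.currentTimeMillis());
        System.out.println("Alert latency: " + latency + " ms");
    }
}
```

In practice this requires the server and client clocks to be synchronized, or the measurement to be taken on a single machine, for the difference to be meaningful.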
Request Processing Time
The total time taken by the server to handle a single request from the client is called "Request Processing Time".
Testing for Out of Memory
To test for out-of-memory problems, perform stress testing on Web NMS with heavy data collection and notification processing rates. The system should run without hitting out-of-memory errors.
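While such a stress run is in progress, it helps to log heap usage periodically so an upward trend is visible before an OutOfMemoryError occurs. A minimal sketch using the standard Runtime API, with an allocation loop standing in for the heavy collection and notification load:

```java
// Sketch: track used heap during a stress loop via Runtime. The byte[]
// allocations below are a stand-in for heavy data collection and
// notification processing load.
import java.util.ArrayList;
import java.util.List;

public class HeapWatch {

    // Currently used heap, in megabytes.
    static long usedHeapMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        List<byte[]> load = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            load.add(new byte[1024 * 1024]);        // allocate 1 MB per iteration
            if (i % 25 == 0) {
                System.out.println("iteration " + i + ": used heap "
                        + usedHeapMb() + " MB");
            }
        }
    }
}
```

A steadily growing used-heap figure that never falls back after garbage collection is the usual early sign of the out-of-memory issues this test is meant to catch.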
Points to Ponder
To obtain accurate values, try to simulate the actual deployment environment, where a large number of users connect to the server.
Try to use the same hardware configuration that is suggested for the deployment site.