

Optimizory Technologies shares this performance data with users as a reference, to indicate the level of performance RMsis is capable of achieving on a specific platform. We provide no warranties with respect to these performance figures; however, by running RMsis on a server-class machine with a high-end database server, users should be able to achieve higher performance levels.

Since performance depends on many site-specific factors, users are advised to base their decisions on system performance at their own sites.

Note: These characteristics apply to RMsis 1.8.0-r260; there may be deviations in other versions of RMsis.





Many customers have reported performance issues with RMsis, and we have done a considerable amount of optimization in the RMsis 1.8.x series. Here is a report, focused on the functions where performance issues were reported, on the level of performance that can be expected with this version of RMsis.

We realize that this performance level can be further improved, and we will continue to raise the bar in future versions.

Executive Summary

Performance has been a primary focus area for us since early 2013, when we started to look at performance bottlenecks, optimization opportunities, and design alternatives to improve performance. After a series of experiments with technologies and design alternatives, RMsis 1.8.0 is finally ready to support large projects and a large number of users.

In the current release, the performance of RMsis 1.8.0 is primarily constrained by operations which

  • are synchronized by locks
  • and have long critical regions

The synchronization happens at the project level; for example, two concurrent requests must not simultaneously modify the Requirement hierarchy in a single project. This has the following implications:

  • Projects with a large number of Requirements will become a bottleneck in system performance and will impact the users of those projects.
  • An RMsis instance with a large number of projects, each containing fewer Requirements, will perform better.

Based on our performance tests on RMsis 1.8.0, we would classify a project with 25,000 - 30,000 Requirements as large. This looks adequate, since most of our users' projects have fewer than 5,000 Requirements.

Based on our assumption that a user will perform an operation every 50 seconds, we think that an RMsis instance should be able to easily support 1000+ users. Please take a look at the performance characteristics in the following sections.
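The arithmetic behind that estimate can be checked directly. The 2-second average response time below is an assumed figure for illustration, not a measured RMsis value:

```python
# Each user performs one operation every 50 seconds, so N users
# generate N / 50 requests per second in aggregate.
users = 1000
think_time_s = 50
arrival_rate = users / think_time_s          # requests per second
print(arrival_rate)                          # 20.0

# By Little's law (L = lambda * W), with an assumed 2 s average
# response time, roughly 40 requests would be in flight at once,
# within the concurrency levels exercised in the tests below.
avg_response_s = 2.0
in_flight = arrival_rate * avg_response_s
print(in_flight)                             # 40.0
```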

If the system falls short of your expectations on the performance dimension, we will be eagerly waiting for your feedback at

Test Details

Test Details describe the objectives, assumptions, design and environment of the tests resulting in the performance characteristics shared below.


Improvement in rendering time on Browser with 1.8.0

This graph compares the average display time on the browser (server response + network + rendering) of RMsis 1.8.0 versus RMsis 1.7.8, after a request is made by the user, for a project containing 25,000 entities of each type (Requirements, Test Cases, Issues).

This comparison clearly shows that RMsis 1.7.8 poses serious usability challenges for large projects, and that RMsis 1.8.0 offers significant improvements for the same data set.

It may also be noted that

  • The Planned Requirements and Traceability tables are now implemented as scroll (large data) tables, so an increase in the number of Requirements in the project will not significantly impact rendering time.
  • All the other tables are paged tables, capable of displaying a maximum of 1,000 rows in one table. Hence the rendering time is also not expected to increase significantly with data size.
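The paged-table behaviour above can be sketched in a few lines: the client only ever fetches and renders one page (capped at 1,000 rows, per the limit mentioned above), so render cost tracks the page size rather than the project size. The function name is illustrative, not an RMsis API.

```python
PAGE_SIZE = 1000  # per the 1,000-row page limit described above

def fetch_page(rows, page_number):
    """Return only the rows for one page; render cost tracks page size."""
    start = page_number * PAGE_SIZE
    return rows[start:start + PAGE_SIZE]

all_rows = list(range(25_000))  # a large project
print(len(fetch_page(all_rows, 0)))   # 1000
print(len(fetch_page(all_rows, 24)))  # 1000 (the last full page)
```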





Performance with Concurrency

Performance for mix of concurrent requests - peak load

A few comments on the graph below:

  • This graph shows the average read response time for a mix of concurrent requests at an instant.
  • The idea is to determine how most users will experience system responsiveness.
  • At the maximum limit, all requests are served successfully without any timeout errors. We perceive that this limit can be pushed further if users are willing to accept response times of 15-20 seconds.



Functional performance with increasing concurrency level

The graphs in the following two subsections show the average response time for each function at an instant with increasing concurrency level. The maximum limit indicates the level at which all requests are successfully handled without any timeout errors. Two sets of graphs are plotted for

  1. Long running operations
  2. Lightweight operations

Performance of Long Running Operations

All the operations with slow responses are grouped together in this graph. Typically, these are operations which

  • are synchronized by locks
  • and have long critical regions

Some key observations are enumerated below

  • The worst-performing operations, in decreasing order of cost, are
    • Create / Delete Test Case, which requires the order of all test cases to be modified
    • Create / Delete Requirement, which requires the order / hierarchy of all Requirements to be modified and all currently referenced objects to be updated
    • Indent / Outdent operations, which require modification of the Requirement hierarchy
  • Our opinion on the impact of the responsiveness of Create / Delete operations:
    • Users tend to create new entities at the bottom of the list, so the delay will be insignificant in that case.
    • Moreover, this is the responsiveness for a large project containing approximately 25,000 entities of each type. Such projects are very rare.
    • Overall, we think such situations will be rare and most users will not be impacted by this dimension.
  • Our opinion on the impact of the responsiveness of Indent / Outdent operations:
    • First of all, the impact will be significant only in large projects.
    • Users will do well to avoid bulk Indent / Outdent operations, or to select only a few items at a time.
    • Overall, we think such situations will be rare and most users will not be impacted by this dimension.


Performance of Light Weight Operations

In the graph below, we observe that the worst-case response time is well under 15 seconds, even at a concurrency level of 50. So we perceive that users should find the system fairly responsive for most (90%) of their operations.





Functional Performance vs. Project Size

These test cases show the degradation of performance with increasing data size in a project. The details are available at Functional Performance vs. Project Size


Memory Bottlenecks

RMsis 1.8.0 was tested for memory limits, and the observations below indicate the extent of memory required for a typical instance.

The following table indicates that

  • 4 GB is likely to meet the memory requirements for an instance containing large projects (a maximum of ~25,000 Requirements in a single project),
  • but a larger allocation of 6-8 GB is desirable to handle extreme cases.
Test Case | JVM with 1 GB | JVM with 2 GB | JVM with 4 GB | JVM with 6 GB
25,000 Requirements in a single project; generate PDF for 1,000 Requirements | Failed | Passed | Passed | Passed
25,000 Requirements in a single project; generate PDF for all | Failed | Passed | Passed | Passed
CSV export of Traceability with 25,000 Requirements, Issues & Test Cases | Failed | Passed | Passed | Passed
Custom Reports with 25,000 Requirements, 25,000 Test Cases, 25,000 Issues | Failed | Failed | Passed | Passed
Custom Reports with 50,000 Requirements, 50,000 Test Cases, 50,000 Issues | Failed | Failed | Failed | Passed

Note: The table above is only an indicator, for a typical data set on a specific instance. Observations for another instance may differ (based on the number of custom fields, relationships, number of issues, etc.).
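The heap sizes in the table correspond to the JVM maximum heap setting (-Xmx). As one illustrative way to allocate 4 GB on a Tomcat-based JIRA installation (the exact file and variable names depend on your installation; setenv.sh in the Tomcat bin directory is a common location):

```shell
# Illustrative sketch only: raise the maximum JVM heap to 4 GB.
# Typically set in <jira-install>/bin/setenv.sh for Tomcat-based installs.
JVM_MAXIMUM_MEMORY="4096m"
JAVA_OPTS="${JAVA_OPTS} -Xmx${JVM_MAXIMUM_MEMORY}"
export JAVA_OPTS
```

Restart the application server after changing the heap setting so the new limit takes effect.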

