WebSpellChecker.net Wiki


Results of WebSpellChecker (WSC) Load Testing


Test Cases

The following test cases were performed:

Test Case 1: Loading dependence on the number of concurrent users


Steps:

  1. A user opens a page with WSC and clicks the WSC button for a random English text (size from 20 to 2,000 words, varying HTML markup complexity, 4% to 90% misspellings).
  2. The SpellCheck tab opens, and several POST HTTP requests are sent to the SpellCheck engine.
  3. The user performs the “Change to” command, which sends a POST HTTP request to the SpellCheck engine.
  4. The user opens the Grammar tab and performs “Change to”, which invokes several POST HTTP requests to the Grammar engine.
  5. The user opens the Thesaurus tab and performs “Change to”, which invokes several POST HTTP requests to the Thesaurus engine.
  6. The user performs the “Finish Checking” command, which closes the session.

The test case was performed for the following numbers of simultaneous users:

  • 100 users
  • 200 users
  • 500 users
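The session flow above can be sketched as a minimal load driver. This is an illustrative sketch only: `run_session` is a stub that sleeps instead of issuing the real POST requests described in the steps (the actual tests used Apache JMeter), and the harness simply averages session durations per concurrency level.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_session(user_id: int) -> float:
    """Simulate one user session from Test Case 1 and return its duration.

    A real driver would issue the POST requests described in the steps
    (SpellCheck, "Change to", Grammar, Thesaurus, "Finish Checking");
    here a short random sleep stands in for the server round-trips.
    """
    start = time.perf_counter()
    # Placeholder for: POST text to the SpellCheck engine, apply "Change to",
    # then repeat for the Grammar and Thesaurus engines, then finish checking.
    time.sleep(random.uniform(0.001, 0.005))
    return time.perf_counter() - start

def run_load_test(concurrent_users: int) -> float:
    """Run `concurrent_users` sessions in parallel; return mean duration in ms."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(run_session, range(concurrent_users)))
    return statistics.mean(durations) * 1000

for users in (100, 200, 500):
    print(f"{users} users: {run_load_test(users):.1f} ms mean session time")
```

In the real tests this role was played by JMeter thread groups; the sketch only shows the shape of the measurement (one timed session per simulated user, averaged per concurrency level).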

Test Case 2: Loading dependence on text size for SpellCheck, Grammar and Thesaurus engines


Steps:

  1. 200 concurrent users open a page with WSC and click the WSC button for an English text with 4% misspellings.
  2. The corresponding tab (SpellCheck, Grammar, or Thesaurus) opens, and several POST HTTP requests are sent to the server.
  3. The users perform the “Finish Checking” command, which closes the session.

The test case was performed for the following options:

  • SpellCheck engine, 20-word text
  • SpellCheck engine, 2,700-word text
  • Grammar engine, 20-word text
  • Grammar engine, 2,700-word text
  • Thesaurus engine, 20-word text
  • Thesaurus engine, 2,700-word text

Test Case 3: Loading dependence on the number of misspellings for SpellCheck, Grammar and Thesaurus engines


Steps:

  1. 200 concurrent users open a page with WSC and click the WSC button for a 600-word English text.
  2. The corresponding tab (SpellCheck, Grammar, or Thesaurus) opens, and several POST HTTP requests are sent to the server.
  3. The users perform the “Finish Checking” command, which closes the session.

The test case was performed for the following options:

  • SpellCheck engine, 4% of misspellings in the text
  • SpellCheck engine, 50% of misspellings in the text
  • SpellCheck engine, 90% of misspellings in the text
  • Grammar engine, 4% of misspellings in the text
  • Grammar engine, 50% of misspellings in the text
  • Thesaurus engine, 4% of misspellings in the text
  • Thesaurus engine, 90% of misspellings in the text
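Test texts with a controlled misspelling rate, as used in the options above, can be produced with a short generator. A minimal sketch, assuming small placeholder vocabularies (the real tests used random English prose):

```python
import random

# Placeholder vocabularies; the actual tests used random English texts.
CORRECT_WORDS = ["check", "spelling", "grammar", "engine", "server", "request"]
MISSPELLED_WORDS = ["chekc", "speling", "gramar", "enigne", "servr", "requset"]

def make_test_text(num_words: int, misspell_rate: float, seed: int = 0) -> str:
    """Build a text of `num_words` words where roughly `misspell_rate`
    of them are misspelled (e.g. 0.04 for 4%, 0.90 for 90%)."""
    rng = random.Random(seed)
    num_bad = round(num_words * misspell_rate)
    words = [rng.choice(MISSPELLED_WORDS) for _ in range(num_bad)]
    words += [rng.choice(CORRECT_WORDS) for _ in range(num_words - num_bad)]
    rng.shuffle(words)
    return " ".join(words)

text = make_test_text(600, 0.04)
bad = sum(w in MISSPELLED_WORDS for w in text.split())
print(f"{len(text.split())} words, {bad} misspelled ({bad / 600:.0%})")
```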

Test Case 4: Loading dependence on language (size of the dictionary)


Steps:

  1. 200 concurrent users open a page with WSC and click the WSC button for a random text (from 20 to 2,700 words, 4% to 90% misspellings).
  2. The SpellCheck tab opens, and several POST HTTP requests are sent to the server.
  3. The users perform the “Finish Checking” command, which closes the session.

The test case was performed for the following options:

  • American English language (size of the dictionary is ~300KB, ~100,000 words)
  • Spanish language (size of the dictionary is ~700KB, ~300,000 words)

Test Bed configuration


Windows Server 2012 and Ubuntu Server 14.04 operating systems have been chosen as two main test beds.

                   Windows test bed             Linux test bed
Amazon Instance    t2.medium                    t2.medium
OS type            Windows Server 2012          Ubuntu Server 14.04
Web Server         IIS 8.5                      Apache 2.4.7
CPU                Intel Xeon Core v2 2.5 GHz   Intel Xeon Core v2 2.5 GHz
RAM                4 GB                         4 GB
HDD                30 GB                        8 GB

Load Testing Tool: Apache JMeter with PerfMon Metrics Collector and Server Agent (standard set of basic plugins)

Network topology: AWS EC2 Subnet

The machine with the application and the machine with the JMeter tool are in the same subnetwork, with no nodes between them.

NOTE: Performance testing results will differ for different server configurations. WSC performance depends on the following factors:

  • Hardware configuration (CPU, RAM)
  • OS and Web Server types (Windows with IIS, Linux distributions with Apache)
  • Network topology (number of nodes between an application and AppServer, network structure, etc) and bandwidth
  • Spelling Check language (size of the dictionary)
  • Engine (SpellCheck, Grammar, Thesaurus)
  • Text complexity

Results


Test Case                       Test Data              Response Time (ms, Windows)  Response Time (ms, Linux)
By number of concurrent users   100 concurrent users   305                          34
                                200 concurrent users   1,071                        125
                                500 concurrent users   3,580                        907

SpellCheck Engine
By text size                    20-word text           694
                                2,700-word text        4,863
By number of misspellings       4% misspellings        1,930
                                50% misspellings       17,000
                                90% misspellings       62,832
By language                     American English       1,200
                                Spanish                3,500

Grammar Engine
By text size                    20-word text           762
                                2,700-word text        39,715
By number of grammar errors     4% errors              9,920
                                50% errors             9,993

Thesaurus Engine
By text size                    20-word text           762
                                2,700-word text        39,715

Windows Server 2012 + IIS 8.5


100 concurrent users
Response time: 1,809 ms

200 concurrent users
Response time: 3,334 ms

500 concurrent users
Response time: 8,695 ms

Conclusions: The test showed that CPU utilization rises from 60% to 100% as the number of concurrent users grows. Once CPU utilization hits its 100% ceiling, the average response time grows roughly in direct proportion to the user count (from ~1,800 ms for 100 users up to ~8,700 ms for 500 users). It can be concluded that request response time is the main metric of WSC load.
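The direct-ratio observation can be sanity-checked against the Windows figures reported above:

```python
# Measured on Windows Server 2012 + IIS 8.5 (response times from this test).
measured = {100: 1809, 200: 3334, 500: 8695}

# If response time grows in direct ratio with user count once the CPU
# saturates, the ms-per-user figure should stay roughly constant.
for users, ms in measured.items():
    print(f"{users} users: {ms} ms -> {ms / users:.1f} ms per user")

# Ratio of 500-user to 100-user response time vs. the ratio of user counts:
print(f"time ratio {measured[500] / measured[100]:.2f} vs user ratio {500 / 100:.2f}")
```

The per-user figures (roughly 17-18 ms per user at every level) support the direct-ratio conclusion.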

Ubuntu Server 14.04 + Apache 2.4


100 concurrent users
Response time: 1,529 ms

200 concurrent users
Response time: 3,188 ms

500 concurrent users
Response time: 8,039 ms

Conclusion: This test confirmed the previous one and showed that the Ubuntu + Apache configuration performs slightly better than Windows + IIS on the same hardware (~1,800 ms for 100 users on IIS vs. ~1,500 ms on Apache). The difference is small because almost all the work in WSC is done on the server side (AppServer); the web server itself (Apache or IIS) has no significant impact on response time.

SpellCheck Engine


600-word text with 4% misspellings
200 concurrent users
Response time: 1,930 ms

600-word text with 50% misspellings
200 concurrent users
Response time: 17,000 ms

600-word text with 90% misspellings
200 concurrent users
Response time: 62,832 ms

Conclusions: The test showed that average response time grows with the number of incorrect words, from ~2,000 ms at 4% misspellings to ~63,000 ms at 90%, because AppServer needs more time to process the misspellings in the text. The “get all suggestions for the word” requests produce the CPU utilization peaks on the graphs; these requests are the most hardware- and time-consuming.

20-word text with 4% misspellings
200 concurrent users
Response time: 694 ms

2,700-word text with 4% misspellings
200 concurrent users
Response time: 4,863 ms

Conclusions: This test demonstrated that the average response time of the SpellCheck engine grows with text size (from ~700 ms for 20 words to ~4,900 ms for 2,700 words).

Grammar Engine


600-word text with 4% grammar errors
200 concurrent users
Response time: 9,920 ms

600-word text with 50% grammar errors
200 concurrent users
Response time: 9,993 ms

Conclusions: The test showed that the Grammar engine is structured so that the number of grammar problems does not affect CPU utilization or response time: they stay at 40-50% and about 10,000 ms respectively, regardless of the number of grammar problems. Unlike the SpellCheck engine, the Grammar engine does not spend resources generating suggestions: the set of grammar problems is limited, and each problem already has a complete list of suggestions.

20-word text
200 concurrent users
Response time: 762 ms

2,700-word text
200 concurrent users
Response time: 39,715 ms

Conclusions: This test demonstrated that the average response time of the Grammar engine grows with text size, from ~800 ms (20-word text) to ~40,000 ms (2,700-word text).

Thesaurus Engine


20-word text
200 concurrent users
Response time: 762 ms

2,700-word text
200 concurrent users
Response time: 39,715 ms

Conclusions: This test demonstrated that the average response time of the Thesaurus engine grows with text size, from ~800 ms for 20 words to ~40,000 ms for 2,700 words. A test varying the number of thesaurus problems was not run, because that number is the same for every text: the Thesaurus engine simply provides synonyms.

English Dictionary


Random text from 20 to 2,000 words
200 concurrent users
Response time: 1,200 ms

Spanish Dictionary


Random text from 20 to 2,000 words
200 concurrent users
Response time: 3,500 ms

Conclusion: Dictionary size (the Spanish dictionary is roughly twice the size of the American English one) affects the system's average response time, which increases from ~1,200 ms to ~3,500 ms.

General conclusions

  1. Response time is the main metric of WSC load testing. After CPU utilization reaches 100%, the system keeps working, but response time grows.
  2. The graphs showed that memory usage (RAM) did not change significantly during any of the tests.
  3. The Ubuntu + Apache configuration performs slightly better than Windows + IIS (response time of 1,800 ms for 100 users on IIS vs. 1,500 ms on Apache).
  4. Average response time increases with:
    • the number of concurrent users (100 - 500)
    • text size, for all engines (20 - 2,700 words)
    • the number of misspelled words in the text, for the SpellCheck engine (4% - 90%)
    • dictionary size (100,000 - 300,000 words)
