
Tuning and optimizing a J2EE web application for high load

For the last few months we have been developing a portal for the 3rd largest bank in Europe. The bank's unique visitors have grown to more than 1 million a day. The main non-functional requirements of the project are high availability and high throughput. One of the main features of the portal is letting users customize their pages with widgets and providing different services for a targeted audience. After long discussion and analysis, the bank decided to use a Java-based engine to build the portal, and we ended up with the following stack:
1) Java 1.7_47
2) IBM WebSphere 8.5 as application server
3) Nginx as web server
4) Alteon as load balancer
5) Oracle 11gR2 as database
6) Solr for content search
The main challenge for us was supporting legacy browsers such as IE8, Opera 12, etc., and serving one portal for all devices (desktop, smartphone, and tablet PC). The Java-based portal engine generated a lot of JavaScript, which did not give us very good performance. For these reasons we decided to use a hybrid method of page rendering (server side (JSP) + client side (JavaScript)) and REST services for business functionality. We avoided implementing any business logic in the database, because an RDBMS is not well suited for scaling, and we wanted to minimize network round trips. Here are our main design decisions:
1) Implement business logic through REST services in the application server
2) Serve all static content from the web server
3) Cache as much as possible in every layer
4) Use a hybrid method of page rendering (server side (JSP) + client side (JavaScript))
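The caching decision above can be sketched as a minimal in-process cache for REST responses. In the real portal this role is played by a distributed EhCache; the class and method names below are purely illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of caching REST responses in the application layer.
// Production used distributed EhCache; this map-based version only
// illustrates the "compute once, serve many" idea.
public class RestResponseCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Returns the cached response body for the given request key,
    // computing it at most once per key via the supplied loader.
    public String get(String requestKey, Function<String, String> loader) {
        return cache.computeIfAbsent(requestKey, loader);
    }

    public static void main(String[] args) {
        RestResponseCache cache = new RestResponseCache();
        // First call invokes the loader; second call is served from cache.
        String first = cache.get("/api/widgets", k -> "{\"widgets\":[]}");
        String second = cache.get("/api/widgets", k -> {
            throw new IllegalStateException("loader should not run twice");
        });
        System.out.println(first.equals(second)); // prints "true"
    }
}
```

A real implementation would also need expiration and invalidation, which is exactly what a dedicated cache like EhCache provides out of the box.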
Now it is time to describe briefly what we have done in every layer. Most of these steps are well known, and I would like to summarize them in one place:

1) Web server optimization (nginx):
- Gzip compression, level 6, for XML, JSON, CSS, HTML, etc.
- Cache-Control HTTP header for 3 days
- Cache-Control for JavaScript
- ETag
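The nginx items above boil down to a few directives. This is an illustrative snippet, not our exact production config; the location path and the `gzip_types` list are examples:

```nginx
# Illustrative nginx settings for the list above.
gzip            on;
gzip_comp_level 6;
gzip_types      text/css application/json application/xml application/javascript;

location /static/ {
    # Emits a Cache-Control header with a 3-day max-age
    expires 3d;
    # ETag generation (on by default in modern nginx)
    etag on;
}
```

Note that `text/html` is always compressed by nginx when gzip is on, so it does not need to appear in `gzip_types`.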
2) Client side optimization:
- Minify JavaScript and CSS
- Minimize HTTP requests from browser to server. At the beginning we had more than 150 HTTP requests from browser to server; it should be remembered that a modern browser can make only 7-8 concurrent requests to one domain
- Optimize every image (lossless)
- Use CSS sprites
- Aggregate CSS and JS into a few files
- Minify HTML
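The CSS sprite point deserves a tiny illustration: many small icons are combined into one image, so N icon requests become a single request. The file name and pixel offsets below are made up:

```css
/* One combined image replaces many separate icon requests.
   sprite.png and the offsets are illustrative only. */
.icon {
    background-image: url(/static/sprite.png);
    background-repeat: no-repeat;
    width: 16px;
    height: 16px;
}
.icon-mail  { background-position:   0    0; }
.icon-user  { background-position: -16px  0; }
.icon-print { background-position: -32px  0; }
```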
3) Server side (backend) optimization:
- Cache every REST response
- Use distributed EhCache
- Hibernate + MyBatis second-level cache
- Optimize the database connection pool in IBM WAS
- Optimize the heap size and GC policy for the JVM in IBM WAS
- Optimize the thread pool size in IBM WAS
- Optimize session management in IBM WAS
- A scheduler to drop long-running and hung SQL connections in IBM WAS (there is a bug in the IBM WAS connection pool)
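As a sketch of the JVM tuning on IBM WAS (which runs the IBM J9 JVM), the generic JVM arguments look roughly like this. The heap sizes are illustrative examples, not our production values:

```shell
# Illustrative IBM J9 JVM arguments for WebSphere (set via the
# server's "Generic JVM arguments"); the sizes are examples only.
-Xms4g -Xmx4g          # fixed heap size to avoid resize pauses
-Xmn1g                 # nursery size for the generational collector
-Xgcpolicy:gencon      # generational concurrent GC policy
-verbose:gc            # log GC activity for tuning
```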
4) Database optimization
- Use the Oracle result cache for dictionary (reference) data
- Move dictionary tables to the Oracle KEEP buffer pool
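Both database items can be sketched in SQL; the table and column names here are hypothetical:

```sql
-- Pin a hot dictionary (reference) table in the KEEP buffer pool;
-- the table name is illustrative.
ALTER TABLE ref_currency STORAGE (BUFFER_POOL KEEP);

-- Ask Oracle 11g to cache the result set of a dictionary lookup.
SELECT /*+ RESULT_CACHE */ code, name
  FROM ref_currency;
```

The KEEP pool keeps the table's blocks resident in the buffer cache, while the result cache hint lets repeated identical queries skip execution entirely.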

If you like this article, you would also like the book
