
Continuous Integration (CI): a review

A few years ago (2011), at the JavaOne conference in Moscow, I gave a presentation about CI. Since then, a lot has changed in this field. Over the years many tools, plugins, and frameworks have been released to help DevOps engineers solve CI problems, and today CI is a vital part of the development life cycle. With the aggressive use of cloud infrastructure and the horizontal scaling of almost every application, most applications are now deployed across many servers (virtual and dedicated). Moreover, most systems are heterogeneous and need extra care (scripts) to deploy successfully. The development environment is usually very different from the production environment. A common workflow from development to production looks like this:
DEV environment -> Test Environment -> UAT environment -> Production environment.
Every environment has its own characteristics and configuration. For example, most developers use Jetty or embedded Tomcat for fast local development, while in production one often meets IBM WebSphere or WebLogic application servers, and the deployment process on Jetty and WebSphere is very different. Production environments also frequently use DR (disaster recovery). A typical deployment workflow in a production environment looks as follows:
1) Stop part of the application servers
2) Replicate sessions from the stopped servers
3) Update the database with incremental scripts
4) Deploy the new artifacts to the application servers
5) Update the configuration files
6) Start the application servers
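The steps above can be sketched as an Ansible playbook. Everything below (host group, service name, file paths, the utility host for migrations) is an assumption for illustration, not the original playbook:

```yaml
# deploy.yml -- an illustrative rolling deployment, not the real playbook.
- hosts: app_servers
  serial: 2                      # roll through the cluster a few nodes at a time
  tasks:
    - name: Stop the application server
      service: name=appserver state=stopped

    - name: Apply incremental database scripts (run once, from a utility host)
      command: flyway migrate
      run_once: true
      delegate_to: db_utility

    - name: Deploy the new artifact
      copy: src=dist/app.war dest=/opt/appserver/deploy/app.war

    - name: Render the environment-specific configuration
      template: src=templates/mq.properties.j2 dest=/opt/appserver/conf/mq.properties

    - name: Start the application server
      service: name=appserver state=started
```

The `serial` keyword is what keeps part of the cluster serving traffic while the other part is updated, which matches steps 1 and 6 of the workflow.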

There are a lot of open-source tools to implement the above workflow, such as:
1) Puppet
2) Chef
3) Ansible, etc.

Ansible is one of the easiest and simplest tools for installing, deploying, and preparing environments. We have the following DevOps tools in our portfolio:
1) Jenkins
2) Flyway DB
3) Ansible

A few words about Flyway DB: it is a database migration tool for doing incremental updates of database objects. It supports plain ANSI SQL scripts for any database, which makes it very convenient to debug or review the SQL scripts.
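Flyway picks up migrations by a file-naming convention: a version number, a double underscore, and a description. As an illustration (the file name, table, and columns below are invented for this example):

```sql
-- V2__add_mq_audit_table.sql  (Flyway convention: V<version>__<description>.sql)
-- Illustrative incremental migration; Flyway records it in its schema history
-- table and never applies the same version twice.
CREATE TABLE mq_audit (
    id          BIGINT       NOT NULL PRIMARY KEY,
    queue_name  VARCHAR(64)  NOT NULL,
    received_at TIMESTAMP    NOT NULL
);
```

Because each change lives in its own versioned SQL file, reviewing or debugging a particular database change is just a matter of opening that file.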
Ansible is a simple IT automation platform that deploys over SSH. It is very easy to install and configure, and it works through SSH with no agent installed on the remote system. Ansible has a very big community, and a lot of plugins have already been developed for automation. With these three tools we have the following approach:

Jenkins for building the project
Flyway for database migration
Ansible for deploying the application to the different environments and building the installation package for production

At meetups and conferences I am often asked how we manage and render different configuration files for the different systems such as DEV and UAT. We use a very simple templating approach to solve this problem: for every configuration file we have a template, such as the following:
# MQ Configuration
mq.port=@mq.port@
mq.host=@mq.host@
mq.channel=@mq.channel@
mq.queue.manager=@mq.queue.manager@
mq.ccsid=@mq.ccsid@
mq.user=@mq.user@
mq.password=@mq.password@
mq.pool.size=@mq.pool.size@
and for every environment we have the values defined in an XML file: dev.xml for the DEV environment, uat.xml for the UAT environment, and so on. Each XML file contains all the values, such as:
<property name="mq.gf.to.queue" value="MNP2GF"/>
<property name="mq.gf.from.queue" value="GF2MNP"/>
<property name="mq.port" value="1234"/>
<property name="mq.host" value="192.168.157.227"/>
<property name="mq.channel" value="SYSTEM.DEF.SVRCONN"/>
<property name="mq.queue.manager" value="venus.queue.manager"/>
<property name="mq.ccsid" value="866"/>
<property name="mq.user" value="mqm"/>
<property name="mq.password" value="mqm01"/>
<property name="mq.pool.size" value="10"/>

After every successful build, Jenkins runs a simple Python script that generates all the configuration files from the templates. This way we can deploy the application to the different environments and build the distributable package.
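A minimal sketch of such a rendering script, assuming the `@key@` placeholder syntax shown above. The wrapper `<environment>` root element and the inlined file contents are assumptions for this example; the real script would read dev.xml or uat.xml and the template from disk:

```python
import re
import xml.etree.ElementTree as ET

def load_properties(xml_text):
    """Collect <property name="..." value="..."/> entries into a dict."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): p.get("value") for p in root.iter("property")}

def render(template_text, values):
    """Replace each @key@ placeholder with the value defined for this
    environment; unknown placeholders are left untouched so they are
    easy to spot in the generated file."""
    return re.sub(r"@([\w.]+)@",
                  lambda m: values.get(m.group(1), m.group(0)),
                  template_text)

# Values as they might appear in dev.xml (root element added for parsing):
dev_xml = """<environment>
  <property name="mq.port" value="1234"/>
  <property name="mq.host" value="192.168.157.227"/>
</environment>"""

template = "mq.port=@mq.port@\nmq.host=@mq.host@"
print(render(template, load_properties(dev_xml)))
# -> mq.port=1234
#    mq.host=192.168.157.227
```

Running the same script with uat.xml instead of dev.xml produces the UAT configuration from the identical template, which is the whole point of the approach.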
