
Ingest data from Oracle Database to Elasticsearch

In one of my earlier blog posts I described how to use Oracle Database change notification and its use cases. When you need full-text search or faceted counts over your data, you certainly need a modern Lucene-based search engine. Elasticsearch is one such engine that provides the functionality above. However, as of recent versions of ES, the Elasticsearch river is deprecated as a way of ingesting data into ES. Here you can get the whole history of deprecating rivers in ES. Anyway, for now we have two options to ingest or import data from any source into ES:
1) Implement or modify your DAO services so that they update the data in ES and in the database at the same time.
2) Polling: implement a job that polls the database periodically and updates the data in ES.
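The first option can be sketched as a dual-write DAO. This is a minimal illustration, not code from the repository: the class name, the `save` method, and the in-memory maps standing in for the JDBC layer and the ES client are all assumptions for the sake of the example.

```java
import java.util.HashMap;
import java.util.Map;

// Option 1 as a dual-write DAO: every save goes to the database and to the
// search index in the same call, so both stay in sync without polling.
// All names here are illustrative, not from the qrcn/es modules.
public class DualWriteDao {

    // Stand-ins for the real JDBC layer and the Elasticsearch client.
    private final Map<String, String> database = new HashMap<>();
    private final Map<String, String> searchIndex = new HashMap<>();

    public void save(String id, String json) {
        database.put(id, json);     // e.g. an INSERT/UPDATE via JDBC
        searchIndex.put(id, json);  // e.g. an index request to ES
    }

    public String fromDatabase(String id) { return database.get(id); }

    public String fromIndex(String id) { return searchIndex.get(id); }
}
```

The drawback, as noted below, is that every DAO that writes to the table must be changed this way, which is impossible for legacy or third-party code.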

The first approach is the best option to implement; however, if you have legacy DAO services or a 3rd-party application that you cannot change, it is not for you. Polling the database frequently over huge data sets can also hurt the performance of the database.
In this blog post I am going to describe an alternative way to ingest data from the database to ES. One of the prerequisites is that you have an Oracle Database with a version higher than 9.0. I have also published the whole code base on github.com to explore the capability.
Here is the data flow from Oracle to ES; it is very simple:
1) A registered listener gets the changes (RowId, ObjectId [~tableId]) for every commit in the DB.
2) The listener sends the changes (RowId, ObjectId) as XML to a message queue; we are using Apache Apollo.
3) A consumer of the queue collects the messages and queries the database for the table metadata and the result set by RowId.
4) The consumer builds the result set in JSON format and indexes it in ES.
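Steps 2 and 4 can be sketched with the two payloads involved: the XML change message put on the queue and the JSON document sent to ES. The element and field names below are illustrative assumptions; the real message format lives in the es-dto module.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the payloads in steps 2 and 4 of the data flow.
// XML element names and JSON field names are assumptions, not the es-dto format.
public class ChangeMessage {

    // Step 2: the listener wraps (RowId, ObjectId) in XML for the queue.
    public static String toXml(String rowId, long objectId) {
        return "<change><rowId>" + rowId + "</rowId>"
                + "<objectId>" + objectId + "</objectId></change>";
    }

    // Step 4: the consumer turns one row fetched by RowId into a JSON
    // document ready for indexing in ES (column name -> string value).
    public static String toJson(LinkedHashMap<String, String> row) {
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, String> e : row.entrySet()) {
            if (sb.length() > 1) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":\"")
              .append(e.getValue()).append('"');
        }
        return sb.append('}').toString();
    }
}
```

In the real consumer the column names and types come from the table metadata queried in step 3, and a proper JSON library would handle escaping; the string building here only shows the shape of the document.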

The GitHub repository contains three modules that implement the data flow:
[qrcn] - collects notifications from Oracle and sends them to an existing queue [apollo].
[es] - the consumer; collects messages from the queue and indexes them in Elasticsearch.
[es-dto] - common DTOs.
You can change your query notification in the QRCN module (file connection.properties):
querystring=select * from temp t where t.a = 'a1';select * from ATM_STATE
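Under the hood, registering queries like the ones above with Oracle's change notification API looks roughly like the sketch below. It uses the `oracle.jdbc` and `oracle.jdbc.dcn` classes from the Oracle JDBC 11g driver and needs a live database with the CHANGE NOTIFICATION privilege, so it is not runnable as-is; how the qrcn module wires this up exactly may differ.

```java
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.DatabaseChangeListener;
import oracle.jdbc.dcn.DatabaseChangeRegistration;
import oracle.jdbc.dcn.RowChangeDescription;
import oracle.jdbc.dcn.TableChangeDescription;

public class QueryNotificationSketch {

    // Registers one query from connection.properties for change notification.
    static void register(OracleConnection conn, String query) throws Exception {
        Properties props = new Properties();
        // Ask Oracle to include ROWIDs in the notification events (step 1).
        props.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");

        DatabaseChangeRegistration dcr =
                conn.registerDatabaseChangeNotification(props);
        dcr.addListener(new DatabaseChangeListener() {
            public void onDatabaseChangeNotification(DatabaseChangeEvent event) {
                for (TableChangeDescription table : event.getTableChangeDescription()) {
                    for (RowChangeDescription row : table.getRowChangeDescription()) {
                        // Step 2 of the data flow: ship (RowId, ObjectId)
                        // to the queue, e.g. via STOMP to Apache Apollo.
                        System.out.println(row.getRowid() + " in object "
                                + table.getObjectNumber());
                    }
                }
            }
        });

        // Associate the query with the registration.
        Statement stmt = conn.createStatement();
        ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
        ResultSet rs = stmt.executeQuery(query);
        while (rs.next()) { /* drain the result set */ }
        rs.close();
        stmt.close();
    }
}
```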
I also added ZooKeeper to the QRCN module to make it fault tolerant.
Anyway, at the moment the project has the following prerequisites:
1) Oracle JDBC 11g driver (needed to compile the project)
2) Apache ZooKeeper
3) Apache Apollo
4) Elasticsearch
Anyway, you can always make changes according to your requirements. Happy weekend.
