
Tip: SQL client for Apache Ignite cache


The Apache Ignite book — the SQL client configuration described here is covered in The Apache Ignite book. If it got you interested, check out the rest of the book for more helpful information.

Apache Ignite supports executing SQL queries against its caches, and the SQL syntax is ANSI-99 compliant. Therefore, you can run SQL queries against any cache from any SQL client that supports JDBC. This section is for those who feel more comfortable with SQL than with writing a bunch of code to retrieve data from the cache. Apache Ignite ships out of the box with a JDBC driver that lets you connect to Ignite caches and retrieve distributed data using standard SQL queries. The rest of this section describes how to connect a SQL IDE (Integrated Development Environment) to an Ignite cache and execute a few SQL queries to play with the data. A SQL IDE or SQL editor can simplify the development process and make you productive much more quickly.
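Before moving to a GUI client, it may help to see what the JDBC driver makes possible from plain Java. The sketch below is illustrative only: it assumes ignite-core is on the classpath, a server node is already running, and it uses the configuration-based JDBC driver with a client-mode Spring XML file (the file path and cache name are placeholders matching this chapter's setup, not a tested configuration).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: querying an Ignite cache through the configuration-based JDBC driver.
// Requires ignite-core on the classpath and a running Ignite node.
public class JdbcQueryExample {
    public static void main(String[] args) throws Exception {
        // Register the Ignite JDBC driver.
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        // URL points at a Spring config with clientMode=true; cache=testCache
        // selects the cache whose SQL schema we query. Path is a placeholder.
        String url = "jdbc:ignite:cfg://cache=testCache@file:///path/to/default-config-dbeaver.xml";

        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT name, age FROM Person")) {
            while (rs.next())
                System.out.println(rs.getString("name") + " " + rs.getInt("age"));
        }
    }
}
```

Any JDBC-capable tool (DBeaver, SQuirreL, DataGrip) does essentially the same thing under the hood, which is why the configuration steps later in this section are mostly about pointing the tool at the driver jar and the config file.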

Most database vendors have their own front-end IDE developed specifically for their database: Oracle has SQL Developer, Sybase has Interactive SQL, and so on. Unfortunately, Apache Ignite doesn't provide a SQL editor for working with Ignite caches; however, GridGain (the commercial version of Apache Ignite) provides a commercial GridGain Web Console application for connecting to an Ignite cluster and running SQL analytics on it. Since I work with multi-platform databases in my daily work, for the last couple of years I have been using DBeaver to work with different databases. A few words about DBeaver: it's an open-source, multi-platform database tool for developers, analysts, and database administrators. It supports a huge range of databases and also lets you connect to any database that provides a JDBC driver. You can also try the SQuirreL SQL client or JetBrains DataGrip to connect to an Ignite cluster; they all support JDBC.

Note that cache updates are not supported through SQL queries for now; you can only use SELECT queries.

How SQL/Text queries work in Ignite: it's interesting to know how a query is processed under the hood. Ignite uses three main approaches to process a SQL/Text query:

- In-memory Map-Reduce: if you execute a SQL query against a partitioned cache, Ignite under the hood splits the query into in-memory map queries and a single reduce query. The number of map queries depends on the size and number of partitions in the cluster. All map queries are executed on the data nodes of the participating caches, and their results are provided to the reducing node, which in turn runs the reduce query over these intermediate results. If you are not familiar with the Map-Reduce pattern, you can think of it as a Java fork/join process.
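To make the fork/join analogy concrete, here is a small, self-contained example using the JDK's own fork/join framework: the array stands in for partitioned cache data, each subtask plays the role of a map query over one partition slice, and the joins play the role of the reduce step. This is only an analogy for the pattern, not Ignite code.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Analogy for Ignite's distributed SQL: "map" subtasks run over slices
// of the data in parallel, then a "reduce" step merges the partial results.
public class MapReduceSketch {

    // Each SumTask is the analogue of one map query over a partition slice.
    static class SumTask extends RecursiveTask<Long> {
        private final long[] data;
        private final int from, to;

        SumTask(long[] data, int from, int to) {
            this.data = data;
            this.from = from;
            this.to = to;
        }

        @Override
        protected Long compute() {
            if (to - from <= 2) {                 // slice small enough: compute directly
                long s = 0;
                for (int i = from; i < to; i++)
                    s += data[i];
                return s;
            }
            int mid = (from + to) / 2;
            SumTask left = new SumTask(data, from, mid);
            SumTask right = new SumTask(data, mid, to);
            left.fork();                          // run one half in parallel ("map")
            return right.compute() + left.join(); // merge partial sums ("reduce")
        }
    }

    public static long sum(long[] data) {
        return new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
    }

    public static void main(String[] args) {
        // Ages from the Person table used later in this section.
        System.out.println(sum(new long[]{37, 2, 55, 5})); // prints 99
    }
}
```

In Ignite the "slices" are real cache partitions spread across the cluster, and the reduce query runs on the node that initiated the query.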

- H2 SQL engine: if you execute SQL queries against a replicated or local cache, Ignite knows that all the data is available locally and runs a simple local SQL query in the H2 database engine. Note that in a replicated cache, every node holds a replica of the data of the other nodes. H2 is a free database written in Java that can run in embedded mode. Depending on the configuration, every Ignite node can have an embedded H2 SQL engine.

- Lucene engine: in Apache Ignite, each node contains a local Lucene engine that stores an in-memory index referencing the local cache data. When a distributed full-text query is executed, each node performs the search in its local index via IndexSearcher and sends the results back to the client node, where they are aggregated.

Note that the Ignite cache itself does not contain the Lucene index; instead, Ignite provides GridLuceneDirectory, a memory-resident implementation for storing the Lucene index in memory. GridLuceneDirectory is very similar to Lucene's RAMDirectory.
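For readers unfamiliar with Lucene, the following sketch shows the RAMDirectory-style workflow that GridLuceneDirectory mimics: build an in-memory index for one field of a cache entry, then search it via IndexSearcher. It assumes lucene-core and lucene-analyzers-common on the classpath, and the API shown is the classic Lucene 5/6 style, which may differ in other versions; this is an illustration of the pattern, not Ignite's internal code.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.RAMDirectory;

public class LuceneSketch {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory(); // in-memory index, analogous to GridLuceneDirectory

        // Index one document, as if indexing a cache entry's text field.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new TextField("name", "Shamim", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Search the local index the way each Ignite node searches its own.
        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
        TopDocs hits = searcher.search(new TermQuery(new Term("name", "shamim")), 10);
        System.out.println(hits.totalHits); // number of matching documents
    }
}
```

In Ignite, this local search step runs on every node holding data, and only the per-node results travel back to the client for aggregation.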

To run SQL queries on caches, we already added a complete Java application (HelloIgniteSpring) in the installation chapter. You can run the application with the following command:

java -jar ./target/HelloIgniteSpring-runnable.jar

We are not going to detail all the concepts of Ignite cache queries here; we will take a detailed look at Ignite SQL queries in chapter four. For now, note that the HelloIgniteSpring application puts a few Person objects into a cache named testCache. The Person object has attributes such as name and age, as follows:


#   Name     Age
1   Shamim   37
2   Mishel   2
3   Scott    55
4   Tiger    5
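For reference, a cache value class must mark the fields it wants visible to Ignite's SQL engine. The sketch below shows what the Person class might look like; the @QuerySqlField annotation comes from ignite-core, but the exact class shipped with the chapter-installation project may differ in its fields and helpers:

```java
import java.io.Serializable;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

// Illustrative sketch of the cached value type: fields annotated with
// @QuerySqlField become queryable columns of the SQL table "Person".
public class Person implements Serializable {
    @QuerySqlField
    private String name;

    @QuerySqlField
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }
}
```

This is why the SQL queries later in this section can refer to a Person table with name and age columns even though the data lives in a key-value cache.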

After completing the configuration of the DBeaver SQL client, we will run a few SQL queries against the above objects. Now it's time to download DBeaver and complete its JDBC configuration.

Step 1:
Download the DBeaver Enterprise Edition (it's free but not an open-source product) for your operating system from the following URL:

http://dbeaver.jkiss.org/download/enterprise/

Step 2:
Install DBeaver. Please refer to the installation section of the DBeaver site if you encounter any problems during the installation.

Step 3:
Compile the Maven chapter-installation project if you haven't done so already.

Step 4:
Run the HelloIgniteSpring application with the following command:

java -jar ./target/HelloIgniteSpring-runnable.jar

You should have the following output in your console:


If you are curious about the code, please refer to the chapter-installation.

Step 5:
Now, let's configure the JDBC driver for DBeaver. Go to Database -> Driver Manager -> New. In the Settings section, fill in the requested information as follows:


Add all the libraries shown in the above screenshot. Copy the file ~/ignite-book-code-samples/chapters/chapter-installation/src/main/resources/default-config.xml to a file named default-config-dbeaver.xml somewhere in your file system. Change the clientMode property value to true in the default-config-dbeaver.xml file. Add the file path to the URL template as shown in the above screenshot and click OK.
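The relevant change in default-config-dbeaver.xml is a single property on the IgniteConfiguration bean. A minimal fragment is shown below; bean ids and the surrounding layout may differ in your copy of the file:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Connect as a client node rather than a server node. -->
    <property name="clientMode" value="true"/>
    <!-- remaining configuration unchanged -->
</bean>
```

With clientMode set to true, DBeaver joins the cluster as a client node that holds no data but can route SQL queries to the server nodes.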

Step 6:
Create a new connection based on the Ignite driver. Go to Database -> New Connection, select the Ignite driver from the drop-down list, and click Next. You should see the following screen:


Click the Test Connection button for a quick test. If everything is configured properly, you should see the next screenshot with a success notification.

Click OK and go through the remaining steps to complete the connection.

Step 7:
Create a new SQL editor and type the following SQL query in DBeaver:

SELECT name FROM Person;

Step 8:
Run the script by pressing command+X, and you should see the following result.

The above query returns all the cached objects from the cache testCache. You can also execute the following query:

SELECT name FROM Person p WHERE p.age BETWEEN 30 AND 60;

It should return a result containing the following persons:

Shamim 
Scott

The Ignite SQL engine is fully ANSI-99 compliant and lets you run any SQL query, whether analytical or ad-hoc. You can also try configuring Oracle SQL Developer or IntelliJ IDEA as a SQL client to work with Apache Ignite.

If you like this article, you might also like the book.
