
Real time data processing with Cassandra, Part 1

This is the first part of a series on getting started with real-time data processing with Cassandra. In this part I am going to describe how to configure Hadoop, Hive, and Cassandra, along with some ad hoc queries using the new CqlStorageHandler. In the second part I will show how to use Shark and Spark for real-time, fast data processing with Cassandra. I was encouraged by a blog post from DataStax, which you can find here. All the credit goes to the author of the cassandra-handler library and to Alex Lui for developing the CQLCassandraStorage. Of course, you can use the DataStax Enterprise version for the first part, since it has built-in support for Hive and Hadoop. In this blog post I will use only the native Apache products. If you are interested in real-time data processing, please check this blog.
In the first part I will use the following products:
1) Hadoop 1.2.1 (Single node cluster)
2) Hive 0.9.0
3) Cassandra 1.2.6 (Single node cluster)
4) cassandra-handler 1.2.6 (depends on Hive version 0.9.1; does not work with other versions of Hive)
Let's first download and configure Hadoop. Please check my old post on configuring Hadoop; the configuration steps are the same. If you get the following error
upgrade to version -41 is required.
run the command hadoop-daemon.sh start namenode -upgrade and then restart your Hadoop server.
Now let's install and configure Hive.
1) Download Hive 0.9.0 and unzip somewhere in your local machine.
2) Set HIVE_HOME in your .bash_profile and add $HIVE_HOME/bin to your PATH environment variable.
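For example, assuming Hive was unzipped to /opt/hive-0.9.0 (a hypothetical path; use your own), the entries in .bash_profile would look like this:

```shell
# Assumed install location -- replace with wherever you unzipped Hive.
export HIVE_HOME=/opt/hive-0.9.0
export PATH="$PATH:$HIVE_HOME/bin"
```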
3) Create a data warehouse directory in HDFS
$HADOOP_HOME/bin/hadoop fs -mkdir       /user/hive/warehouse
4) Set group write permission on it
$HADOOP_HOME/bin/hadoop fs -chmod g+w   /user/hive/warehouse
5) Run Hive with the command $HIVE_HOME/bin/hive, or just hive (if you added $HIVE_HOME/bin to your PATH)
6) Create a Hive database
hive> CREATE DATABASE test;
7) Use the database
hive> USE test;
8) Create a local Hive table in the database test
hive> CREATE TABLE hpokes (foo INT, bar STRING);
9) Load some data into the table (replace $HIVE_HOME with the actual path; the Hive CLI does not expand environment variables inside the quoted INPATH)
hive> LOAD DATA LOCAL INPATH '$HIVE_HOME/examples/files/kv1.txt' OVERWRITE INTO TABLE hpokes;
10) Run the following query
hive> select * from hpokes;
The command should end with a long list of query results
29 val_30
242 val_243
285 val_286
35 val_36
227 val_228
395 val_396
244 val_245
Time taken: 0.334 seconds
If something goes wrong, check your installation; I used the following quick start guide.
11) Run an analytical query
hive> select count(*) from hpokes;
The above command should start a Hadoop MapReduce job. The progress of the job is shown in the console, and it should end with messages like the following
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-09-22 20:02:43,178 Stage-1 map = 0%,  reduce = 0%
2013-09-22 20:02:49,214 Stage-1 map = 100%,  reduce = 0%
2013-09-22 20:03:00,403 Stage-1 map = 100%,  reduce = 33%
2013-09-22 20:03:01,418 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201309221750_0009
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   HDFS Read: 11870 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1004
Time taken: 40.444 seconds
Now it's time to install and run Cassandra.
12) Download Cassandra version 1.2.6 and install it following the quick start guide, or check my previous post for a quick start.
13) Create the following keyspace
CREATE KEYSPACE test WITH replication = {
  'class': 'SimpleStrategy',
  'replication_factor': '1'
};
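If you prefer to create the column family by hand in cqlsh rather than from Hive, a minimal sketch matching the (foo int, bar string) schema used later in this post could look like this:

```sql
-- Optional manual alternative: a minimal CF matching the Hive schema.
USE test;
CREATE TABLE pokes (
    foo int PRIMARY KEY,
    bar text
);
```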
Of course, you can create the keyspace and CF from Hive, which we will see later.
Now we have to clone the cassandra-handler project from GitHub.
14)
git clone https://github.com/milliondreams/hive.git cassandra-hive
Or you can download the zip from GitHub and unzip it in any folder.
15) In my case I changed the Hadoop core version in pom.xml, because I am using Hadoop 1.2.1
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <!--<version>0.20.205.0</version>-->
    <version>1.2.1</version>
    <type>jar</type>
    <scope>provided</scope>
</dependency>
16) Compile and build the project
mvn clean install
17) Copy the following libraries from target/ and target/dependencies to the $HIVE_HOME/lib and $HADOOP_HOME/lib directories
cassandra-all-1.2.6.jar
apache-cassandra-1.2.6.jar
apache-cassandra-thrift-1.2.6.jar
hive-cassandra-1.2.6.jar
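The copy in step 17 can be scripted. The sketch below does a dry run, echoing the copy commands so you can verify the paths first; remove the leading echo to actually perform the copy (it assumes the jars sit under target/ -- adjust for target/dependencies as needed):

```shell
# Dry run of step 17: prints the copy commands for the four built jars.
# Remove the leading `echo` to actually copy the files.
JARS="cassandra-all-1.2.6.jar apache-cassandra-1.2.6.jar \
apache-cassandra-thrift-1.2.6.jar hive-cassandra-1.2.6.jar"
for jar in $JARS; do
  for dest in "$HIVE_HOME/lib" "$HADOOP_HOME/lib"; do
    echo cp "target/$jar" "$dest/"
  done
done
```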
18) Restart hive and Hadoop.

19) Now we have to create the Cassandra CF from Hive
hive> use test;
hive> CREATE EXTERNAL TABLE test.pokes(foo int, bar string)
    STORED BY 'org.apache.hadoop.hive.cassandra.cql.CqlStorageHandler'
    WITH SERDEPROPERTIES ("cql.primarykey" = "foo", "comment"="check", "read_repair_chance" = "0.2",
    "dclocal_read_repair_chance" = "0.14", "gc_grace_seconds" = "989898", "bloom_filter_fp_chance" = "0.2",
    "compaction" = "{'class' : 'LeveledCompactionStrategy'}", "replicate_on_write" = "false", "caching" = "all");
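For reference, the SERDEPROPERTIES above correspond roughly to the following CQL3 definition on the Cassandra side (an approximation of what the storage handler generates, not output copied from cqlsh):

```sql
-- Rough CQL3 equivalent of the table created by the CqlStorageHandler above.
CREATE TABLE test.pokes (
    foo int PRIMARY KEY,
    bar text
) WITH read_repair_chance = 0.2
  AND dclocal_read_repair_chance = 0.14
  AND gc_grace_seconds = 989898
  AND bloom_filter_fp_chance = 0.2
  AND compaction = {'class': 'LeveledCompactionStrategy'}
  AND replicate_on_write = false
  AND caching = 'all';
```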
20) Let's insert some data from the Hive table hpokes into the Cassandra table pokes
hive> insert into table pokes select * from hpokes;
It should start a Hadoop MapReduce job and insert the data from the hpokes table into the Cassandra pokes table.
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2013-09-22 18:01:19,671 Stage-0 map = 0%,  reduce = 0%
2013-09-22 18:01:29,811 Stage-0 map = 100%,  reduce = 0%
2013-09-22 18:01:34,866 Stage-0 map = 100%,  reduce = 100%
Ended Job = job_201309221750_0005
1004 Rows loaded to pokes
MapReduce Jobs Launched: 
Job 0: Map: 1   HDFS Read: 11870 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 40.162 seconds

In my case it inserted 1004 rows into the table.
21) Now you can run any analytical query against the table pokes, as follows
hive> select count(*) from pokes;
The above command also runs a Hadoop MapReduce job and should return messages like the following
2013-09-22 18:13:40,390 Stage-1 map = 0%,  reduce = 0%
2013-09-22 18:13:58,229 Stage-1 map = 100%,  reduce = 0%
2013-09-22 18:14:12,642 Stage-1 map = 100%,  reduce = 33%
2013-09-22 18:14:13,649 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201309221750_0007
MapReduce Jobs Launched: 
Job 0: Map: 2  Reduce: 1   HDFS Read: 830 HDFS Write: 4 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1004
Time taken: 70.168 seconds
22) Now insert a row into the Cassandra CF through cqlsh
cqlsh> insert into pokes(foo, bar) values(1000, 'test');
23) Run the following command from Hive to find the row
hive> select * from pokes where foo=1000;
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 0
2013-09-22 18:14:52,545 Stage-1 map = 0%,  reduce = 0%
2013-09-22 18:15:02,674 Stage-1 map = 100%,  reduce = 0%
2013-09-22 18:15:07,756 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201309221750_0008
MapReduce Jobs Launched: 
Job 0: Map: 2   HDFS Read: 830 HDFS Write: 10 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1000 test
Time taken: 30.891 seconds
We have reached the end of the post. Thank you, everybody, for reading the blog. Happy blogging! In the next part we will install Shark and Spark for real-time data processing.

