
Centralize logs with logstash

Nowadays logging is an essential part of any application. Logging useful pieces of information makes it much easier to find errors, fix bugs and more. Modern applications scale up to hundreds of servers in the cloud, and managing and monitoring the logs of such heterogeneous systems is very challenging for any system administrator, and even more challenging for developers who have to fix bugs. Last Friday evening we started our testing with 3rd party products and got stuck on a few bugs. As usual, we first looked at the logs and tried to find some hints to reproduce the bugs. Here we ran into a serious problem: our application is spread across a few application servers such as Oracle GlassFish and Apache Tomcat, and it was not pleasant to search all over the servers for a few pieces of information. At that point we understood that we had to use some tool to manage, monitor and search logs. From my experience we had a few options:
1) Flume, Hadoop hdfs and ElasticSearch.
2) Kafka, Storm and Solr.
3) Logstash and Graylog2.
We have implemented the first option with Hadoop in a few cases, but for the current project it seemed like a very big gun. The second option also needs some configuration and coding experience to build a log management tool from scratch. My aim was to use something new and elegant that we could configure with less effort and that would be easy to use. I had heard about logstash a few times and decided to give it a try. In the rest of the post I will describe how to install and configure logstash for centralizing logs, i.e. collecting, aggregating and searching them. The main features of logstash are as follows:
1) Collecting log through agents
2) Aggregating logs
3) Shipping the logs into ElasticSearch
4) Web interface for searching logs
5) Open source
6) Everything in one jar, nothing more.
7) Very well documented with examples

Take a look at the high-level architecture of logstash: agents ship log events to a broker (Redis), an indexer pulls them off the broker and stores them in ElasticSearch, and a web interface lets you search them.
For centralizing logs you need the following components:
1) ElasticSearch
2) Redis
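To make the broker role of Redis concrete, here is a small sketch of what the agent and the indexer conceptually do with the Redis list. This is not logstash code; it is a hypothetical in-memory stand-in (a plain deque instead of a real Redis connection) that mimics the RPUSH/BLPOP flow on the `crm` key used later in this post:

```python
import json
from collections import deque

# Hypothetical in-memory stand-in for the Redis list (the "crm" key)
# sitting between the logstash agent (shipper) and the indexer.
broker = deque()

def ship(line, source_path):
    """Agent side: wrap a raw log line in a JSON event and push it
    onto the tail of the list (what Redis does on RPUSH crm)."""
    event = {"message": line, "path": source_path, "type": "server"}
    broker.append(json.dumps(event))

def index():
    """Indexer side: pop the oldest event from the head of the list
    (what Redis does on BLPOP crm) and return it for ElasticSearch."""
    return json.loads(broker.popleft())

ship("SEVERE: NullPointerException in OrderService",
     "/glassfish/domains/domain1/logs/server.log")
doc = index()
print(doc["type"])  # prints: server
```

Because the broker is just a queue, agents and the indexer are decoupled: agents keep pushing even when the indexer is busy, and events are processed in arrival order.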
I went through the getting started page and everything ran like a charm. The only error I got was when I tried to install Redis.
$ make
clang: error: no such file or directory: '../deps/hiredis/libhiredis.a'
clang: error: no such file or directory: '../deps/lua/src/liblua.a'
make[1]: *** [redis-server] Error 1
make: *** [all] Error 2
A quick Google search turned up the solution: build the bundled dependencies first,
$ cd deps
$ make lua hiredis linenoise
and then finalize the installation:
$ cd $REDIS_CODE/src
$ make
In my case I wanted to collect logs from the GlassFish server.log, using the following basic configuration for the agent:
input {
  file {
    type => "server"

    # Wildcards work, here :)
    path => [ "$DOMAIN_HOME/logs/*.log" ]
  }
}

output {
  #stdout { codec => rubydebug }
  redis { host => "127.93.1.11" data_type => "list" key => "crm" }
}
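The agent above only ships events into Redis; a second logstash process, the indexer, pulls them off the list and stores them in ElasticSearch so the web interface can search them. A minimal indexer configuration might look like this (the Redis host and the `crm` key mirror the agent config above; the ElasticSearch host is an assumption for your own setup):

```
input {
  # Pull the events that the agents pushed onto the "crm" list
  redis { host => "127.93.1.11" data_type => "list" key => "crm" }
}

output {
  # Store events in ElasticSearch for the web interface to search
  elasticsearch { host => "127.0.0.1" }
}
```

Since everything ships in one jar, both sides can be started with something like `java -jar logstash.jar agent -f agent.conf` and `java -jar logstash.jar agent -f indexer.conf` respectively.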
That's all. Happy coding and blogging.
