
Oracle Service Bus logging with a custom JDBCAppender

Over the last few weeks we have been upgrading the logging system of our MDM platform, designing a central place for all logging data. With a central log store we can manage our systems much more effectively and debug any fatal error. With OSB it is not straightforward to pull logging data from the console into an external store, because several subsystems use different appenders to write and filter logs; see the Oracle WebLogic logging services documentation for the details. In a previous post I demonstrated how to use the log4j JDBC appender in log4j.properties to redirect logs to a database table.
This post uses the same DDL to create the log table:
1)
CREATE TABLE LOGS 
(USER_ID   VARCHAR2(20), 
 DOMAIN    VARCHAR2(50), 
 DATED     DATE NOT NULL, 
 LOGGER    VARCHAR2(500) NOT NULL, 
 LOG_LEVEL VARCHAR2(50) NOT NULL,  -- LEVEL itself is a reserved word in Oracle
 MESSAGE   VARCHAR2(4000) NOT NULL 
);
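One practical caveat with this DDL: MESSAGE is VARCHAR2(4000), and Oracle rejects longer values with ORA-12899 (and with byte semantics multi-byte characters shrink the effective limit further). The appender below does not guard against this; a minimal, hypothetical truncation helper a defensive appender might use looks like this:

```java
public class TruncateDemo {
    // MESSAGE is VARCHAR2(4000); Oracle raises ORA-12899 for longer values,
    // so a defensive appender could truncate the message before binding it.
    // This helper is an illustration, not part of the original appender.
    static String truncate(String msg, int max) {
        if (msg == null || msg.length() <= max) {
            return msg;
        }
        return msg.substring(0, max);
    }

    public static void main(String[] args) {
        String big = new String(new char[5000]).replace('\0', 'x');
        System.out.println(truncate(big, 4000).length()); // prints 4000
    }
}
```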

2) Create a simple Maven project with a few dependencies for the custom JDBCAppender:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>ru.xyz.wl.logging</groupId>
<artifactId>jdbc-logging</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>

<name>jdbc-logging</name>
<url>http://maven.apache.org</url>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>weblogic</groupId>
<artifactId>wlfullclient</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>weblogic</groupId>
<artifactId>wllog4j</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.15</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.5</source>
<target>1.5</target>
<encoding>UTF-8</encoding>
</configuration>               
</plugin>
</plugins>
</build>
</project>

This assumes that wlfullclient.jar and wllog4j.jar are already installed in your local Maven repository (for example, via mvn install:install-file).
3) Now it's time for coding. Create two Java classes as follows:
package ru.xyz.wl.logging;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.*;

import weblogic.logging.log4j.WLLog4jLogEvent;

/**
 * Custom log4j appender that inserts WebLogic server log events into a
 * database table through the jdbc/logDataSource DataSource.
 */
public class JDBCAppender extends AppenderSkeleton {

    private Connection conn;
    private PreparedStatement stm;
    private String subSysName;

    public JDBCAppender(String subSysName) {
        this.subSysName = subSysName;
        try {
            // Look up the DataSource configured on the WebLogic server
            InitialContext ctx = new InitialContext();
            DataSource dataSource = (DataSource) ctx.lookup("jdbc/logDataSource");
            conn = dataSource.getConnection();
            conn.setAutoCommit(true);
            // Column order matches the LOGS table created above
            stm = conn.prepareStatement("INSERT INTO LOGS VALUES(?,?,?,?,?,?)");
            stm.setEscapeProcessing(true);
        } catch (NamingException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    protected void append(LoggingEvent event) {
        WLLog4jLogEvent logEvent = (WLLog4jLogEvent) event;
        try {
            stm.setString(1, logEvent.getUserId());
            stm.setString(2, subSysName);
            // getTimestamp() returns epoch milliseconds
            stm.setTimestamp(3, new Timestamp(logEvent.getTimestamp()));
            stm.setString(4, logEvent.getLoggerName());
            stm.setString(5, logEvent.getLevel().toString());
            stm.setString(6, logEvent.getLogMessage());
            stm.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public void close() {
        try {
            stm.close();
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    public boolean requiresLayout() {
        return false;
    }
}
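The append() method binds the event's timestamp by wrapping the epoch-millisecond value returned by getTimestamp() in a java.sql.Timestamp. A tiny stand-alone sketch of that conversion (the variable names here are illustrative):

```java
import java.sql.Timestamp;

public class TimestampDemo {
    public static void main(String[] args) {
        // LoggingEvent.getTimestamp() returns epoch milliseconds;
        // java.sql.Timestamp wraps that value for the DATED column.
        long epochMillis = 1234567890000L;
        Timestamp ts = new Timestamp(epochMillis);
        System.out.println(ts.getTime()); // prints 1234567890000
    }
}
```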

Create the AppenderStartup class to register the appender when the server starts:
package ru.xyz.wl.logging;

import org.apache.log4j.Logger;
import weblogic.logging.log4j.Log4jLoggingHelper;
import weblogic.logging.log4j.WLLog4jLevel;
import weblogic.logging.LoggerNotAvailableException;
import weblogic.logging.NonCatalogLogger;

/**
 * Startup class that registers the JDBCAppender on the WebLogic server logger.
 * User: sahmed
 */
public class AppenderStartup {

    public static void main(String... args) {
        System.out.println("Starting up log4j JDBC appender ..");
        if (args.length < 2) {
            System.out.println("JDBC appender not configured ..");
            System.out.println("AppenderStartup requires 2 arguments, e.g. ERCI WARNING, where ERCI is the subsystem name and WARNING the log level");
            System.out.println("Add the arguments and restart the Admin server.");
            System.err.println("JDBC appender not configured ..");
            return;
        }
        try {
            Logger serverLogger = Log4jLoggingHelper.getLog4jServerLogger();
            JDBCAppender jdbcAppender = new JDBCAppender(args[0]);
            serverLogger.addAppender(jdbcAppender);
            // Unrecognized level names fall back to INFO
            jdbcAppender.setThreshold(WLLog4jLevel.toLevel(args[1].toUpperCase(), WLLog4jLevel.INFO));

            // Optionally attach to the domain logger as well:
            //Logger domainLogger = Log4jLoggingHelper.getLog4jDomainLogger();
            //domainLogger.addAppender(jdbcAppender);

            // Quick smoke test:
            //NonCatalogLogger nc = new NonCatalogLogger("MyAppenderTest");
            //nc.info("Test INFO message");
            //nc.warning("Test WARNING message");
        } catch (LoggerNotAvailableException e) {
            e.printStackTrace();
        }
    }
}
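The threshold handling above relies on WLLog4jLevel.toLevel(name, INFO) resolving an unrecognized level name to the INFO default. A plain-JDK sketch of that defaulting behavior (the set of level names below is illustrative, not the exact WLLog4jLevel list):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class LevelArgDemo {
    // Mirrors the toLevel(name, INFO) fallback used by AppenderStartup:
    // an unknown level name resolves to the INFO default.
    static final Set<String> KNOWN = new HashSet<String>(
            Arrays.asList("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"));

    static String toLevelOrInfo(String name) {
        String upper = name.toUpperCase();
        return KNOWN.contains(upper) ? upper : "INFO";
    }

    public static void main(String[] args) {
        System.out.println(toLevelOrInfo("warning")); // prints WARNING
        System.out.println(toLevelOrInfo("bogus"));   // prints INFO
    }
}
```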
This class requires two startup arguments: the subsystem name and the log level (unrecognized levels default to INFO). Now we are ready to deploy the library and redirect the logs to the database table.
4) First, create a DataSource on the WebLogic server with the JNDI name jdbc/logDataSource and configure it. Then put the following libraries into the WebLogic domain lib directory:
  • jdbc-logging-1.0-SNAPSHOT.jar
  • log4j-1.2.15.jar
  • ojdbc-14.jar (optional)
  • wllog4j.jar

Next, log on to the WebLogic web console and register a startup class for AppenderStartup:
Class Name : ru.xyz.wl.logging.AppenderStartup
Arguments : erci WARNING, where erci is the subsystem name and WARNING the log level
5) Enable "Redirect stdout logging enabled" and "Redirect stderr logging enabled" under
Environment -> Servers -> Admin Server -> Logging -> Advanced,
and select LOG4J as the Logging implementation.
6) Restart the Admin server.
Now all log messages from the server logger are redirected not only to the console window but also to the database table.
