
Calling a PL/SQL package from iBATIS 3

For the last few weeks I have been working on a project where we decided to use iBATIS to generate complex reports. iBATIS is a small, smart ORM tool that is well suited to executing complex queries, but version 3 introduced sweeping changes, and migrating from version 2 to version 3 is not trivial.
In this post I describe how to call a PL/SQL package function from within iBATIS 3.
First we create two small tables and a PL/SQL package for the demonstration:
-- Create tables
create table ADDRESSES
(
  ADR_ID      INTEGER not null,
  ADR_CITY    VARCHAR2(15),
  ADR_COUNTRY VARCHAR2(15) not null
);
alter table ADDRESSES
  add primary key (ADR_ID);
create table PERSONS
(
  PRS_ID         INTEGER not null,
  PRS_FATHER_ID  INTEGER,
  PRS_MOTHER_ID  INTEGER,
  PRS_ADR_ID     INTEGER,
  PRS_FIRST_NAME VARCHAR2(15),
  PRS_SURNAME    VARCHAR2(15)
);
-- PRS_ID must be a primary key so the self-referencing foreign keys below can point at it
alter table PERSONS
  add primary key (PRS_ID);
alter table PERSONS
  add constraint PRS_ADR_FK foreign key (PRS_ADR_ID)
  references ADDRESSES (ADR_ID);
alter table PERSONS
  add constraint PRS_PRS_FATHER_FK foreign key (PRS_FATHER_ID)
  references PERSONS (PRS_ID);
alter table PERSONS
  add constraint PRS_PRS_MOTHER_FK foreign key (PRS_MOTHER_ID)
  references PERSONS (PRS_ID);

-- Sequence used by IbatisTest.addPerson to generate primary keys
create sequence COMMON_SEQ;

create or replace package IbatisTest is

  function getPersonsById(p_id integer) return varchar2;
  function addPerson(p_name varchar2, p_fname varchar2, p_add integer) return integer;

end IbatisTest;
/
create or replace package body IbatisTest is

  -- returns the first name of the person with the given id
  function getPersonsById(p_id integer) return varchar2 is
    l_name varchar2(200);
  begin
    select p.prs_first_name
      into l_name
      from persons p
     where p.prs_id = p_id;

    return l_name;
  end;

  -- inserts a person and returns the generated primary key
  function addPerson(p_name varchar2, p_fname varchar2, p_add integer) return integer is
    l_id integer;
  begin
    select common_seq.nextval
      into l_id
      from dual;

    insert into persons(prs_id, prs_first_name, prs_surname, prs_adr_id)
    values (l_id, p_fname, p_name, p_add);
    commit;

    return l_id;
  end;

begin
  null;
end IbatisTest;
/

Next, we develop our *Mapper.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
  PUBLIC "-//ibatis.apache.org//DTD Mapper 3.0//EN"
  "http://ibatis.apache.org/dtd/ibatis-3-mapper.dtd">
<mapper namespace="com.blue.ibatis.test.dao.FooMapper">

  <parameterMap id="parameters1" type="map">
    <parameter property="name" jdbcType="VARCHAR" javaType="java.lang.String" mode="OUT"/>
    <parameter property="id" jdbcType="NUMERIC" javaType="java.lang.Long" mode="IN"/>
  </parameterMap>

  <parameterMap id="addParameters" type="map">
    <parameter property="p_id" jdbcType="NUMERIC" javaType="java.lang.Long" mode="OUT"/>
    <parameter property="p_name" jdbcType="VARCHAR" javaType="java.lang.String" mode="IN"/>
    <parameter property="p_fname" jdbcType="VARCHAR" javaType="java.lang.String" mode="IN"/>
    <parameter property="p_add" jdbcType="NUMERIC" javaType="java.lang.Long" mode="IN"/>
  </parameterMap>

  <select statementType="CALLABLE" id="getPersonsById" parameterMap="parameters1" resultType="String">
    { ? = call IbatisTest.getPersonsById( ? ) }
  </select>

  <select statementType="CALLABLE" id="addPerson" parameterMap="addParameters" resultType="Integer">
    { ? = call IbatisTest.addPerson( ?,?,? ) }
  </select>

</mapper>
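
The `{ ? = call ... }` notation in the statements above is the standard JDBC escape syntax for invoking a function that returns a value. Under the hood, iBATIS prepares a `CallableStatement` for these statements. As a rough plain-JDBC sketch of what happens for `getPersonsById` (assuming an already-open `Connection conn`; not runnable without an Oracle database):

```java
// Plain-JDBC equivalent of the getPersonsById mapping above.
// Parameter 1 is the function's return value, parameter 2 is p_id.
CallableStatement cs = conn.prepareCall("{ ? = call IbatisTest.getPersonsById( ? ) }");
cs.registerOutParameter(1, Types.VARCHAR); // OUT: the returned first name
cs.setLong(2, 1L);                         // IN: the person id to look up
cs.execute();
String name = cs.getString(1);
cs.close();
```

The mapper XML simply declares the same IN/OUT roles through the `mode` attribute of each `<parameter>`, so iBATIS can do this registration for you.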

Finally, a few fragments of Java code to call the PL/SQL package functions:
// call addPerson through the mapper; the map carries both IN and OUT parameters
Map<String, Object> pMap = new HashMap<String, Object>();
pMap.put("p_name", "Xyz");
pMap.put("p_fname", "qwe");
pMap.put("p_add", Long.valueOf(11L));

session.selectOne("com.blue.ibatis.test.dao.FooMapper.addPerson", pMap);
// after execution the OUT parameter p_id holds the generated primary key
Long newId = (Long) pMap.get("p_id");

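The first function is called the same way: the map's `id` entry feeds the IN parameter and the OUT parameter `name` is written back into the map after execution. A minimal sketch, assuming `session` is the same open `SqlSession` used in the fragment above (not runnable without the database and mapper configured):

```java
// Sketch: calling IbatisTest.getPersonsById through the mapper.
Map<String, Object> qMap = new HashMap<String, Object>();
qMap.put("id", Long.valueOf(1L)); // IN: the person id to look up

session.selectOne("com.blue.ibatis.test.dao.FooMapper.getPersonsById", qMap);

// the OUT parameter "name" is written back into the parameter map
String firstName = (String) qMap.get("name");
```

Note that with `mode="OUT"` parameters the interesting result arrives in the parameter map itself, not in the value returned by `selectOne`.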