
Implementing Data Services with Oracle Data Service Integrator

A data service is a special kind of service that exposes an interface to data stored in sources such as relational databases, CSV files, and MS Excel files. Data services play a vital role in IT strategies like SOA: any organization has to expose its master data to its enterprise applications, and through data services a company can deliver high-quality, consistent data to the right place at the right time.
In practice, a data service exposes data, represented for example as Entity Data Model (EDM) objects, via web services accessed over HTTP. The data can be addressed using a REST-like URI; the Astoria service, when accessed with an HTTP GET request on such a URI, returns the data. The web service can be configured to return the data as plain XML, JSON, or RDF+XML. For more information see the following article about data services.
It is not hard to create and maintain a data service in the standard way; ORM frameworks such as Hibernate, iBatis, and TopLink, or even a low-level JDBC framework, are good enough to build data services. However, it can be hard to maintain and govern this type of enterprise application, because the life cycle of the development process is vital. For this reason, in this post I will give an overview of tools and frameworks that accelerate application development and deployment by simplifying the complex task of building and maintaining access to and from multiple external data sources.
1) XAware Open Source Data Integration: It provides real-time, bi-directional data integration with a service-oriented flavor. XAware makes other tools and frameworks more productive by hiding data access complexity behind "XML views". XML views span any number of data sources, and can read data, write data, or transfer data between sets of sources, all within a distributed transaction.
2) WSO2 Data Services: With WSO2 Data Services, data can be exposed and accessed in a secure (using WS-Security) and reliable (using WS-ReliableMessaging) manner; data is also made available for mashing-up with other Web services.
3) Native Oracle XML DB Web Services: Oracle 11g Database makes the conversion of existing PL/SQL code into web services easier than ever by providing Native XML DB web services. With some simple configuration, this functionality exposes PL/SQL code as web services.
4) Oracle Data Service Integrator: It brings a service-oriented architecture (SOA) approach to data access. Data Service Integrator enables organizations to consolidate, integrate, and transform disparate data sources scattered throughout the enterprise as needed, making enterprise data available as an easy-to-access, reusable commodity: a data service.
Next we will implement a simple data service project to see how easy it is to develop with Oracle Data Service Integrator.
For study purposes I decided to make the sample project as simple as possible. First we will create a table in an Oracle database and expose it as a web service through Oracle Data Service Integrator. At the end you will find links to more resources.
The following platforms will be used:
1) Oracle 10g database
2) Oracle Data Service Integrator 10gR3
Download and install the products on your local machine. I am going to use the user scott for the sample.
1) Create a table in the scott schema as follows:
CREATE TABLE document (
  doc_id VARCHAR2(200),
  doc_type_i VARCHAR2(200),
  vid_doc VARCHAR2(200),
  "COMMENT" VARCHAR2(200),
  num_doc VARCHAR2(200),
  dat_doc DATE
);
Insert some sample data into it.
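A minimal sketch of a few INSERT statements, assuming the table created above; the values are invented for illustration, and at least one row uses vid_doc = '14' because that value is used in the where-clause example later in this post:

-- invented sample data, any values will do
INSERT INTO document (doc_id, doc_type_i, vid_doc, "COMMENT", num_doc, dat_doc)
VALUES ('1', 'PASSPORT', '14', 'sample row one', '101', SYSDATE);
INSERT INTO document (doc_id, doc_type_i, vid_doc, "COMMENT", num_doc, dat_doc)
VALUES ('2', 'LICENSE', '15', 'sample row two', '102', SYSDATE);
COMMIT;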
2) After installing Data Service Integrator, create a domain and name it base_domain.
3) Open the Eclipse workshop of Oracle Data Service Integrator and create a new server based on the domain base_domain.
4) Open the admin console; the link should be as follows:
http://localhost:7001/console/console.portal?_nfpb=true&_pageLabel=HomePage1
5) Add a non-XA data source to it. Provide the necessary information as follows (a sketch of the remaining JDBC settings follows this list):
database name: scott
JNDI name: nsiDataSource
host name: mercury
database user name: scott
password: tiger
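Depending on the WebLogic version, the data source wizard will also ask for a driver class and a JDBC URL. A sketch, assuming the Oracle thin driver, the default port 1521, and a SID of orcl (the port and SID are assumptions, adjust them to your installation; only the host mercury and the scott/tiger credentials come from the settings above):
driver class: oracle.jdbc.OracleDriver
JDBC URL: jdbc:oracle:thin:@mercury:1521:orcl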
Now we are ready to create the data service in the workshop.
6) Open the workshop and change the perspective to Oracle Data Service Integrator.
7) Create a dataspace project.
8) Create one physical data service as follows:
Data source type: Relational
Data source: nsiDataSource
Database type: Oracle -> 9 -> Oracle 9.x and above
Relational database object: Tables and views
and click Next.
9) On the SQL select source view, select the schema SCOTT and expand it, then select the table named DOCUMENT.

10) On the operation page, select the Public check box and click the Next button.
11) Set the data service name to getNsiDocument and click Finish.
This will create the file getNsiDocument.ds and open an overview window as follows:

12) On the above page, go to the Test tab, select the operation getNsiDocument, and click Run. This should show you some results.
13) Now we will create a logical data service over the physical service. Click New -> New logical service.
Data service name: getNsiDocument_l
Logical data service type: Entity data service
and click Finish. This will create a new logical service named getNsiDocument_l, and the project will now show errors; don't worry, we will fix them very soon.
14) Right-click on the Overview tab and select Add Operation. Mark the operation as public, set its kind to read, and name it getNsiDocumentL.
15) Click on the Query Map tab. Expand the node getNsiDocument.ds and drag getNsiDocument() onto the query map.
16) Select the getNsiDocument* element, press Ctrl, and drag the element onto the Return box.

17) Right-click on the Return box and select Save and associate XML type.
18) Just change the namespace to ld:Physical/getNsiDocumentL and click OK.
19) For now the project should be valid and we can test the logical data service.
Click the Test tab and click the Run button.
20) Now we will add a where clause to our logical service. Click Query Map and select the getNsiDocument() box; the condition editor will be highlighted. Click Add where clause, select the element vid_doc, and add ="14" (in my case). See the following illustration.

21) Save all the work and test again; now the result should be filtered by the where clause.
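To make the effect of the filter concrete, the filtered result at this point corresponds to the following query against the underlying table (14 is simply the sample value used above; substitute whatever value you inserted):

SELECT * FROM document WHERE vid_doc = '14';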
At this point we have done most of the work. Now we will create a web service which will wrap the logical service.
22) Create a new web service map, name it nsiDocumentWs, and drag the logical service named getNsiDocumentL onto the web service map.
Now we have finished all our work: we have created a physical data service, a logical data service over it, and finally a web service map for the logical service.
Now we can test our data service. To do that, copy the following URL and open it in any web browser:
http://localhost.fors.ru:7001/wls_utc/begin.do?
Right-click on nsiDocumentWs.ws and click Copy WSDL URL. Paste the URL into the WebLogic test client and click the getNsiDocumentL button. You should see some results on the page as follows:

In conclusion, we can see how easy it is to create a data service in the Oracle Data Service Integrator workshop in a declarative way. For more information see the following pages:
1) Tutorial.
2) Generate web service from data services.
In the next post I will demonstrate a simple example of using the DSP protocol with OSB 10gR3.
