Mule ESB Connector for StreetHawk

“Secret” methods of SAAS to on-premises integration


This document is complete, to the best of my knowledge, but if I forgot something, please tell me.

Table of Contents

Abstract

About author

The problem

Application integration challenge

Mule ESB

Scenario

The solution

Implementation approach

Mule connector concepts and techniques

Implementation with annotation technique

Implement by copying generated code

Implement context retrieval with Reflection

Flow implementation

Prepare event view

Prepare poll with watermark

Prepare db connector

Filter to stop empty calls

Iterate over the collection in the payload

Configure StreetHawk connector

Export of the delivery artifacts for headless Mule ESB deployment

Deployment on standalone Mule server


Abstract

The publication provides guidance on how to succeed in developing a custom adapter to connect Mule ESB to the StreetHawk SAAS offering.

At first it explains the problem of data integration in general. Then we explain the solution offered by the ESB paradigm. Next we describe the ESB Adapter pattern. The paper discusses approaches to implementing the Adapter pattern among different ESB vendors, both on premises and SAAS. Then it provides a conceptual overview of Mule ESB by MuleSoft. Next it explores the possibilities offered by Mule DevKit in terms of adapter development.

At the end we work through a case study in which we implement an ESB connector that helps connect an on-premises backend infrastructure to a SAAS solution. The resulting work fulfills application integration requirements from the point of view of a mobile application owner. Using the adapter developed here, the mobile app owner can efficiently connect to the SAAS solution in order to achieve better user retention and targeting.

Technologies, tips and suggestions given in this paper, like any system aimed at practical application, are in constant development. All the news, useful additional techniques and new strategies can be found on our site.

I would be very grateful for any suggestions, feedback, comments, and reviews of the paper that you can send to

About author

The author has 20+ years of experience in the IT industry, where he has held a wide variety of roles. After serving as an enablement architect at IBM La Gaude in the mid-noughties, his roles have covered consulting, development and testing in the business process management and data integration segment, specifically for WebSphere Process Server, IBM BPM, WebSphere Message Broker and IBM Integration Bus. For the last eight years he has been running the independent consultancy ABTIMO SARL in Paris, France. His portfolio includes multiple successful implementations for world-leading businesses in multiple functional domains, from banking and telecom through FMCG and food to car makers and steel producers, in different countries to the east and west of Greenwich, within and outside IBM. Recently he was involved in a project using Mule ESB as well.

The problem

Application integration challenge

Integrating multiple systems in a point-to-point manner can be very time consuming and expensive to maintain. One common approach to resolving this issue is to introduce an Enterprise Service Bus (ESB), which replaces the point-to-point approach with a single, centralized place to integrate systems, and does so in a service-oriented manner. That said, merely using a tool cannot ensure the success of integration projects; it requires experience and wisely applied best practices.

As soon as the decision is made to implement an ESB, one could think that, in order to avoid problems related to the maintenance of additional infrastructure, HA, upgrades etc., the easiest approach is an existing implementation in the cloud. As examples we can cite Zapier or OneSaaS.

Unfortunately this approach is not as straightforward and easy as it seems. Even if we do not mention the recurring OpEx cost, which is going to be huge for large message volumes, first of all we must beware of vendor lock-in, precisely the pitfall people are looking to avoid with an ESB architecture. A second important consideration is that banks, insurance companies and mobile app businesses are concerned with privacy and do not want to share their data with the cloud blindly.

While, for example, Mule ESB has a cloud deployment option, on-premises solutions still have much better credibility with top-end customers.

One of the objectives of this article is to look at a pattern that allows for the upgrading or replacement of legacy systems without excessive overhead in additional integration work. In order to achieve the best standardization and flexibility, ideally we should have a system where each backend exposes a clean SOAP service, with a formal description of the interface in WSDL and data structure modeling done with XSD schemas. A system built this way allows for smarter management with a UDDI repository, and many of the difficulties observed previously would be gone in such a setup.

Another objective is to highlight the two main mistakes waiting for inexperienced ESB implementors. The first is the penchant for reimplementing point-to-point scenarios using ESB tools. While this offers some advantage in being easier to implement, it creates future maintenance issues.

The second is to implement all integration inside huge, monolithic message flows. This brings a modification problem, because a bigger flow needs retesting even if only a small part of one interface to some backend changes.

The means to achieve these objectives that has proven its ability to scratch this itch is the so-called adapter pattern, where each backend is seen from the ESB as an almost ideal SOAP endpoint behind an additional component that closes the protocol mismatch gap as well as the data format difference.

All major ESB vendors offer adapters to most of the well-established backends such as SAP, Siebel, MS Dynamics, etc. Some vendors such as Oracle or IBM call them Adapters, while others such as Mule ESB or WSO2 call them Connectors.

The decision to choose an on-premises ESB solution is driven by an ensemble of parameters, such as license price, development speed and cost, availability of people, etc.

As an illustration of the points above, we will implement MuleSoft's Mule ESB and write a connector to the StreetHawk backend SAAS.

Mule ESB

Mule ESB is the product of MuleSoft. It is a lightweight Java-based enterprise service bus (ESB) and integration platform that allows developers to connect applications together quickly and easily, enabling them to exchange data. MuleSoft is a nice ESB vendor, even though their product is still a bit unfinished and incomplete compared to the long-established competitors' products from IBM and Oracle. One of the advantages of Mule ESB is the availability of easy-to-use tools that facilitate connector creation.

StreetHawk is a Mobile Engagement Automation SAAS offering. The platform allows one to achieve so-called predictive engagement automation. It is a tool that reduces the guesswork when looking at engagement with mobile apps. StreetHawk is a platform that offers the tools, data access, knowledge and automation to create personalized engagement, improve mobile users' trust and deliver additional value. It provides mobile telemetry for user insights with the possibility to take actions, allows app owners to target customers with right-time communication, and provides an automation engine for real-time engagement decisions and actions.

At this time StreetHawk exposes a standard RESTful interface. We are going to wrap the StreetHawk interface in a Mule ESB adapter in order to hide the complexity of StreetHawk from the average ESB developer and to establish a normalized way to interact with the backend from inside the ESB.


Scenario

In order to realize a real-life scenario for the ESB flow implementation, we consider the following case:

Company A holds its customers table in MySQL. This table is continuously updated from a mobile application as mobile app users fill in the registration form. The information about a user is added to or updated in the users table. As soon as the user's first and last name is updated, company A wants this user to be tagged in StreetHawk with his first and last name, in order to provide personalization for push messages.

Currently company A uses the following Python implementation to do this.

import datetime

import requests
import simplejson as json
import mysql.connector

HOST = ''  # the default API Host
APP_KEY = 'SHSample'
AUTH_TOKEN = ''  # <The StreetHawk API Auth Token>, found in Settings -> Auth Token

def tag_names_by_cuid(the_cuid, first_name, last_name):
    params = dict(app_key=APP_KEY, auth_token=AUTH_TOKEN, sh_cuid=the_cuid,
                  key="sh_first_name", string=first_name)
    r = requests.post(HOST + '/v1/tag', params)
    print r.json()
    params = dict(app_key=APP_KEY, auth_token=AUTH_TOKEN, sh_cuid=the_cuid,
                  key="sh_last_name", string=last_name)
    r = requests.post(HOST + '/v1/tag', params)

cnx = mysql.connector.connect(user='root', database='my-backend-database')
cursor = cnx.cursor()
last_date = datetime.date(2014, 1, 1)  # or some other date you want to sync since
query = "SELECT id, first_name, last_name, updated_on FROM users WHERE updated_on > '%s'" % last_date
cursor.execute(query)
row = cursor.fetchone()
while row is not None:
    tag_names_by_cuid(row[0], row[1], row[2])
    row = cursor.fetchone()
cursor.close()
cnx.close()



Rough and crude, difficult to reuse elsewhere, launched by an opaque Unix scheduler, not very fluid, and with no watermark functionality provided.

The solution

Implementation approach

In order to implement the solution we will start by designing the StreetHawk connector.

The idea is to provide a connector artifact that implements the connection to the server, with the possibility to configure credentials, send payload or variables as Java objects or maps, verify input parameters, call the REST server, retrieve the JSON payload and convert it to a Java object.

Then we will create a Mule message flow and all artifacts necessary to use the resulting connector for a real-world task.

Mule connector concepts and techniques

With the Anypoint Studio DevKit offering, the implementation of Mule ESB connectors has been greatly simplified.

Basically, the way to implement a connector depends on the protocol the backend API conforms to. The generic way to program a connector is using the Java SDK. For backends implementing SOAP services, the approach using the CXF framework is recommended. For RESTful interfaces, solutions using a REST client such as Jersey are available. For well-behaved RESTful services, the easiest way is to go with the so-called @RestCall annotations.

Implementation with annotation technique

Given that StreetHawk is a pretty decent SAAS solution exposing a well-behaved RESTful service, we are keen to implement the connector the easiest way.

To begin, just install the prerequisites (Java 7, Maven, Anypoint Studio and the Studio DevKit plugin) and generate a skeleton for the adapter:

File->New->Anypoint Connector Project

package org.mule.modules.shc;

import ...

/**
 * Street Hawk Anypoint Connector
 *
 * @author ABTIMO SARL
 */
@Connector(name="shc", schemaVersion="1.0", friendlyName="SHC")
public abstract class SHCConnector {

The next step is to study the StreetHawk API specification document and implement an abstract method with the necessary parameters, which will be transformed into a REST API call and return the transformed payload as a Java object or map.

Let's choose from the StreetHawk API the easiest yet useful function to implement, such as the function Reading Tags.

According to the StreetHawk API specification, we define an abstract method taking the parameters sh_cuid and installid. Given that the client must be authenticated, the auth_token parameter must also be sent with the API call.


/**
 * Custom processor
 *
 * {@sample.xml ../../../doc/shc-connector.xml.sample shc:readingTags}
 *
 * @param auth_token Street Hawk auth_token from your account.
 * @param installid Street Hawk installation id from your backend. The unique installation ID for the device.
 * @param sh_cuid Street Hawk customer id from your backend. The sh_cuid Installs are tagged with.
 * @return The list of tags for the given installation and customer id.
 * @throws IOException Comment for Exception
 */
@Processor
@ReconnectOn(exceptions = { Exception.class })
@RestCall(uri="https://{host}/{api_version}/tags?auth_token=123&installid=123&sh_cuid=123", method=HttpMethod.GET)
public abstract Object readingTags(@RestQueryParam("auth_token") String auth_token,
        @RestQueryParam("installid") String installid,
        @RestQueryParam("sh_cuid") String sh_cuid) throws IOException;

This is the abstract method that will be implemented during the project build by code generation, in the class:

package org.mule.modules.shc.adapters;

import java.lang.reflect.Method;

@Generated(value = "Mule DevKit Version 3.6.1", date = "2015-05-02T01:17:47+02:00", comments = "Build UNNAMED.2405.44720b7")
public class SHCConnectorRestClientAdapter
    extends SHCConnectorProcessAdapter
    implements MuleContextAware, Disposable, Initialisable
{
    public Object readingTags(String auth_token, String installid, String sh_cuid)
        throws IOException
    {
        HttpMethod method = null;
        method = new GetMethod();
        // ... rest of the generated method body omitted

The only action required to get this API working is adding the hint below in shc-connector.xml.sample:

<!-- BEGIN_INCLUDE(shc:readingTags) -->
<shc:readingTags config-ref="" auth_token="#[map-payload:auth_token]" installid="#[map-payload:installid]" sh_cuid="#[map-payload:sh_cuid]"/>
<!-- END_INCLUDE(shc:readingTags) -->

In addition, in order to be able to configure the authentication parameters common to all requests, it is suitable to create annotated properties for the host, version and authentication token, with all the corresponding accessors and mutators.


/**
 * The authentication token. An auth_token is used to access the API programmatically.
 * It is linked to a specific User and a specific App.
 * You can get your auth_token from the settings page (inside the web console).
 * The auth_token has to be submitted as a GET or POST parameter.
 */
@Configurable
private String auth_token;

/**
 * host part of API URL
 */
@Configurable
private String host;

/**
 * API version
 */
@Configurable
private String api_version;

Implement by copying generated code

The next API call that is useful to implement is SettingTags.

At first it seems attractive to just repeat exactly the same approach as with reading tags, but careful consideration shows that more work is required. First of all, installid and sh_cuid are both optional, but one of them must be present at call time; in addition, one of the POST parameter names in the SH API call must be chosen dynamically as a function of the parameter value.

Such a flexibility requirement may suggest choosing a bespoke technique, namely the Jersey client technique. Nevertheless, there is another solution that comes to mind after examining the generated code. We can simply copy the auto-generated concrete method into the abstract class SHCConnector, and then update its code according to the requirements above.
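A minimal sketch of the extra logic we add to the copied method is shown below. This is not the generated DevKit code: the helper names are invented here, and the mapping of a numeric-looking value to a "numeric" POST parameter (versus "string" for text) is an assumption about the StreetHawk tag API based on the requirement described above.

```java
// Hypothetical helpers illustrating the hand-written additions to the
// copied settingTag method; names and the numeric/string mapping are
// assumptions, not part of the generated adapter.
public class SettingTagSketch {

    // Picks the POST parameter name for the tag value: a purely numeric
    // value is assumed to be sent as "numeric", anything else as "string".
    public static String valueParamName(String tagValue) {
        return tagValue.matches("-?\\d+(\\.\\d+)?") ? "numeric" : "string";
    }

    // Enforces the rule that installid and sh_cuid are both optional,
    // but at least one of them must be present at call time.
    public static void checkIds(String installid, String shCuid) {
        if ((installid == null || installid.isEmpty())
                && (shCuid == null || shCuid.isEmpty())) {
            throw new IllegalArgumentException(
                "Either installid or sh_cuid must be supplied");
        }
    }
}
```

The copied method can then call these checks first and build the POST body with the dynamically chosen parameter name before issuing the request.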

Implement context retrieval with Reflection

The only difficulty along this way is finding a way to retrieve the Mule context in the abstract class. The easiest way to do it seems to be using the Reflection API.


/**
 * Initialization of the mule context using the Reflection API.
 * We do it in order to reuse annotations while not using
 * a separate JSON client for custom api calls
 * @throws Exception Comment for Exception
 */
public void initMuleCtxWithReflexion() throws Exception {
    for (Field field : this.getClass().getDeclaredFields()) {
        try {
            String name = field.toString();
            if ("private org.mule.api.MuleContext org.mule.modules.shc.adapters.SHCConnectorRestClientAdapter.muleContext"
                    .equals(name)) {
                field.setAccessible(true);
                muleContext1 = (MuleContext) field.get(this);
                httpMuleMessageFactory1 = new HttpMuleMessageFactory(muleContext1);
                return;
            }
        } catch (IllegalAccessException e) {
            // swallow
        }
    }
    throw new StreetHawkConnectorException("Unable to find field to init Mule context");
}


Flow implementation

The overall standard pattern to update a SAAS from an RDBMS, in terms of Mule or indeed any ESB, looks like this:

Basically, we must configure a Database connector inside a polling mechanism; once the updated data is gathered from the database, we take it from the payload and send it to the SAAS using the SHC connector developed in the previous chapter. After successful message processing, the response payload is converted to JSON format and written to disk using the File connector.
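As a skeleton sketch in Mule 3 flow XML (element placement is illustrative; the shc operation name and the file path are assumptions, not the exact generated configuration):

```xml
<flow name="db-to-streethawk">
    <!-- Poll scope fires the database query on a schedule;
         the watermark configuration is covered in a later section -->
    <poll doc:name="Poll">
        <db:select config-ref="MySQL_Configuration" doc:name="Database">
            <!-- select statement against the events view goes here -->
        </db:select>
    </poll>
    <!-- Stop processing when the query returned no rows -->
    <expression-filter expression="#[payload.size() &gt; 0]"/>
    <foreach doc:name="For Each">
        <!-- one StreetHawk call per database row -->
        <shc:settingTag config-ref="SHC__Configuration" doc:name="SHC"/>
        <!-- serialize the response and archive it on disk -->
        <json:object-to-json-transformer/>
        <file:outbound-endpoint path="/tmp/shc-out" doc:name="File"/>
    </foreach>
</flow>
```

The following sections configure each of these elements in turn.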

Based on the current Python implementation, the MySQL source table seems to look like this:

CREATE TABLE `users` (
  `id_user` varchar(50) NOT NULL,
  `first_name` varchar(50) NOT NULL,
  `last_name` varchar(50) NOT NULL,
  `updated_on` timestamp NULL DEFAULT NULL,
  PRIMARY KEY (`id_user`)
);

In order to be able to send updates, we must include some logic to update the time field, either by declaration:

ALTER TABLE `users` MODIFY `updated_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

or via trigger creation:

delimiter //
CREATE TRIGGER `users_set_updated_on` BEFORE UPDATE ON `users`
FOR EACH ROW
BEGIN
  set new.updated_on = now();
END//
delimiter ;

Now, each time a row is updated or inserted in the customer table, the updated_on field will receive the latest timestamp.

Prepare event view

Given that our freshly implemented StreetHawk connector is supposed to receive input data on a one-tag-at-a-time basis and then call the StreetHawk server accordingly, in order to facilitate message flow design we can produce the database output in the same shape as the SHC connector expects to receive it.

In concrete terms, we want to retrieve the events as rows of a table:

sh_cuid   tagname         tagvalue   updated_on
…         sh_first_name   anurag1    2015-04-27 02:06:17
…         sh_last_name    kondeya    2015-04-27 02:06:17

This goal is easy and fast to achieve by creating a view in the database:

CREATE VIEW `v_tags` AS
(select `users`.`id_user` AS `sh_cuid`, 'sh_first_name' AS `tagname`, `users`.`first_name` AS `tagvalue`, `users`.`updated_on` AS `updated_on` from `users`)
union
(select `users`.`id_user` AS `sh_cuid`, 'sh_last_name' AS `tagname`, `users`.`last_name` AS `tagvalue`, `users`.`updated_on` AS `updated_on` from `users`)
order by `sh_cuid`;

Prepare poll with watermark

The next step is the Poll scope and Database connector configuration, with watermark functionality to select only fresh data for processing.

We configure the Poll scope with a frequency of 10 seconds and a 5-second start delay, in order to mitigate transient-state errors during message flow startup.

Then we enable the watermark functionality, configuring a watermark flow variable called timestamp.

For the initial value we take an arbitrary date that must lie in the past relative to the oldest event table modification. We can take for example #['2014-01-01 00:00:00.0'].

As the selector function we choose MAX. This way the watermark value will be updated with the payload's updated_on value whenever updated_on is later in time than the current timestamp variable value.
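In Mule 3 XML, a Poll scope with these settings might look roughly like this (a sketch following the standard poll/watermark schema; the exact XML that Studio generates may differ):

```xml
<poll doc:name="Poll">
    <fixed-frequency-scheduler frequency="10" startDelay="5" timeUnit="SECONDS"/>
    <!-- the flow variable "timestamp" keeps the latest updated_on value seen -->
    <watermark variable="timestamp"
               default-expression="#['2014-01-01 00:00:00.0']"
               selector="MAX"
               selector-expression="#[payload.updated_on]"/>
    <!-- the Database connector call goes here, inside the Poll scope -->
</poll>
```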

Prepare db connector

In order to restrict the selection to the tuples updated after the watermark time, we configure the select statement as below:

select * from v_tags where updated_on > '#[flowVars.timestamp]';

Of course, the MySQL configuration must hold the necessary values in order to be able to establish the connection.
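A possible Database connector configuration is sketched below; the connection values are placeholders (the database name is taken from the Python script, host, port and credentials are assumptions to adjust for your environment):

```xml
<db:mysql-config name="MySQL_Configuration" host="localhost" port="3306"
                 user="root" password="" database="my-backend-database"
                 doc:name="MySQL Configuration"/>

<db:select config-ref="MySQL_Configuration" doc:name="Database">
    <db:dynamic-query><![CDATA[select * from v_tags where updated_on > '#[flowVars.timestamp]']]></db:dynamic-query>
</db:select>
```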

Filter to stop empty calls

The Poll scope is designed in such a way that it fires the Database connector repeatedly, without interruption, and passes on the result even if the select statement returns an empty recordset. In order to filter out the unwanted messages with an empty recordset, we must apply a filter:

<expression-filter expression="#[payload.size() &gt; 0]" doc:name="Expression"/>

Iterate over the collection in the payload

In order to call the StreetHawk API for each element of the database array, we must place the call inside a For Each scope.
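The scope itself is a one-liner in the flow XML; each iteration then sees a single row (a map of column values) as the payload:

```xml
<foreach collection="#[payload]" doc:name="For Each">
    <!-- the StreetHawk connector call configured in the next section goes here -->
</foreach>
```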

Configure StreetHawk connector

Now comes the actual SHC call. First of all we must provide the global SHC configuration. The auth_token is the only required information here. One gets the auth_token from one's account screen, by going to the Settings menu, then to Auth Token.

On the local level, the values to send to the API are retrieved from the payload.
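A plausible form of this call, assuming a settingTag operation whose attribute names mirror the v_tags view columns (the operation and attribute names here are assumptions):

```xml
<shc:settingTag config-ref="SHC__Configuration"
                sh_cuid="#[payload.sh_cuid]"
                key="#[payload.tagname]"
                value="#[payload.tagvalue]"
                doc:name="SHC"/>
```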

Export of the delivery artifacts for headless Mule ESB deployment

Right-click on the db-sh project, then Export -> Mule -> Anypoint Studio Project to Mule Deployable Archive.

Deployment on standalone Mule server

First of all you must download the standalone Mule server.

Next you have to install the standalone environment, run it, run the DDL in order to configure the database artifacts as discussed above, and drop the delivery artifacts into the /apps directory.

As soon as you update the table created in MySQL, the first and last name values will be tagged onto your user with the sh_first_name and sh_last_name tags.

