How to effectively build a hybrid SaaS API management strategy

– By Andy Thurai (@AndyThurai) and Blake Dournaee (@Dournaee). This article was originally published on Gigaom

Summary: Enterprises seeking agility are turning to the cloud while those concerned about security are holding tight to their legacy, on-premises hardware. But what if there’s a middle ground?

If you’re trying to combine a legacy and a cloud deployment strategy without having to do everything twice, a hybrid strategy might offer the best of both worlds. We discussed that in our first post, API Management – Anyway you want it!

In that post, we discussed the different API deployment models as well as the need to understand the components of API management, your target audience, and your overall corporate IT strategy. The article drew tremendous readership and positive comments. (Thanks for that!) But there seemed to be a little confusion about one particular deployment model we discussed – the Hybrid (SaaS) model. We heard from a number of people asking for more clarity on this model. So here it is.

Meet Hybrid SaaS

A good definition of Hybrid SaaS would be: “Deploy the software as a SaaS service and/or as an on-premises solution, make those instances co-exist and communicate securely with each other, and let each be a seamless extension of the other.”
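To make that definition concrete, here is a minimal Python sketch of the kind of co-existence we mean: an on-premises gateway handles API traffic locally while queuing usage metadata for a SaaS-hosted portal. All names and fields here are our own illustration, not any vendor’s actual API.

```python
import json
import time

class OnPremGateway:
    """Simulated on-premises gateway: API traffic stays local,
    but usage metadata is batched for a SaaS-hosted portal."""

    def __init__(self, gateway_id):
        self.gateway_id = gateway_id
        self.pending_metadata = []

    def handle_call(self, api_name, status_code):
        # The call itself is processed locally; only metadata
        # about the call is queued for the SaaS side.
        self.pending_metadata.append({
            "api": api_name,
            "status": status_code,
            "ts": time.time(),
        })

    def build_sync_payload(self):
        # The body the gateway would periodically POST to the
        # SaaS portal's (hypothetical) usage endpoint.
        payload = {
            "gateway": self.gateway_id,
            "events": self.pending_metadata,
        }
        self.pending_metadata = []
        return json.dumps(payload)

gw = OnPremGateway("dc-east-1")
gw.handle_call("/orders", 200)
gw.handle_call("/orders", 404)
print(gw.build_sync_payload())
```

The point of the split is that sensitive traffic never leaves the data center; only the metadata needed for the portal, analytics, and developer experience does.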

Read more of this post

ATOS API: A zero cash payment processing environment without boundaries

When ATOS, a big corporate conglomerate (EUR 8.8 billion in revenue and 77,100 employees in 52 countries), decided that they wanted to become the dominant Digital Service Provider (DSP) for payments, they had a clear mandate on what they wanted to do: build a payment enterprise without boundaries. [Worldline is an ATOS subsidiary set up to handle the DSP program exclusively.] One of the magic bullets out of that mandate was:

The growing trust of consumers to make payments for books, games and magazines over mobiles and tablets evolving into a total acceptance of cashless payments in traditional stores and retail outlets bringing the Zero Cash Society ever closer.

This required them to rethink the way they processed payments. They are one of the largest payment processors in the world, but they were primarily focused on big enterprises and name-brand shops using their services. Onboarding every customer took a long time, and the integration costs were high. After watching smaller companies such as Dwolla, Square, and others trying to revolutionize the space, they decided it was time for the giant to wake up.

The first decision was to embrace the smaller vendors. To do that, they couldn’t remain a high-touch onboarding environment that was time consuming, slow to integrate, and very costly per customer. They wanted to build a platform that is low touch, completely API-driven, fully self-service, and continuously integrating, yet still provides secure payment processing. In addition, they also faced the move from swipe-based retail payment systems to ePayment and mobile payments. Essentially, they wanted to build a payment platform that catered not only to today’s needs but was flexible enough to expand and scale for future needs and demands. They wanted to offer this as a service to their customers.
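As an illustration of what “fully self-service, completely API-driven” onboarding could look like, here is a hedged sketch of a merchant onboarding request. The endpoint shape and every field name are hypothetical, not ATOS’s or Worldline’s actual API.

```python
import json

def build_onboarding_request(merchant_name, country, payout_iban):
    """Illustrative request body for a self-service merchant
    onboarding API (all field names are hypothetical)."""
    return json.dumps({
        "merchant": {"name": merchant_name, "country": country},
        "payout": {"iban": payout_iban},
        # The platform goal described above: card swipe,
        # mobile, and ePayment behind one onboarding call.
        "capabilities": ["card", "mobile", "epayment"],
    })

req = build_onboarding_request(
    "Corner Books", "FR", "FR7630006000011234567890189")
print(req)
```

The contrast with high-touch onboarding is that a small merchant could submit this one request and start processing, with no integration project at all.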

Read more of this post

API Management – Anyway you want it!

– By Andy Thurai (Twitter:@AndyThurai) and Blake Dournaee (@Dournaee). This article originally appeared on Gigaom.

Enterprises are building an API First strategy to keep up with their customers’ needs and to provide resources and services that go beyond the confines of the enterprise. With this shift to using APIs as an extension of enterprise IT, the key challenge remains choosing the right deployment model.

Even with bullet-proof technology from a leading provider, your results could be disastrous if you start off with the wrong deployment model. Consider developer scale, innovation, costs, the complexity of API platform management, and so on. For example, forcing internal developers to hop out to the cloud to get API metadata when your internal API program is just starting is an exercise in inefficiency and inconsistency.

Components of APIs

But before we get to deployment models, you need to understand the components of API management, your target audience and your overall corporate IT strategy. These certainly will influence your decisions.

Not all enterprises embark on an API program for the same reasons – enterprise mobility programs, rationalizing existing systems as APIs, or finding new revenue models, to name a few. All of these factors influence your decisions.

API management has two major components: the API traffic and the API metadata. The API traffic is the actual data flow, and the metadata contains the information needed to certify, protect, and understand that data flow. The metadata describes the details about the collection of APIs. It consists of information such as interface details, constructs, security, documentation, code samples, error behavior, design patterns, compliance requirements, and the contract (usage limits, terms of service). This is the rough equivalent of the registry and repository from the days of service-oriented architecture, but it contains a lot more and differs in a key way: it’s usable and human-readable. Some vendors call this the API portal or API catalog.
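As a concrete (and purely illustrative) sketch, a single catalog entry in such a portal might carry fields like these. The schema below is our own invention for this post, not a standard:

```python
# One entry in a hypothetical API catalog / portal,
# covering the metadata categories listed above.
catalog_entry = {
    "api": "payments/v2",
    "interface": {
        "base_url": "https://api.example.com/payments/v2",
        "formats": ["json"],
    },
    "security": {"scheme": "oauth2", "scopes": ["payments:read"]},
    "docs": "https://developer.example.com/payments",
    "error_behavior": {"rate_limited": 429, "auth_failure": 401},
    # The "contract": usage limits and terms of service.
    "contract": {
        "rate_limit_per_minute": 600,
        "terms_of_service": "https://example.com/tos",
    },
}

print(catalog_entry["api"], catalog_entry["contract"])
```

Note that nothing here is the traffic itself; it is all descriptive metadata a developer would read before ever sending a request, which is exactly why it is human-readable.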

Next you have developer segmentation, which falls into three categories – internal, partner, and public. The last category describes a zero-trust model where anyone could potentially be a developer, whereas the other two categories have varying degrees of trust. In general, internal developers are more trusted than partners or public, but this is not a hard and fast rule.

Armed with this knowledge, let’s explore popular API Management deployment models, in no particular order.

Read more of this post

How APIs Fuel Innovation

– By Andy Thurai (Twitter: @AndyThurai)

This article originally appeared on ProgrammableWeb.

There has been so much talk about APIs: how they add revenue channels, create brand-new partnerships, let business partners integrate with ease, and help promote your brand. But an important and often overlooked aspect, which happens to be a byproduct of this paradigm shift, is the faster innovation channel they provide. Yes, Mobile First and the API economy are enabled by APIs.

Read more of this post

Ubisoft API (powered by Intel) – The game plays you now!

By Andy Thurai (Twitter: @AndyThurai)

[Original version of this blog appeared on Intel blogs here]

Remember the old days, when we used to play “graphical” games such as Tetris and were amazed by them? Fast forward twenty years, and Ubisoft is enriching the user experience in amazing ways. Gone are the days when games were handed to you statically, so the results were predictable if you played them a certain way. Now, real-time games (such as Assassin’s Creed®) adapt themselves to every player to provide a unique, tailored gaming experience based on each individual player’s skill and play style. This is the kind of experience Ubisoft wants to deliver to their vast customer base, which posed an interesting challenge.

[The most graphical modern game in the late 80s – Tetris]

As any teen can vouch, gaming is moving from a console-based model to a device-based model (console, PC, mobile, and other devices). Games are no longer controlled only by your keystrokes or game controllers, but by player movements sensed by cameras, body armor, gadgets, and other sensors.

This change posed an interesting challenge for our recent customer Ubisoft. They needed to convert their existing legacy services into a cross-platform enabler to support all of the above, and they also needed to build a new gaming platform for the future, one that would let them provide a richer, more connected, and more engaging user experience on a ubiquitous platform.

Read more of this post

QCON NY 2013

I had a speaking opportunity at QCON in the Big Apple last week.

As usual, Big Data and mobility were the dominant topics at this conference. Surprisingly, there was a strong HTML5 presence as well. At least ten presentations (including mine) were based on HTML5 or other modern language themes, which suggests the momentum is shifting fast from native apps to HTML5. It is not just plain vanilla JavaScript anymore.

One thing I can vouch for is that the development crowd seems to be getting younger and sharper by the day.

Read more of this post

Big Data, IoT, API … Newer technologies protected by older security

Nowadays, every single CIO, CTO, or business executive I speak to is captivated by three new technologies: Big Data, API management, and the IoT (Internet of Things). Every executive I speak with confirms that they either have current projects actively using these technologies, or are in the planning stages and about to embark on the mission soon.

Though the underlying need and purpose served are unique to each of these technologies, they all have one thing in common: they all necessitate newer security models and security tools to serve an organization well. I will explain that in a bit, but first let us see what value each of these technologies adds to an organization:

IoT – specific data collection points: sensors placed anywhere and everywhere. Most often, the information collected by these devices is sensitive and contains specifically identifiable data. IoT allows organizations to analyze behaviors and patterns as needed, but it also poses an interesting problem. Gone are the days of terabytes (TB) of data; now we are talking about petabytes (PB), and growing exponentially. IoT devices use M2M communication, a newer channel that creates a new set of threat vectors.

Big Data – stores massive amounts of data (some of it from the aforementioned IoT devices), with the software and infrastructure that let you access it faster at a fraction of today’s cost, further enabling you to capture as many data points as possible.

API – the interface, enabler, and interconnector between systems, providing a uniform and portable interface (whether to the big data itself or to the platform that enables it).

While each of these technologies at first glance appears to serve different constituencies within an enterprise, there is an undeniable interconnectedness. The IoT collects data from everywhere, pouring in tons of data that must not only be stored somewhere but also analyzed properly, so that the dots can be connected into meaningful patterns people can make use of.

Read more on ProgrammableWeb (PW) blog site

The Façade Proxy

KuppingerCole analyst Craig Burton (of Burton Group originally) wrote a recent article about Façade proxies. You can read the article here: http://blogs.kuppingercole.com/burton/2013/03/18/the-faade-proxy/

As Craig notes,

“A Façade is an object that provides simple access to complex – or external – functionality. It might be used to group together several methods into a single one, to abstract a very complex method into several simple calls or, more generically, to decouple two pieces of code where there’s a strong dependency of one over the other. By writing a Façade with the single responsibility of interacting with the external Web service, you can defend your code from external changes. Now, whenever the API changes, all you have to do is update your Façade. Your internal application code will remain untouched.”
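Craig’s description maps directly onto code. Here is a minimal Python sketch (the external service and its fields are made up for illustration): the façade is the single place that talks to the vendor, so when the vendor’s API changes, only the façade needs to change.

```python
class ExternalWeatherService:
    """Stand-in for a third-party web service whose API may change."""
    def query(self, city):
        # Imagine v2 of the vendor API returns temperature
        # in tenths of a degree Celsius.
        return {"city": city, "temp_tenths_c": 215}

class WeatherFacade:
    """Single point of contact with the external service.
    Internal application code calls only this class, so a
    vendor API change is absorbed here and nowhere else."""
    def __init__(self, service):
        self._service = service

    def temperature_celsius(self, city):
        raw = self._service.query(city)
        # Translate the vendor's representation into the
        # simple value our internal code actually wants.
        return raw["temp_tenths_c"] / 10.0

facade = WeatherFacade(ExternalWeatherService())
print(facade.temperature_celsius("Boston"))  # 21.5
```

If the vendor later renames `temp_tenths_c` or changes units, `temperature_celsius` is the only method you touch; every internal caller stays untouched, which is exactly the decoupling Craig describes.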

I call this the “Touchless Proxy.” We have been building touchless gateways for over a decade, and now, using the same underlying concept, we provide a touchless API gateway, or façade proxy.

While Intel is highlighted as a strong solution in this analyst note by KuppingerCole, Craig raises the following point:

“When data leaves any school, healthcare provider, financial services or government office, the presence of sensitive data is always a concern.”

This is especially timely as healthcare providers, financial institutions, and educational institutions rush to expose their data to partners using APIs.

Read more of this post

Why are APIs so popular?

Kin Lane recently wrote a couple of blog posts about why copyrighting an API is not common. I couldn’t agree more that copyrighting an API is uncommon. First, an API definition is just an interface (it is the implementation detail that is important and needs to be guarded), so it doesn’t make any sense to copyright an interface. (It is almost like copyrighting a pretty face 🙂 ). Second, the whole idea of exposing an API is that you are looking for others to finish the work you started by providing just the plumbing. Why would anyone want to get involved with a copyrighted API and finish your work for you?

Kin Lane says, “API copyright would prevent the reuse and remix of common or successful API patterns within a space. We are at a point where aggregating common, popular APIs into single, standardized interfaces is emerging as the next evolution in web and mobile app development.”

Read his complete post at http://apivoice.com/2012/12/08/api-copyright-would-restrict-api-aggregation/index.php.

We have gone from service-aggregation concepts to mashups, and now I am seeing a newer trend: API aggregation.

Keep in mind that APIs are generally offered by vendors who want to expose a specific functionality or platform. If you need cross-platform, cross-provider, cross-functionality options, you need API aggregation. Remember, during the services days, how hard a time we used to have integrating and aggregating services from different vendors? I know some companies are making a good living just building aggregated APIs. 🙂

One usage pattern I see time and again is customers using the strongest points from the vendors of their choice. This was not possible when you were building services: you ended up buying one vendor’s stack and were limited to what it offered, unless you custom-built the weak parts yourself.

Now imagine the power of what you get: you cherry-pick the best-of-breed platforms and the best possible functionality from multiple vendors of your choice and liking.
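An aggregated API in this sense can be as simple as one uniform interface fanning out to several providers, each strong in a different area. The providers below are stand-ins, not real services:

```python
class AggregatedGeocoder:
    """One uniform lookup() over several provider backends,
    trying each in order until one answers. The caller never
    knows (or cares) which vendor served the request."""
    def __init__(self, providers):
        self.providers = providers

    def lookup(self, address):
        for provider in self.providers:
            result = provider(address)
            if result is not None:
                return result
        return None

# Two stand-in providers with different coverage strengths.
def provider_a(address):
    return {"lat": 48.85, "lon": 2.35} if "Paris" in address else None

def provider_b(address):
    return {"lat": 40.71, "lon": -74.0} if "New York" in address else None

geo = AggregatedGeocoder([provider_a, provider_b])
print(geo.lookup("10 Main St, New York"))
```

The cherry-picking happens in the provider list: you order (or route to) whichever vendor is strongest for each case, behind one stable interface.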

Application-aware Firewalls

You may have heard this term recently and wondered what it means. When it comes to security, everyone thinks of firewalls, proxies, IPS, IDS, honeypots, VPN devices, email security, and even web security, but most people don’t think in terms of application-level security unless they are the developer, admin, or user of those specific services, or perhaps a hacker. When your traditional network boundaries disappear, you can’t carry all of those devices with you. When you move out of your traditional boundaries toward the cloud, you trust the cloud provider to deliver those features. But you can’t do the same with application-level security. That is because those devices work at levels below the application layer (Layer 7 in the ISO OSI model). Those lower-layer standards are very well defined and established, whereas the application layer is, to an extent, still evolving: from COBOL to APIs, everything is fair game.

There is a reason enterprises are looking for devices that can do it all. I was reading a security research report the other day suggesting that attackers are moving up the stack to the application layer, because it is so easy to hack into applications nowadays, especially with applications moving to the cloud and introducing new attack vectors, such as a whole layer of API/XML threats (if you are still bound to XML/SOAP and can’t free yourself). Most of the organizations I see don’t have the same solid security at the application level as they do at the network level. This gap developed over the last few years as more and more applications came out using new technologies, exposing themselves to newer threats; on top of that, there is no unified standard among developers for application-level security.

The network security we have today is not “application aware.” This means API/XML and other application-level threats go right through the network defenses you have built over the years. Some people think that if they use REST or JSON they are not prone to attacks the way SOAP/XML/RPC users are, which is a funny thought.

Add to this the fact that when your applications move beyond your enterprise boundary to the cloud, they are exposed to hackers 24×7. It is not only a direct attack on your application you have to worry about, but perhaps a bounce off another application hosted in the same multi-tenant environment. So your new “firewall” should be able to inspect, have visibility into, and analyze application traffic and identify threats. And the issue doesn’t stop there: you also need to check for viruses, malware, and the “intention” of each message (and its attachments) as they pass through. The trouble with most firewalls inspecting traffic is that they look at where a message is going (a port, and maybe an IP address), but not what the message intends to do. There is a reason injection attacks such as SQL injection, XSS, and XPath injection have become so popular.
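To make the distinction concrete: a port-level firewall sees only where a message is going, while an application-aware check looks inside the message body. A toy sketch of such an inspection follows; the patterns are purely illustrative and nowhere near production-grade:

```python
import re

# Crude signatures for a few common application-layer attacks.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),   # SQL injection
    re.compile(r"(?i)<script\b"),            # cross-site scripting (XSS)
    re.compile(r"(?i)\bor\s+1\s*=\s*1\b"),   # tautology-based SQL injection
]

def inspect_payload(body):
    """Return True if the request body looks suspicious.
    A port-level firewall never sees this far into the message."""
    return any(p.search(body) for p in SUSPICIOUS_PATTERNS)

print(inspect_payload("id=5 UNION SELECT password FROM users"))  # True
print(inspect_payload("id=5&name=alice"))                        # False
```

Real application-aware gateways go much further (schema validation, behavioral analysis, content scanning), but even this toy shows why intent cannot be judged from a port number alone.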

There is another issue, and it relates to the way applications are built nowadays. In the old days you controlled both the client and the server, and even, to an extent, the communication between them. Now we expose APIs and let others build the interfaces, middleware, and usage model as they see fit. Imagine a rookie or an outsourced developer writing substandard code and putting it out there for everyone to poke and prod for weaknesses. As we all know, a chain is only as strong as its weakest link; the problem is that it is hard to figure out which is your weakest link. Application-aware firewalls can not only inspect, analyze, and control traffic to applications; having inherent knowledge of the application, they can work at a deeper level too.

This gives you the freedom to move application-level security out of your applications, services, and APIs to a centralized location, so your developers can concentrate on what they are supposed to do: develop the services that matter to the organization, leaving the other nuances to the experts.

That is where Intel/McAfee comes into play. We have solutions that help you build solid applications, services, and APIs while insulating and abstracting the ancillary services into centralized (or de-centralized) locations managed globally. Our solutions let you abstract application security, mobile middleware, data mediation, message transformation, message routing, quality of service, service-level enforcement, protocol mediation, application firewalls, web application firewalls (WAFs), and more, in a standards-based fashion, thereby freeing your developers.

Check out our solution set: Intel ESG (Expressway Service Gateway), McAfee MSG (McAfee Services Gateway), McAfee MWG (McAfee Web Gateway), and the Intel API Gateway, all of which will help you take your enterprise and cloud services to the next level.

http://software.intel.com/en-us/articles/Expressway-Service-Gateway/

http://software.intel.com/en-us/articles/Cloud-Service-Brokerage-API-Resource-Center/

http://software.intel.com/en-us/articles/REST-Web-Services-API-Security/

http://www.mcafee.com/us/products/services-gateway.aspx

http://www.mcafee.com/us/products/web-gateway.aspx
