Prescriptive Analytics: Predict and Shape the Future

This article originally appeared on Gigaom

-  By Andy Thurai (@AndyThurai) and Atanu Basu (@atanubasu). Andy Thurai is the Chief Architect and CTO for the Intel App Security unit. Atanu Basu is the CEO of Ayata.

Knowledge is power, according to Francis Bacon, but knowing how to use knowledge to create an improved future is even more powerful. The birth of a sophisticated Internet of Things has catapulted hybrid data collection, which mixes structured and unstructured data, to new heights.

Broken Analytics

According to Gartner, 80% of the data available today has been collected within the past year. In addition, 80% of the world’s data is unstructured. Using older analysis, security, and storage tools on this rich data set is not only painful, it produces laughable results.

Even now, most corporations use descriptive/diagnostic analytics. They analyze existing structured data and correlated events but usually leave the newer, richer, bigger unstructured data untouched. The resulting analyses are built on partial data and produce incomplete takeaways.

Smarter Analytics to the rescue

Gaining momentum is a newer analytics technology, called prescriptive analytics, which is about predicting the future and shaping it using this hybrid data set. Prescriptive analytics is evolving to a stage where business managers – without the help of data scientists – can predict the future and apply prescriptions to improve that predicted future.

Prescriptive analytics is working towards that “nirvana” of event prediction paired with a proposed set of actions that can mitigate an unwanted situation before it happens. If a machine prescribes a solution in anticipation of a future issue and you ignore it, the machine can think forward and adapt automatically: it realizes no action was taken, predicts a different course of events based on the missed action, and generates a new prescription that takes the changed future into account.
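
To make that feedback loop concrete, here is a minimal sketch of the adapt-and-re-prescribe cycle. The predict and prescribe functions are toy stand-ins of my own, not any vendor’s actual engine.

```python
# Minimal sketch of a prescriptive feedback loop. predict() and
# prescribe() are toy stand-ins, not a real analytics engine.

def predict(state):
    # Toy model: failure becomes more likely as load climbs.
    return {"failure_risk": min(1.0, state["load"] / 100.0)}

def prescribe(forecast):
    # Propose an action matched to the severity of the predicted risk.
    if forecast["failure_risk"] > 0.9:
        return "emergency_shutdown"
    if forecast["failure_risk"] > 0.7:
        return "shed_load"
    return None

def step(state, action_taken):
    forecast = predict(state)
    action = prescribe(forecast)
    if action and not action_taken:
        # The prescription was ignored: the machine assumes a worse
        # future, re-predicts, and issues a revised prescription.
        state["load"] += 20
        forecast = predict(state)
        action = prescribe(forecast)
    return forecast, action

state = {"load": 75}
print(step(state, action_taken=False))
# ({'failure_risk': 0.95}, 'emergency_shutdown') -- the ignored action
# produced a new future and a different prescription.
```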


What the Frack?

I was recently doing research for an article on a practice in the ONG (Oil & Natural Gas) sector that has been making huge headlines: “fracking”.

For those who ask What the Frack?

Fracking, or frack (or hydraulic fracturing – oh my, how we love shortening things into cute names), is a procedure in which you fracture (or crack) rock with pressurized fluid in the hope of releasing oil or gas. Combined with horizontal drilling, it lets us reach reserves that were otherwise impossible to tap.

Conventional places are running dry, so we need to find new sources – oil out of sand, gas out of rocks. We are becoming God by performing these miracles!


How to effectively build a hybrid SaaS API management strategy

- By Andy Thurai (@AndyThurai) and Blake Dournaee (@Dournaee). This article was originally published on Gigaom

Summary: Enterprises seeking agility are turning to the cloud while those concerned about security are holding tight to their legacy, on-premises hardware. But what if there’s a middle ground?

If you’re trying to combine a legacy deployment strategy with a cloud one without having to do everything twice, a hybrid strategy might offer the best of both worlds. We discussed that in our first post, “API Management – Anyway you want it!”.

In that post, we discussed the different API deployment models, as well as the need to understand the components of API management, your target audience, and your overall corporate IT strategy. The article drew tremendous readership and positive comments. (Thanks for that!) But there seemed to be a little confusion about one particular deployment model we discussed – the Hybrid (SaaS) model. We heard from a number of people asking for more clarity on it. So here it is.

Meet Hybrid SaaS

A good definition of Hybrid SaaS would be: “Deploy the software as a SaaS service and/or as an on-premises solution; make those instances co-exist, communicate securely with each other, and act as a seamless extension of each other.”
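
To make that definition concrete, here is a minimal sketch of one common split: the metadata (portal, docs, keys) lives in the SaaS instance while runtime traffic stays on the on-premises gateway. The host names and the split itself are illustrative assumptions, not a prescribed architecture.

```python
# Illustrative sketch of a hybrid SaaS split: metadata (portal/catalog)
# lives in the SaaS instance, runtime API traffic stays on-premises.
# Host names are hypothetical placeholders.

ENDPOINTS = {
    "metadata": "https://portal.example-saas.com",   # SaaS: docs, keys, catalog
    "traffic":  "https://gateway.internal.example",  # on-prem: actual API calls
}

def route(request_kind: str) -> str:
    """Pick the instance that should serve this kind of request."""
    if request_kind in ("docs", "keys", "catalog"):
        return ENDPOINTS["metadata"]
    return ENDPOINTS["traffic"]

print(route("docs"))     # -> SaaS portal
print(route("payment"))  # -> on-prem gateway
```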


Which kind of Cyborg are you?

By Andy Thurai (@AndyThurai)

[This article is the result of my conversations with Chris Dancy (www.Chrisdancy.com) on this topic. The original version was published in Wired magazine @ http://www.wired.com/insights/2014/01/kind-cyborg/].

Machines are replacing humans in the thinking process. The field of Cognitive Thinking combines rich data collection (with a wide array of sensors), machine learning, predictive analysis, and cognitive anticipation in the right mix. Machines can do “just-in-time machine learning” rather than relying on pre-built predictive models, making them virtually model-free.

The Cognitive Computing concept revolves around a few combined concepts:

  1. Machines learn and interact naturally with people to extend what either humans or machines could do on their own.
  2. They help human experts make better decisions.
  3. These machines collect richer data sets and use them in their decision-making process, which drives the need for intelligent interconnected devices – a network of intelligent sensors feeding the super brain.
  4. Machine learning algorithms sense, predict, infer, think, analyze, and reason before they make decisions.
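
The “just-in-time machine learning” mentioned above roughly corresponds to what is usually called online learning. Here is a minimal sketch using scikit-learn on synthetic data; it illustrates the idea, not any specific cognitive platform.

```python
# Minimal sketch of "just-in-time" (online) learning: the model updates
# on each new batch of sensor readings instead of being trained once as
# a fixed predictive model. Assumes scikit-learn; data is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # e.g., "normal" vs "anomalous"

rng = np.random.default_rng(0)
for _ in range(100):  # a stream of incoming sensor batches
    X = rng.normal(size=(16, 4))          # 16 readings, 4 sensors each
    y = (X.sum(axis=1) > 0).astype(int)   # synthetic label
    model.partial_fit(X, y, classes=classes)  # learn incrementally

print(model.predict(rng.normal(size=(1, 4))))
```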

Which kind of cyborg are you?

The field of cybernetics has been around for a long time. Essentially, it is the science (or art) of the evolution of cyborgs. Cyborgs have evolved from assistive cyborgs to creative cyborgs. Not only can they adapt to human situations, but they can also learn from human experiences (machine learning), think (cognitive thinking), and figure out (situation analysis) how to help us rather than being told.


ATOS API: A zero cash payment processing environment without boundaries

When ATOS, a big corporate conglomerate (EUR 8.8 billion in revenue and 77,100 employees in 52 countries), decided that they wanted to become the dominant Digital Service Provider (DSP) for payments, they had a clear mandate: build a payment enterprise without boundaries. [Worldline is an ATOS subsidiary set up to handle the DSP program exclusively.] One of the magic bullets out of that mandate was:

The growing trust of consumers to make payments for books, games and magazines over mobiles and tablets evolving into a total acceptance of cashless payments in traditional stores and retail outlets bringing the Zero Cash Society ever closer.

This required them to rethink the way they processed payments. They are one of the largest payment processors in the world, but they had focused primarily on big enterprises and name-brand shops. Onboarding every customer took a long time, and the integration costs were high. After watching smaller companies such as Dwolla, Square, and others trying to revolutionize the world, they decided it was time for the giant to wake up.

The first decision was to embrace the smaller vendors. To do that, they couldn’t remain a high-touch, time-consuming, slow-to-integrate, high-cost-per-customer onboarding environment. They wanted a platform that is low touch, completely API driven, and fully self-serviced, continuously integrating while still providing secure payment processing. In addition, they also had to move beyond swipe-based retail payment systems to support ePayments and mobile payments. Essentially, they wanted a payment platform that catered not only to today’s needs but was flexible enough to expand and scale for future needs and demands. And they wanted to offer it as a service to their customers.
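
To give a feel for what “low touch, completely API driven, fully self-serviced” could look like in practice, here is a hypothetical onboarding sketch. The host, endpoints, and fields are invented for illustration; they are not ATOS/Worldline’s actual API.

```python
# Hypothetical sketch of self-service merchant onboarding against a
# fully API-driven payment platform. Endpoints and fields are invented
# for illustration only.
import requests

BASE = "https://api.payments.example.com/v1"  # placeholder host

def onboard_merchant(name: str, country: str) -> str:
    """Register a merchant and return its ID -- no manual steps involved."""
    resp = requests.post(f"{BASE}/merchants",
                         json={"name": name, "country": country},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["merchant_id"]

def create_payment(merchant_id: str, amount_cents: int, currency: str) -> dict:
    """Create a payment on behalf of the onboarded merchant."""
    resp = requests.post(f"{BASE}/merchants/{merchant_id}/payments",
                         json={"amount": amount_cents, "currency": currency},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()

# Usage (against a real host):
# merchant_id = onboard_merchant("Corner Bookshop", "FR")
# print(create_payment(merchant_id, 1250, "EUR"))
```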


API Days Paris – impressionnant!

Recently I had the pleasure of speaking at API Days in Paris. It was a great event, and the crowd was larger than I expected.

The usual suspects were presenting, including Kin Lane, Adam DuVander, Mike Amundsen, Mehdi, myself, SOA Software, WSO2, 3Scale, MuleSoft, and FaberNovel, along with some surprises. Interestingly, I saw Microsoft, HP, and Rackspace there for the first time, and IBM is starting to show up at more events now as well.

In my opinion, the best talk would probably go to Rafi Haladjian of Sen.Se (The End of The Internet of Things). Rumor has it that the slides not working was part of his act :) Regardless, he improvised and spoke without any visual cues. It was funny, full of substance, and included a good amount of thought leadership. He started off with a humorous bit, claiming he doesn’t speak English and knows nothing about the Internet of Things. I wonder how it would have turned out with slides.

Interestingly enough, APISpark (Restlet) had the big stage with a gold sponsorship and a prime speaking spot, and demonstrated some good ideas. We will have to wait and see how it turns out when the next conference rolls around.


API Management – Anyway you want it!

- By Andy Thurai (Twitter:@AndyThurai) and Blake Dournaee (@Dournaee). This article originally appeared on Gigaom.

Enterprises are building an API First strategy to keep up with their customers’ needs and provide resources and services that go beyond the confines of the enterprise. With this shift to using APIs as an extension of enterprise IT, the key challenge remains choosing the right deployment model.

Even with bullet-proof technology from a leading provider, your results could be disastrous if you start off with the wrong deployment model. Consider developer scale, innovation, costs incurred, the complexity of API platform management, and so on. For example, forcing internal developers to hop out to the cloud to get API metadata when your internal API program is just starting is an exercise in inefficiency and inconsistency.

Components of APIs

But before we get to deployment models, you need to understand the components of API management, your target audience and your overall corporate IT strategy. These certainly will influence your decisions.

Not all enterprises embark on an API program for the same reasons – enterprise mobility programs, rationalizing existing systems as APIs, or finding new revenue models, to name a few.

API management has two major components: the API traffic and the API metadata. The API traffic is the actual data flow; the metadata contains the information needed to certify, protect, and understand that data flow. The metadata describes the details of the collection of APIs: interface details, constructs, security, documentation, code samples, error behavior, design patterns, compliance requirements, and the contract (usage limits, terms of service). This is the rough equivalent of the registry and repository from the days of service-oriented architecture, but it contains a lot more, and it differs in a key way: it’s usable and human-readable. Some vendors call this the API portal or API catalog.
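
To illustrate the distinction, a single catalog entry might look something like the sketch below. The fields mirror the list above; the values are invented.

```python
# Illustrative API catalog (metadata) entry. The traffic is the data
# flowing through /v1/orders; everything below merely describes it.
# All values are invented for illustration.
api_catalog_entry = {
    "name": "orders",
    "interface": {"path": "/v1/orders", "methods": ["GET", "POST"]},
    "security": {"auth": "oauth2", "scopes": ["orders.read", "orders.write"]},
    "documentation": "https://developer.example.com/docs/orders",
    "code_samples": ["python", "java"],
    "error_behavior": {"rate_limited": 429, "unauthorized": 401},
    "compliance": ["PCI-DSS"],
    "contract": {"usage_limit": "1000 req/hour",
                 "terms": "https://example.com/tos"},
}
```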

Next you have developer segmentation, which falls into three categories: internal, partner, and public. The last category describes a zero-trust model where anyone could potentially be a developer, whereas the other two carry varying degrees of trust. In general, internal developers are more trusted than partners or the public, but this is not a hard and fast rule.
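
One way to see why this segmentation matters operationally: each tier typically maps to different defaults for quotas and key approval. The numbers below are invented for illustration.

```python
# Hypothetical policy defaults per developer segment. The zero-trust
# public tier gets the tightest limits; all numbers are invented.
SEGMENT_POLICY = {
    "internal": {"rate_limit_per_min": 5000, "key_approval": "automatic"},
    "partner":  {"rate_limit_per_min": 1000, "key_approval": "contract review"},
    "public":   {"rate_limit_per_min": 60,   "key_approval": "manual review"},
}

def policy_for(segment: str) -> dict:
    # Unknown callers default to the zero-trust public tier.
    return SEGMENT_POLICY.get(segment, SEGMENT_POLICY["public"])

print(policy_for("partner"))
```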

Armed with this knowledge, let’s explore popular API Management deployment models, in no particular order.

