Is Your AI Ethical?


[Pic Courtesy: Atlantic Re:think]

A group of teachers successfully sued the Houston Independent School District (HISD) in 2017, claiming that their Fourteenth Amendment rights were violated when the school district used an opaque artificial intelligence (AI) algorithm to evaluate and terminate 221 teachers. The judge overturned the use of the AI algorithm, writing, “When a public agency adopts a policy of making high stakes employment decisions based on secret algorithms (aka, AI and Neural Networks) incompatible with a minimum due process, the proper remedy is to overturn the policy.”

The fields of computer-modeled risk assessment and algorithmic decision making have been around for a while, but AI takes them to the next level, as demonstrated by Cambridge Analytica’s recent infamous work. AI is having an even bigger impact on our lives than what we see in movies like Terminator and I, Robot. While those movies suggest that robots might end human freedom and control us, the biased, unfair, or downright unethical decision-making algorithms that machines automatically create and apply pose a bigger risk to humanity.

Read more of this post

Are you a “data liberal” or a “data conservative”?

– By Andy Thurai (@AndyThurai). This article was originally published on the Xively blog.

In the last decade, as a society, we have worked very hard toward “liberating our data”: unshackling it from the plethora of constraints unnecessarily imposed by IT. In contrast, in the ’90s and early ’00s, data was kept in the Stygian depths of the data warehouse, where only an elite few had access to it or knew about it and the characteristics defining it.

Once we had the epiphany that we could glean amazing insights from data, even from our “junk” data, our efforts quickly refocused on exposing data in every possible way. We exposed data at the bare-bones level through data APIs, at the level of value-added data platforms, or even as industry-based solution platforms.

Thus far, we have spent a lot of time analyzing and finding patterns, or in other words innovating, with data that has already been collected. I see, however, many companies taking things to the proverbial next level.

In order to innovate, we must evolve to collect what matters most to us, as opposed to resigning ourselves to just using what has been given to us. In other words, in order to invent, you need to start with an innovative data collection model. What this means is moving with speed and collecting the specific data that will add value in a meaningful way, not only for us but also for our customers.

Read more of this post on the Xively blog.

Prescriptive Analytics: Predict and Shape the Future

This article originally appeared on Gigaom.

– By Andy Thurai (@AndyThurai) and Atanu Basu (@atanubasu). Andy Thurai is Chief Architect and CTO of Intel’s App Security unit. Atanu Basu is the CEO of Ayata.

Knowledge is power, according to Francis Bacon, but knowing how to use knowledge to create an improved future is even more powerful. The birth of a sophisticated Internet of Things has catapulted hybrid data collection, which mixes structured and unstructured data, to new heights.

Broken Analytics

According to Gartner, 80% of the data available today has been collected within the past year. In addition, 80% of the world’s data today is unstructured. Using older analysis, security, and storage tools on this rich data set is not only painful but will also produce laughable results.

Even now, most corporations use descriptive/diagnostic analytics. They use existing structured data and correlated events, but usually leave the newer, richer, bigger unstructured data untouched. The analyses are built on partial data and usually produce incomplete takeaways.

Smarter Analytics to the rescue

A newer type of analytics technology, called prescriptive analytics, is gaining momentum; it is about figuring out the future and shaping it using this hybrid data set. Prescriptive analytics is evolving to a stage where business managers, without the need for data scientists, can predict the future and make prescriptions to improve that predicted future.

Prescriptive analytics is working toward that “nirvana” of event prediction paired with a proposed set of desired actions that can help mitigate an unwanted situation before it happens. If a machine prescribes a solution anticipating a future issue and you ignore it, the machine can think forward and adapt automatically. It can recognize that no action was taken, predict a different course of events based on the missed action, and generate a different prescription that takes the new future into account.
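To make that loop concrete, here is a minimal, self-contained sketch of how such an adapt-and-re-prescribe cycle could look. The risk model, threshold, and maintenance scenario below are toy assumptions for illustration, not any vendor's actual algorithm.

```python
# Toy sketch of an adaptive prescriptive loop; the risk model and
# maintenance scenario are made up purely for illustration.

def predict_risk(state):
    """Toy predictor: risk of failure grows the longer maintenance is deferred."""
    return min(1.0, 0.1 * state["cycles_since_maintenance"])

def prescribe(risk, threshold=0.5):
    """Propose a corrective action once predicted risk crosses a threshold."""
    return "schedule maintenance" if risk > threshold else None

def run_loop(cycles=10, operator_follows_advice=False):
    state = {"cycles_since_maintenance": 0}
    for cycle in range(cycles):
        risk = predict_risk(state)                 # predict the future
        action = prescribe(risk)                   # prescribe an action, if warranted
        print(f"cycle={cycle} risk={risk:.2f} prescription={action}")
        if action and operator_follows_advice:
            state["cycles_since_maintenance"] = 0  # acting on the advice changes the future
        else:
            state["cycles_since_maintenance"] += 1 # advice ignored: the next prediction reflects that

if __name__ == "__main__":
    run_loop(operator_follows_advice=False)        # watch the prescriptions adapt as advice is ignored
```

The point of the sketch is the feedback: whether or not the prescription is followed becomes part of the state, so the next prediction and prescription account for the new future.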

Read more of this post

How to effectively build a hybrid SaaS API management strategy

– By Andy Thurai (@AndyThurai) and Blake Dournaee (@Dournaee). This article was originally published on Gigaom.

Summary: Enterprises seeking agility are turning to the cloud, while those concerned about security are holding tight to their legacy, on-premises hardware. But what if there’s a middle ground?

If you’re trying to combine both a legacy and a cloud deployment strategy without having to do everything twice, a hybrid strategy might offer the best of both worlds. We discussed that in our first post, API Management – Anyway you want it!

In that post, we discussed the different API deployment models as well as the need to understand the components of API management, your target audience, and your overall corporate IT strategy. The article drew tremendous readership and positive comments (thanks for that!). But there seems to be a little confusion about one particular deployment model we discussed – the Hybrid (SaaS) model. We heard from a number of people asking for more clarity on this model, so here it is.

Meet Hybrid SaaS

A good definition of Hybrid SaaS would be: “Deploy the software as a SaaS service and/or as an on-premises solution, make those instances co-exist and communicate securely with each other, and have each be a seamless extension of the other.”
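One common reading of that definition, sketched below purely for illustration, is an on-premises gateway that keeps handling live API traffic while a SaaS instance hosts the developer-facing pieces, with the two securely synchronizing state. The component names, roles, and URLs are hypothetical, not any specific vendor's product.

```python
# Purely illustrative sketch of a Hybrid SaaS topology; names and URLs are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Instance:
    name: str
    location: str   # "on-premises" or "saas"
    role: str       # e.g. "api-traffic-gateway" or "developer-portal"
    endpoint: str

@dataclass
class HybridDeployment:
    instances: List[Instance]
    sync_channel: str   # how the instances securely exchange configuration and state

    def describe(self) -> None:
        for inst in self.instances:
            print(f"{inst.name}: {inst.role}, running {inst.location} at {inst.endpoint}")
        print(f"Instances kept in sync over: {self.sync_channel}")

deployment = HybridDeployment(
    instances=[
        Instance("gateway", "on-premises", "api-traffic-gateway", "https://apis.internal.example.com"),
        Instance("portal", "saas", "developer-portal", "https://portal.vendor.example.com"),
    ],
    sync_channel="mutual TLS replication of configuration and usage data",
)
deployment.describe()
```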

Read more of this post

API Management – Anyway you want it!

– By Andy Thurai (Twitter:@AndyThurai) and Blake Dournaee (@Dournaee). This article originally appeared on Gigaom.

Enterprises are building an API First strategy to keep up with their customers’ needs and provide resources and services that go beyond the confines of the enterprise. With this shift to using APIs as an extension of enterprise IT, the key challenge remains choosing the right deployment model.

Even with bullet-proof technology from a leading provider, your results could be disastrous if you start off with the wrong deployment model. Consider developer scale, innovation, incurred costs, the complexity of API platform management, and so on. On the other hand, forcing internal developers to hop out to the cloud to get API metadata when your internal API program is just starting is an exercise that leads to inefficiency and inconsistencies.

Components of APIs

But before we get to deployment models, you need to understand the components of API management, your target audience, and your overall corporate IT strategy. These will certainly influence your decisions.

Not all enterprises embark on an API program for the same reasons – enterprise mobility programs, rationalizing existing systems as APIs, or finding new revenue models, to name a few. All of these factors will influence your decisions.

API management has two major components: the API traffic and the API metadata. The API traffic is the actual data flow, and the metadata contains the information needed to certify, protect, and understand that data flow. The metadata describes the details of the collection of APIs: interface details, constructs, security, documentation, code samples, error behavior, design patterns, compliance requirements, and the contract (usage limits, terms of service). This is the rough equivalent of the registry and repository from the days of service-oriented architecture, but it contains a lot more, and it differs in a key way: it is usable and human-readable. Some vendors call this the API portal or API catalog.

Next you have developer segmentation, which falls into three categories – internal, partner, and public. The last category describes a zero-trust model where anyone could potentially be a developer, whereas the other two categories have varying degrees of trust. In general, internal developers are more trusted than partners or public, but this is not a hard and fast rule.
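To ground those two ideas, the metadata “catalog” and the three developer audiences, here is a purely illustrative sketch of what a single catalog entry might capture. The field names, values, and URLs are hypothetical and not taken from any particular API management product.

```python
# Illustrative only: a hypothetical API catalog entry combining the metadata
# described above with the three developer audiences. Field names are made up.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Audience(Enum):
    INTERNAL = "internal"   # typically the most trusted
    PARTNER = "partner"     # trust established by contract
    PUBLIC = "public"       # zero-trust: anyone can sign up

@dataclass
class ApiCatalogEntry:
    name: str
    interface_url: str          # interface details, e.g. a machine-readable contract
    docs_url: str               # documentation and code samples
    security: List[str]         # e.g. ["oauth2", "api-key"]
    error_behavior: str         # how failures are reported to callers
    compliance: List[str]       # e.g. ["PCI-DSS"]
    rate_limit_per_hour: int    # part of the usage contract
    terms_of_service_url: str
    audiences: List[Audience] = field(default_factory=lambda: [Audience.INTERNAL])

entry = ApiCatalogEntry(
    name="payments",
    interface_url="https://portal.example.com/apis/payments/interface",
    docs_url="https://portal.example.com/docs/payments",
    security=["oauth2"],
    error_behavior="structured error responses with documented error codes",
    compliance=["PCI-DSS"],
    rate_limit_per_hour=10_000,
    terms_of_service_url="https://portal.example.com/tos",
    audiences=[Audience.INTERNAL, Audience.PARTNER],
)
print(entry.name, [a.value for a in entry.audiences])
```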

Armed with this knowledge, let’s explore popular API Management deployment models, in no particular order.

Read more of this post

How APIs Fuel Innovation

– By Andy Thurai (Twitter: @AndyThurai)

This article originally appeared on ProgrammableWeb.

There has been so much talk about APIs: how they add revenue channels, create brand-new partnerships, allow business partners to integrate with ease, and help promote your brand. But an important and often overlooked aspect, which happens to be a byproduct of this new paradigm shift, is the faster innovation channel they provide. Yes, Mobile First and the API economy are enabled by APIs.


Read more of this post

Taming Big Data Location Transparency

Andy Thurai, Chief Architect & CTO, Intel App Security & Big Data (@AndyThurai) | David Houlding, Privacy Strategist, Intel (@DavidHoulding)

The original version of this article appeared on VentureBeat.

Concern over big-government surveillance and security vulnerabilities has reached global proportions. Big data and analytics, government surveillance, online tracking, behavior profiling for advertising, and other major tracking trends have elevated privacy risks and identity-based attacks. This has prompted review and discussion of revoking or revising the data protection laws governing trans-border data flow, such as the EU Safe Harbor framework, Singapore’s privacy laws, and Canadian privacy laws. The business impact to the cloud computing industry is projected to be as high as US $180 billion.

The net effect is that the need for privacy has emerged as a key decision factor for consumers and corporations alike. Data privacy, and more importantly identity-protected, risk-mitigated data processing, are likely to grow further in importance as major new privacy-sensitive technologies emerge: wearables, the Internet of Things (IoT), APIs, and the social media that powers big data and analytics, all of which increase the associated privacy risks and concerns. Brands that establish and build trust with users will be rewarded with market share, while those that repeatedly abuse user trust with privacy faux pas will see user trust and market share erode. Providing transparency and protection for users’ data, regardless of how it is stored or processed, is key to establishing and building user trust. This can only happen if providers are willing to offer this location and processing transparency to the corporations that use them.

Read more of this post

Don’t be stupid, use (cloud) protection!

– By Andy Thurai (Twitter: @AndyThurai)

This article originally appeared on PandoDaily.

Looks like Obama read my blog! The White House got the message. Politicians now seem to understand that while they are trying to do things to save the country, such as creating NSA programs, they cannot do so at the cost of thriving, innovative businesses, especially cloud programs, which are in their infancy. Recently, Obama met behind closed doors with technology leaders from Apple, AT&T, Google, and others to discuss this issue.

While American initiatives, both federal and commercial, are trying to do everything possible to fix this issue, I see vultures in the air. I have seen articles urging nationalism among Canadian companies, asking them to “go Canadian.” These providers are also using scare tactics to steer business toward themselves, which, in my view, is not necessarily going to help global companies.

Read more of this post

Snowden gone, ripples remain!

– By Andy Thurai (Twitter: @AndyThurai)

[The original version of this post appeared on PandoDaily.]

Though Snowden is long gone now, the ripple effects that he created are going to remain for a long time to come. If you haven’t done so already, I suggest you read about the NSA surveillance programs PRISM and XKeyscore before you continue with this article.

Essentially, these government programs are creating nervous times for my Canadian, European, and APAC customers who use US cloud providers. Given the very strict data residency and data privacy requirements in these parts of the world to protect citizens’ sensitive data, the latest incidents have, through “guilt by association” alone, implicated most corporations that move their data across borders. One thing is certain: these programs came to light only because someone went public. Just because a specific country’s cloud provider hasn’t been accused yet (or found guilty) doesn’t necessarily mean it isn’t doing the same thing. There is a chance that it is and simply hasn’t been caught yet.

Unfortunately, the cloud community spent years alleviating enterprises’ fears about moving data to the cloud. In those days, the fear was about hackers and disgruntled employees or partners accidentally or willfully exposing data. Now providers must fight an uphill battle of convincing those same enterprises, not about hackers, but about legal entities and governments.

Read more of this post

ZDNet observation about the Chief API Officer

Joe McKendrick of ZDNet wrote a blog post commenting on my Chief API Officer article. You can read it here.

He makes a couple of valid observations that deserve some clarification.

“CMOs may also help reinvent the business as a cloud provider in its own right — even if the business is something other than technology.” – I agree. This is because IT is already crunched for capital and struggling to come up with money to spend on new platforms. The CMO not only has more money, but can also shift spending from other marketing and revenue-generating channels to this newer channel, which has more potential.

“And CEOs and CFOs may like this new direction, since the CMO’s job is all about creating new business.” – I agree; I have seen this time and again. There are customers, Aetna being a prime example, who run (or endorse) their API programs out of the CEO’s office. Watch for my follow-up article, where I discuss this in more detail.

“Is this a good thing? Enterprise technology has become incredibly complex, and it takes very technically proficient individuals to understand and guide the business to invest wisely and avoid costly security errors. Plus, many of the consumerish services being adopted by marketing departments are relatively simple compared to the programming and administration that goes into enterprise IT systems.” – This is debatable. First of all, we are not trying to create a new trend, just trying to embrace one that is already here: IT spending supported by other organizations that are cash-rich, as opposed to cash-strapped IT operations. Plus, when you invest purely on an opex model, as opposed to a capex model, the expenses are relatively cheaper (on a yearly or usage basis, not on a TCO basis, which is another big debate). Ultimately, what I am suggesting is that while embracing this trend, we should provide those other organizations with a more mature, robust, and secure solution that has the oversight and governance of a mature corporate IT unit, even though it is owned, operated, measured, and managed by people outside corporate IT.