What are your “undocumented” APIs up to?

Do you know about the “Snappening”? It is the story of private Snapchat pictures turning from Casper, the friendly ghost, into a scary Halloween ghost. Recently, Snapchat suffered a second incident in which about 200,000 users’ private pictures were exposed online, many of them pictures of underage users aged 13-17. Most users had no idea these photos were being stored by anyone. Given the nature of some of those pictures (compromising pictures of minors), they can be considered illegal to possess.

[Image courtesy: Casper’s Scare School]

[The Snappening is a little different from the Fappening of a few months ago, in which female celebrities’ nude pictures were stolen from iCloud. In that case, the attack targeted specific celebrity accounts with a combination of brute force and phishing. Hence, it was limited to very few accounts rather than the massive scale of the Snapchat incident.]



Is your API an asset or a liability?

This article was originally published on VentureBeat.

A touchy API topic is data ownership and liability, regardless of whether the APIs are open or protected. Obviously, depending on your business model and needs, you will choose to expose your APIs and the underlying assets to your developers, partners, public developers, your consumers, or other parties. While almost everyone talks about API business relationships, the liability concern brings the legal relationship to the forefront.

[Image courtesy: jasonlove.com]

APIs are considered a contract between the data supplier (or API provider) and the app provider. If multiple API providers publish APIs from a central place, and multiple third parties use that API catalog to build apps for their consumers (end users), the situation becomes complicated. While you can address some of this by writing detailed contracts and making the app providers and end customers agree to the terms of use before they consume those APIs, as a provider you are also responsible for implementing controls around your APIs to mitigate most, if not all, of the risks involved.
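To make the idea of provider-side controls concrete, here is a minimal sketch of the kind of gateway check described above: a call is allowed only if the app has accepted the current terms of use and is within its rate limit. All names (app IDs, terms versions, limits) are hypothetical and purely illustrative; a real API gateway would back these with a policy store and an audit log.

```python
import time

# Hypothetical in-memory registries, for illustration only.
ACCEPTED_TERMS = {"app-123": "v2"}   # app id -> terms version the app accepted
CURRENT_TERMS = "v2"
RATE_LIMIT = 5                        # max calls per rolling window
WINDOW_SECONDS = 60

_call_log = {}                        # app id -> timestamps of recent calls


def authorize(app_id, now=None):
    """Allow a call only if the app accepted the current terms of use
    and is within its rate limit. Returns (allowed, reason)."""
    now = time.time() if now is None else now
    # Contract control: no acceptance of current terms, no access.
    if ACCEPTED_TERMS.get(app_id) != CURRENT_TERMS:
        return False, "terms-not-accepted"
    # Risk control: cap usage within a rolling time window.
    calls = [t for t in _call_log.get(app_id, []) if now - t < WINDOW_SECONDS]
    if len(calls) >= RATE_LIMIT:
        return False, "rate-limited"
    calls.append(now)
    _call_log[app_id] = calls
    return True, "ok"
```

The point of the sketch is that the legal contract and the technical control reinforce each other: the terms check enforces the paperwork, and the rate limit bounds the damage if an app misbehaves anyway.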


Enterprise IOT: Mixed Model Architecture

– By Andy Thurai (@andythurai)

This article was originally published on VentureBeat.

Recently, there has been a lot of debate about how IoT (Internet of Things) affects your architecture, security model, and corporate liability. Many companies seem to think they can solve these problems by centralizing the solution and enforcing it collectively at the hub, moving as far away as possible from the data collection points (not to be confused with data centers). There is also a lot of talk about the hub-and-spoke model winning this battle. Recently, Sanjay Sarma of MIT, a pioneer in the IoT space, spoke on this very topic at MassTLC (where I was fortunate enough to present as well). But based on what I am seeing in the field, and on how actual implementations work, I disagree with this one-size-fits-all notion.
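A mixed-model architecture, as opposed to a pure hub-and-spoke one, can be sketched as a simple placement policy: latency-critical or safety-critical decisions stay at the edge, while everything else flows to the central hub for aggregation. The thresholds and field names below are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative placement policy for a mixed (edge + hub) IoT architecture.
# A reading is a plain dict; the keys used here are assumptions, not a
# real device schema.

EDGE_LATENCY_BUDGET_MS = 50  # hypothetical cutoff for round-tripping to a hub


def placement(reading):
    """Decide where a reading should be processed.

    Safety-critical readings, or ones whose deadline cannot survive a
    round trip to the hub, are handled at the edge; the rest go to the
    hub for centralized aggregation and enforcement.
    """
    if reading.get("safety_critical"):
        return "edge"
    if reading.get("max_latency_ms", 1000) < EDGE_LATENCY_BUDGET_MS:
        return "edge"
    return "hub"
```

The design point is that neither extreme wins everywhere: the hub is the right place for cross-device analytics and policy, but it cannot meet every device's latency or safety requirements, which is the case for a mixed model.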


Value of Things – Internet of Things

Recently, I had the privilege of presenting on IoT security, alongside Michael Curry of IBM, at the MassTLC “Value of Things” conference. You can see the slides here: http://www.slideshare.net/MassTLC/andy-thurai-iot-security. I will post the video once it is published.

One of the topics I discussed, which resonated well with the crowd, was IoT devices doing both data collection and process control on the same device, and often on the same plane. This means that anyone with access to those data collection mechanisms can also control the processes, which could be dangerous in the wrong hands.
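One way to mitigate the same-plane risk is to require separate credentials for telemetry and actuation, so that compromising a read-only sensor feed does not grant control of the process. The sketch below shows the idea with hypothetical scope names and a toy device; none of it reflects a real IoT stack.

```python
# Hypothetical permission scopes; the names are illustrative only.
SENSOR_READ = "sensor:read"
ACTUATE = "device:actuate"


class Device:
    """Toy device that separates its data plane from its control plane."""

    def __init__(self):
        self.temperature = 21.5
        self.valve_open = False

    def handle(self, command, granted_scopes):
        """Route a command, enforcing that a telemetry credential
        can never reach the control plane."""
        if command == "read_temp":
            # Data plane: read-only, gated by the weaker scope.
            if SENSOR_READ not in granted_scopes:
                raise PermissionError("telemetry scope required")
            return self.temperature
        if command == "open_valve":
            # Control plane: actuation requires a separate, stronger scope.
            if ACTUATE not in granted_scopes:
                raise PermissionError("actuation scope required")
            self.valve_open = True
            return self.valve_open
        raise ValueError("unknown command")
```

With this separation, an attacker who steals the data collection credential can still read sensor values, but any attempt to actuate fails, which is exactly the isolation that same-plane designs lack.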


Are you a “data liberal” or a “data conservative”?

– By Andy Thurai (@AndyThurai). This article was originally published on Xively blog site.

In the last decade, as a society, we have worked very hard toward “liberating our data”, unshackling it from the plethora of constraints unnecessarily imposed by IT. By contrast, in the 90s and early 00s, data was kept in the Stygian depths of the data warehouse, where only an elite few had access to it or knew its defining characteristics.

Once we had the epiphany that we could glean amazing insights from data, even from our “junk” data, our efforts quickly refocused on exposing data in every possible way. We exposed data at the bare-bones level through data APIs, at the level of value-added data platforms, or even as industry-specific solution platforms.

Thus far, we have spent a lot of time analyzing and finding patterns, in other words, innovating, with data that had already been collected. I see, however, many companies taking things to the next proverbial level.

In order to innovate, we must evolve to collect what matters most to us rather than resigning ourselves to using what we have been given. In other words, to invent, you need to start with an innovative data collection model. This means moving with speed and collecting the specific data that adds value in a meaningful way, not only for us but for our customers.


Taming Big Data Location Transparency

Andy Thurai, Chief Architect & CTO, Intel App security & Big Data (@AndyThurai) | David Houlding, Privacy Strategist, Intel (@DavidHoulding)

Original version of this article appeared on VentureBeat.

Concern over big-government surveillance and security vulnerabilities has reached global proportions. Big data and analytics, government surveillance, online tracking, behavior profiling for advertising, and other major tracking trends have elevated privacy risks and identity-based attacks. This has prompted review and discussion of revoking or revising data protection laws governing trans-border data flow, such as EU Safe Harbor and the Singaporean and Canadian privacy laws. The business impact to the cloud computing industry is projected to be as high as US $180B.

The net effect is that privacy has emerged as a key decision factor for consumers and corporations alike. Data privacy, and more importantly identity-protected, risk-mitigated data processing, are likely to grow further in importance as major new privacy-sensitive technologies emerge: wearables, the Internet of Things (IoT), APIs, and the social media that powers big data and analytics, all of which increase the associated privacy risks. Brands that establish and build trust with users will be rewarded with market share, while those that repeatedly abuse user trust with privacy faux pas will see both trust and market share erode. Providing transparency and protection for users’ data, regardless of how it is stored or processed, is key to establishing and building that trust. This can only happen if providers are willing to offer this location and processing transparency to the corporations that use them.


Don’t be stupid, use (cloud) protection!

– By Andy Thurai (Twitter: @AndyThurai)

This article originally appeared on PandoDaily.

Looks like Obama read my blog! The White House got the message. Politicians now seem to understand that while they try to do things to save the country, such as creating NSA programs, they cannot do so at the cost of thriving and innovative businesses, especially cloud programs, which are still in their infancy. Recently, Obama met behind closed doors with technology leaders from Apple, AT&T, Google, and others to discuss this issue.

While American initiatives, both federal and commercial, are trying everything to fix this issue, I see vultures in the air. I have seen articles urging nationalism among Canadian companies, asking them to go Canadian. They are also using scare tactics to steer business their way, which, in my view, is not necessarily going to help global companies.

