Part 2: Context-Aware Data Privacy

If you missed Part 1 of this article, shame on you :). You can read it here when you get a chance (link).

As a continuation of Part 1, where I discussed the issues with data protection, in this article we will explore how to solve some of those issues.

People tend to forget that hackers are attacking your systems for one reason only: DATA. You can spin that any way you want, but at the end of the day, they are not attacking your systems to see how you have configured your workflow or how efficiently you process your orders. They couldn't care less. They are looking for the gold nuggets of information that they can either resell or use to their own advantage for monetary gain. This means your files, databases, data in transit, storage data, archived data, etc. are all vulnerable and all mean something to a hacker.

Gone are the days when someone sat in mom's basement hacking into US military systems to boast about their skills to a small group of friends. (Remember WarGames, an awesome movie?) Modern-day hackers are sophisticated, well-funded, for-profit operations, backed either by big organized cyber gangs or by some larger organization.

So you need to protect your data at rest (regardless of how old the data is; as a matter of fact, the older the data, the less protected it tends to be), data in motion (moving from somewhere to somewhere, whether between processes, services, or enterprises, or into/out of the cloud or storage), and data in process/use. You need to protect your data with your life.

Let us closely examine the things I said in my last blog (Part 1 of this blog): the things that are a must for a cloud data privacy solution.

More importantly, let us examine the elegance of our data privacy gateway (code-named ETB, for Expressway Token Broker), which can help this costly, scary, mind-numbing experience go easily and smoothly. The following elements embedded in our solution are going to make your problem go away sooner.

1.       Security of your sensitive message processing device

As they say, Caesar’s wife must be above suspicion (did you know Caesar divorced his wife in 62 BC?). What is the point of having a security device inspecting your crucial traffic if it can’t be trusted? You need a device or solution that doesn’t just claim to be secure, but has the certifications to back that claim up. This means a third-party validation agency should have tested the solution and certified it as fit for an enterprise, data center, or cloud location. The certifications should include FIPS 140-2 Level 3, CC EAL 4+, DoD PKI, STIG vulnerability testing, NIST SP 800-21, support for HSMs, etc. The validation must come from recognized authorities, not just from the vendor.

2.       Support for multiple protocols

When you are looking to protect your data, it is imperative that you choose a solution that does more than just HTTP/HTTPS, SOAP, JSON, AJAX, and REST. You also need to consider whether the solution supports all the standard protocols known to the enterprise/cloud, including "legacy" protocols such as JMS, MQ, EMS, FTP, and TCP/IP (and the secure versions of all of the above), as well as JDBC. More importantly, you need to look at whether the solution can speak industry-standard protocols natively, such as SWIFT, ACORD, FIX, HL7, and MLLP. You also need to look at whether the solution has extension options to support any custom protocols you might have. In other words, the solution should give you the flexibility to inspect your ingress and egress traffic regardless of how that traffic flows.

3.       Able to read into anything

This is an interesting concept. I was listening to one of our competitors' webcasts when this dreaded question came up: how do you help me protect a specific format of data that I exchange with a partner? Without hesitation, the presenter answered that they don't support it. While I am not trying to pick on them, the point is that you should have the capability to look into any format of data flowing into, or out of, your system when the necessity arises. This means you should be able to inspect not only XML, SOAP, JSON, and other modern message formats, but the solution should also be able to retrofit your existing legacy systems with the same level of support. Message formats such as COBOL (oh yes, we will be doing a Y10K on this all right), ASCII, binary, EBCDIC, and other unstructured data streams are equally important. Sprinkle in industry message formats such as SWIFT, NACHA, HIPAA, HL7, EDI, ACORD, EDIFACT, FIX, and FpML to make the scenario interesting. And don't forget the good old messages sent in conventional ways: MS Word, MS Excel, PDF, PostScript, and good old HTML. You need a solution that can look into any of these data types and protect the data in those messages seamlessly.
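To make the idea concrete, here is a minimal, purely illustrative sketch of how a gateway might guess the format of an inbound payload before routing it to the right parser. This is not ETB's actual detection engine; the function name and the handful of formats checked are my own assumptions for the example.

```python
import json

def sniff_format(payload: bytes) -> str:
    """Rough first-pass guess at the wire format of an inbound message."""
    text = payload.lstrip()
    if text.startswith(b"<"):
        return "xml"        # XML/SOAP envelopes start with a tag
    if text.startswith((b"{", b"[")):
        try:
            json.loads(text)
            return "json"
        except ValueError:
            pass            # looked like JSON but did not parse
    if text.startswith(b"ISA"):
        return "edi"        # X12 EDI interchanges begin with an ISA segment
    if text.startswith(b"%PDF"):
        return "pdf"
    return "unknown"        # fall through to a configurable custom parser
```

A real gateway would chain dozens of such probes (EBCDIC code-page detection, COBOL copybook matching, SWIFT block markers, and so on) and hand the payload to a format-specific inspection pipeline.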

4.       Have an option to sense not only the sensitive nature of the message, but also who is requesting it, in what context, and from where

This is where my whole blog started. Essentially, you should be able to identify sensitive data not just on its own, but based on context. Intent, or heuristics, is a lot more important than simply sensing that something is going out or coming in. This means you should be able to sense who is accessing what, when, from where, and, more importantly, from what device. Once you identify that, you should be able to determine how you want to protect the data. For example, if a person is accessing specific data from a laptop within the corporate network, you can let the data go with transport security alone, assuming he has sufficient rights to access it. But if the same person tries to access the same data from a mobile device, you can tokenize the data and send only the token to the mobile device. This also solves the problem of unknown location: all other conditions being the same, tokenization will occur based on a policy that senses the request came from a mobile device.
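The laptop-versus-mobile decision above can be sketched as a small policy function. This is a hypothetical illustration, not ETB's policy language; the `Request` fields and the returned action names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    resource: str
    device: str            # e.g. "laptop" or "mobile"
    on_corp_network: bool
    authorized: bool       # result of the usual access-control check

def protection_policy(req: Request) -> str:
    """Decide how to release sensitive data based on the request's context."""
    if not req.authorized:
        return "deny"
    if req.device == "mobile" or not req.on_corp_network:
        # Less-trusted or unknown context: hand back a token, not the data.
        return "tokenize"
    # Trusted device inside the corporate network: transport security suffices.
    return "pass-through-tls"
```

The same user asking for the same resource gets different treatment purely because the device and network context differ, which is exactly the context awareness this point is about.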

5.       Have an option to dynamically tokenize, encrypt, or apply format-preserving encryption based on the need

This gives you the flexibility to encrypt certain messages/fields, tokenize certain messages/fields, or apply FPE to certain messages. While you are at it, don't forget to read my blog on why Intel's implementation of the FPE variation is one of the strongest in the industry here.
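A per-field dispatch table is one simple way to picture this. The sketch below is an assumption-laden toy: the `encrypt` function is a stand-in (real deployments would use AES from a vetted library), and the token vault that maps surrogates back to values is omitted.

```python
import secrets

def tokenize(value: str) -> str:
    # Random surrogate; a real broker persists the token -> value mapping.
    return "tok_" + secrets.token_hex(8)

def encrypt(value: str) -> str:
    # Placeholder only: swap in real AES-256-GCM from a crypto library.
    return "enc_" + value[::-1]

# Policy: which protection applies to which field (illustrative names).
POLICY = {"ssn": tokenize, "card_number": tokenize, "email": encrypt}

def protect_record(record: dict) -> dict:
    """Apply each field's rule; fields with no rule pass through unchanged."""
    return {k: POLICY.get(k, lambda v: v)(v) for k, v in record.items()}
```

Because the rule is looked up per field at runtime, the same gateway can tokenize one field, encrypt another, and leave the rest untouched in a single pass over the message.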

6.       Support the strongest possible algorithms for encryption and storage, and use the most random possible numbers for tokenization

Not only should you verify that the solution has strong encryption algorithm options available out of the box (such as AES-256, SHA-256, etc.), but the vendor should also deliver cutting-edge security options as and when they come out, such as support for the latest versions of any security standards.
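The "most random possible" requirement has a concrete meaning: tokens must come from a cryptographically secure random source, never a general-purpose PRNG. A minimal sketch using only the Python standard library (AES-256 itself would need a third-party crypto package):

```python
import hashlib
import secrets

# Tokens from a CSPRNG: secrets draws from the OS entropy source,
# unlike random.random(), whose output is predictable.
token = secrets.token_urlsafe(32)

# SHA-256 digest, e.g. for an integrity check on a stored record.
digest = hashlib.sha256(b"account=4111111111111111").hexdigest()

print(token, digest)
```

If two tokens could ever be predicted or correlated, tokenization degrades into a guessable encoding, which is why the randomness source matters as much as the cipher.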

7.       Protect the encryption keys with your life. There is no point in encrypting the data if you give away the “Keys to the Kingdom” easily

Now this is the most important point of all. If there is one thing you take away from this article, let this be it: when you are evaluating solutions, make sure that the solution is not only strong on all of the above points but, most important of all, that it protects the keys with its life. This means the key storage should be encrypted and should offer separation-of-duties (SOD) capabilities, key-encrypting keys, strong key management options, key rotation, re-keying options for when keys need to be rotated, expire, or are lost, key protection, key lifetime management, key expiration notifications, etc. In addition, you need to explore whether there is an option to integrate with your existing in-house key manager, such as RSA DPM (the last thing you need is to disrupt your existing infrastructure by introducing newer technology).
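Key lifetime management and rotation boil down to bookkeeping that a key manager enforces. The sketch below models just that bookkeeping; it is an assumed simplification (class and field names are mine), and the actual wrapping of data keys under a key-encrypting key is deliberately left out.

```python
from datetime import datetime, timedelta, timezone

class KeyRecord:
    """Lifecycle metadata a key manager tracks for each key."""
    def __init__(self, key_id: str, lifetime_days: int = 90):
        self.key_id = key_id
        self.created = datetime.now(timezone.utc)
        self.expires = self.created + timedelta(days=lifetime_days)
        self.state = "active"   # active -> retired, or compromised

    def needs_rotation(self, now=None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now >= self.expires or self.state == "compromised"

def rotate(old: KeyRecord) -> KeyRecord:
    """Retire the old key (kept around to decrypt existing data) and
    issue a fresh one for all new encryption operations."""
    old.state = "retired"
    return KeyRecord(key_id=old.key_id + "-r1")
```

Notice the retired key is never deleted: data encrypted under it must stay decryptable until it is re-encrypted under the new key, which is exactly why re-keying options belong on the checklist above.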

8.       Encrypt the message while preserving the format, so it won’t break the back-end systems

This is really important if you want to tokenize or encrypt on the fly without the backend or connected client applications knowing about it. When you encrypt the data while preserving its format, the result not only looks and feels the same as the original data, but the receiving party can't tell the difference, so nothing downstream breaks.
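To illustrate why format preservation keeps backends happy, here is a toy that replaces digits with random digits and letters with random letters while keeping separators and length intact. Note the hedge: this is a random surrogate, not true format-preserving encryption (real FPE, such as the NIST FF1 mode, is deterministic under a key and reversible).

```python
import secrets
import string

def format_preserving_token(value: str) -> str:
    """Surrogate that mimics the shape of the input: digits stay digits,
    letters stay letters, separators are untouched."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(str(secrets.randbelow(10)))
        elif ch.isalpha():
            out.append(secrets.choice(string.ascii_uppercase))
        else:
            out.append(ch)          # dashes, spaces, etc. pass through
    return "".join(out)
```

A card number like `4111-1111-1111-1111` comes back as nineteen characters with dashes in the same positions, so a legacy field validator that expects `dddd-dddd-dddd-dddd` accepts the protected value without any schema change.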

So if you are wondering where Intel comes into the picture: our Intel cloud data privacy solution (aka Intel ETB, the Expressway Token Broker) covers everything discussed in points #1 through #8, and a lot more. Every single standard mentioned above is supported, and we are working on adding newer, better standards as they come along.

Check out information about our tokenization and cloud data privacy solutions here.

Intel Cloud Data Privacy/ Tokenization Solutions

Intel Cloud/ API resource center

About Andy Thurai
