Effective PCI tokenization methods

Recently a colleague and friend of mine wrote a great article about the different ways to become PCI DSS 2.0 compliant by tokenizing PAN data. In case you missed it, I want to draw your attention to it.

Essentially, if you are looking to become PCI DSS 2.0 compliant, there are a few ways to get there. The most painful is obviously a rip-and-replace strategy; the easiest is an incremental, less intrusive approach.

The first approach, the monolithic big-bang approach, is the legacy way of doing things. Once you identify the areas of your system that are non-compliant (that is, either storing PAN data, encrypted or not, or processing PAN in the clear), you decide whether each of those components needs to be PCI compliant. The PCI audit itself is extensive, time consuming, and methodical: every process, application, storage system, database, and system is examined, which makes it very expensive. Once you know which components must be compliant, you rip and replace: you touch every system component that needs to be modified and rewrite it to become compliant. This can mean touching every component and changing your entire architecture. It is the most expensive, most painful, and slowest route to compliance. While it can be effective as a spot solution, it becomes a problem if you have to repeat it every time the PCI DSS requirements change (which seems to be every year).
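Much of the cost in this approach comes from simply establishing scope, i.e., finding every place PAN data lives. As a rough illustration of that first step, here is a minimal sketch that flags PAN-shaped values in text using a digit pattern plus a Luhn check; the pattern, function names, and example data are my own simplified assumptions, not a substitute for a real data-discovery tool.

```python
import re

# Card numbers are 13-19 digits; a digit-run pattern plus a Luhn check is a
# crude first pass at spotting PAN-shaped values while scoping an audit.
PAN_PATTERN = re.compile(r"\b\d{13,19}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum, used here only to cut down false positives."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_candidate_pans(text: str) -> list:
    """Return digit runs that look like card numbers (e.g. from a log dump)."""
    return [m for m in PAN_PATTERN.findall(text) if luhn_valid(m)]

# Example: flags the classic Visa test number but ignores a plain order id.
print(find_candidate_pans("order=9999123412341234 card=4111111111111111"))
```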

The second approach, API/SDK-based tokenization, is much more effective. In this case, you retrofit applications, processes, systems, databases, and so on by having those components call an API (or SDK) that converts the PAN data and returns a token, which then replaces the original PAN. This is a minimally invasive procedure. While it doesn't require you to change your entire architecture or system, it still requires you to touch every component that needs to be compliant. This method gets you to market much faster, and it also lets you adapt quickly when the PCI DSS requirements change.
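To give a feel for how small the retrofit can be, here is a minimal sketch of an application call site swapping a PAN for a token before persisting it. The endpoint, field names, and `tokenize_pan` helper are hypothetical; a real integration would use the vendor's documented SDK or API.

```python
import requests

# Hypothetical tokenization service endpoint; a real deployment would call
# the vendor's documented API or bundled SDK instead.
TOKENIZE_URL = "https://tokenizer.example.internal/v1/tokenize"

def tokenize_pan(pan: str) -> str:
    """Exchange a PAN for a surrogate token via the tokenization service."""
    resp = requests.post(TOKENIZE_URL, json={"pan": pan}, timeout=5)
    resp.raise_for_status()
    return resp.json()["token"]

def store_order(db, order_id: str, pan: str) -> None:
    # The only change to the existing call site: tokenize before persisting,
    # so the application's database never stores the original PAN.
    token = tokenize_pan(pan)
    db.execute(
        "INSERT INTO orders (order_id, card_token) VALUES (?, ?)",
        (order_id, token),
    )
```

De-tokenization would be a mirror-image call, restricted to the few components that genuinely need the real PAN.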

The third approach is the gateway approach. Here you essentially monitor the traffic between components and tokenize/de-tokenize the data in transit; this is also known as in-line tokenization. This method is the cheapest and the quickest to market, but its biggest advantage is that the changes to your existing systems are minimal to none. Essentially, you route the PAN data through the gateway, which converts it to tokens before it ever hits your systems. Imagine the painful exercise of re-coding your mainframe and legacy systems so they can deal with tokenized data; this method essentially eliminates that.
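As a concrete (and deliberately simplified) sketch, here is the core of what such a gateway does for a JSON payload: intercept the request, swap the PAN for a token, and forward the rest untouched. The upstream URL, field names, and `tokenize_pan` helper are assumptions for illustration only.

```python
import json
import requests

UPSTREAM_URL = "https://orders.example.internal/api"  # hypothetical legacy/back-end system
TOKENIZE_URL = "https://tokenizer.example.internal/v1/tokenize"  # hypothetical token service

def tokenize_pan(pan: str) -> str:
    """Same hypothetical tokenization call as in the API sketch above."""
    resp = requests.post(TOKENIZE_URL, json={"pan": pan}, timeout=5)
    resp.raise_for_status()
    return resp.json()["token"]

def forward_with_tokenization(raw_body: bytes) -> requests.Response:
    """Gateway hook: swap the PAN for a token in transit so the downstream
    (legacy) system only ever sees tokenized data and needs no code changes."""
    payload = json.loads(raw_body)
    if "pan" in payload:
        payload["card_token"] = tokenize_pan(payload.pop("pan"))
    return requests.post(UPSTREAM_URL, json=payload, timeout=10)
```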

You can read his entire article here.
Cost Effective PCI DSS Tokenization for Retail (Part I)
Cost Effective PCI DSS Tokenization for Retail (Part II)

Also, don’t forget to check out our tokenization buyer’s guide here.


About Andy Thurai
This blog is published by Andy Thurai, Program Director - API Economy, IoT, Connected Cloud Solutions with IBM. The views expressed here are my own and not my employer's. Please feel free to comment or engage in a stimulating conversation, but please keep it professional. I can be reached via the "Contact Me" page here; you can also find me on LinkedIn or on Twitter @AndyThurai.
