All Stakeholders Aren’t Fully Committed
The first opportunity for a project to head down the wrong path comes when not all stakeholders buy into it. This path to failure applies to all large projects, and especially to every aspect of an ERP implementation. Implementations are scary and risky, and problems can arise in any number of places. If all the stakeholders aren’t willing to jump into the darkness together and commit to the project’s successful completion, the finger-pointing will start at the first sign of trouble, team members will spend most of their energy protecting themselves instead of pushing the project forward, and the project will eventually just fall apart.
IT Takes the Lead
The next opportunity for the data migration to go wrong is when it is treated as an IT problem instead of a business problem. The first golden rule in Johny Morris's book, Practical Data Migration v2, is that data migration is a business issue and not an IT issue, and I couldn't agree more. IT tends to focus on figuring out the actual values and structure needed to get a record to load. However, that is a much smaller part of the process. The real complexity is figuring out how to structure or restructure the data so that it meets the needs of all the business processes the organization depends on to run smoothly. New ERP implementations are also an ideal opportunity to redesign business processes to complement the new software's capabilities. IT will help decipher the data and take care of the technical details, but the business needs to drive the decisions around structuring data so that all business needs are met.
Assume Data Migration Is Only Taking Data from Point A to B
Another opportunity for failure arises when the team assumes that the data migration effort is only a matter of taking data from point A to point B. Project teams that haven't been through a migration before do not realize that it consists of several distinct phases, and budget only for coding up some transformation rules and pushing the data into the new system. However, for a data migration project to be successful, all phases need to be accounted for: data assessment, data cleansing, data mapping, data enrichment, data transformation, conversion reconciliation, and finally legacy system retirement. When these phases are not planned for at the start of the project, the team will have to scramble to complete them anyway, because they are necessary to bring the project to completion. The by-products of this scrambling are significant budget overruns, long hours, and much cursing.
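To make the point concrete, here is a minimal sketch of those phases as an explicit pipeline rather than a single load step. The function names and stand-in logic are hypothetical, not any real migration framework; the point is that every phase is a named step that has to appear in the plan and the budget:

```python
# Hypothetical sketch: each migration phase as an explicit step,
# so no phase can be silently dropped from the plan or budget.

def run_migration(records):
    """Run every phase in order; each phase is a plain function."""
    phases = [
        assess,       # data assessment: profile what actually exists
        cleanse,      # data cleansing: fix or drop bad values
        map_fields,   # data mapping: source fields -> target fields
        enrich,       # data enrichment: fill values the target needs
        transform,    # data transformation: apply business rules
        reconcile,    # conversion reconciliation: verify the load
    ]
    for phase in phases:
        records = phase(records)
    return records  # legacy system retirement happens after sign-off


# Trivial stand-ins so the sketch runs end to end.
def assess(rs):     return rs
def cleanse(rs):    return [r for r in rs if r.get("id") is not None]
def map_fields(rs): return [{"item_id": r["id"], "desc": r.get("name", "")} for r in rs]
def enrich(rs):     return [{**r, "desc": r["desc"] or "UNKNOWN"} for r in rs]
def transform(rs):  return [{**r, "desc": r["desc"].upper()} for r in rs]
def reconcile(rs):  return rs
```

Running it over a handful of sample rows drops the record with no id, fills a placeholder description, and normalizes the rest; each of those behaviors lives in its own phase, which is exactly what a realistic plan needs to budget for.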
Assume Transformations Are Simple
In addition to thinking that the data migration effort only includes the activities involved in converting data from point A to B, teams underestimate the effort required just to perform those transformations, for a variety of reasons. On data migration projects, the systems involved were developed in different eras, using different technologies, so the data structures are significantly different. Often we need to combine data from disparate systems into a single data set. Combining data and removing duplicate information can lead to tremendously complex transformation rules. On an implementation for a locomotive manufacturer, we needed to combine item master data from four disparate systems to come up with a complete record in the target system. An item could exist in any or all of the four systems. When we found an item in multiple systems, we had to take different attributes from each system depending on a wide variety of conditions. On and on the transformation rules went until we arrived at a complete item master record. Then the path we used to create the item master record affected the path we used to create the corresponding BOM and routing data.
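A stripped-down sketch of that kind of survivorship rule looks something like the following. The system names, fields, and precedence orders are invented for illustration; real rule sets on that locomotive project ran far deeper than this:

```python
# Hypothetical survivorship sketch: build one item master record from
# up to four legacy systems, taking each field from the highest-priority
# system that actually has a value for it.

FIELD_PRECEDENCE = {
    "description": ["mrp", "plm", "wms", "legacy_erp"],
    "unit_cost":   ["legacy_erp", "mrp", "plm", "wms"],
    "weight":      ["plm", "mrp", "legacy_erp", "wms"],
}

def merge_item(item_id, sources):
    """sources: {system_name: {field: value}} for one item."""
    # Record which systems contributed -- the "path" taken here can
    # drive how the corresponding BOM and routing data get built.
    merged = {"item_id": item_id, "source_path": sorted(sources)}
    for field, systems in FIELD_PRECEDENCE.items():
        for system in systems:
            value = sources.get(system, {}).get(field)
            if value is not None:
                merged[field] = value
                break
        else:
            merged[field] = None  # present in no system: flag for cleansing
    return merged
```

Even this toy version shows why the rules compound: every field has its own precedence order, every missing system changes the outcome, and the resulting source path feeds the next set of rules downstream.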
Lack of Legacy System Knowledge
A frequent point of project derailment is basing data mapping specifications on out-of-date or nonexistent legacy system documentation, with no efficient method to understand the underlying data structures and the data contained in them. Without a way to figure out what's in the legacy systems and how the data is structured, both the functional and technical teams have an incredibly difficult time producing the mapping specification, and once they do produce initial versions, those versions will be woefully inaccurate. During one presentation for a client that had been struggling on an Oracle EBS implementation, we showed some analysis that we did on their legacy data. A data migration team member commented, “It took us several weeks to do what you just did in a few hours.” Without the right tools at their disposal, there was no way the team could have provided the support needed to generate the mapping specifications.
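When the documentation is stale or missing, even a crude profiling pass over legacy extracts beats guessing. This is a minimal sketch of the idea, assuming the extract has already been read into a list of dicts (for example via `csv.DictReader`); real profiling tools go much further:

```python
# Minimal data-profiling sketch: given rows from a legacy extract,
# report per-column fill rate, distinct count, and the most common
# value, so the team can see what is actually in a table instead of
# trusting stale documentation.

from collections import Counter

def profile(rows):
    """rows: list of dicts sharing the same columns.
    Returns {column: {"fill_rate", "distinct", "top"}}."""
    report = {}
    if not rows:
        return report
    for col in rows[0]:
        values = [r[col] for r in rows if r.get(col) not in (None, "")]
        counts = Counter(values)
        report[col] = {
            "fill_rate": len(values) / len(rows),
            "distinct": len(counts),
            "top": counts.most_common(1)[0][0] if counts else None,
        }
    return report
```

Ten minutes with output like this tells a mapping workshop which columns are actually populated, which codes are actually in use, and where the folklore about a legacy table is simply wrong.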
Don’t Account for Data Quality Issues until It Is Too Late
On occasion we hear teams take the attitude that since the data works in the legacy systems, it must be clean. However, the legacy systems are often old, lack data controls, and/or have no master data management policy. These systems were also developed under different business requirements, using different technologies, than the target system. These differences inevitably lead to data issues. When it comes time to actually push the data through, the process explodes into disaster, as the legacy data is missing values or has too many incorrect ones to be loaded. Or the data loads successfully into the new system but is full of duplicate, incomplete, and inaccurate information. In these situations, a last-minute data cleansing effort is launched to attempt to correct the issues, but it usually ends up being too little, too late.
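The antidote is to run quality rules over the legacy extracts long before the first load attempt. Here is a hedged sketch of the idea; the field names, the valid-values list, and the rules themselves are illustrative, not from any particular ERP:

```python
# Illustrative quality-rule sketch: scan legacy records for the
# classic problems -- missing keys, duplicates, invalid code values --
# *before* anyone tries to load them into the target system.

def check_quality(records, valid_uoms=frozenset({"EA", "KG", "M"})):
    """Return a list of (row_index, issue_description) tuples."""
    issues = []
    seen_ids = set()
    for i, rec in enumerate(records):
        item_id = rec.get("item_id")
        if not item_id:
            issues.append((i, "missing item_id"))
            continue
        if item_id in seen_ids:
            issues.append((i, f"duplicate item_id {item_id}"))
        seen_ids.add(item_id)
        if rec.get("uom") not in valid_uoms:
            issues.append((i, f"invalid uom {rec.get('uom')!r}"))
    return issues
```

Issues surfaced this way in week two become cleansing tasks; the same issues surfaced during a load in week forty become the disaster described above.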
Can’t React to Specification Changes Fast Enough
My friend and colleague, Matthew Punnoose, says data migration projects are “all about the changes,” and how well the team handles those changes determines how successful the project will be. On larger implementations, it is not uncommon to have hundreds or thousands of specification changes. A large number of these changes are discovered right before each test cycle and need to be implemented before the cycle can begin. This can lead to a mad scramble and a delay of the test cycle, which ends up delaying the entire project.
The biggest reason for the large number of specification changes is that it is difficult for the business to know exactly what’s needed until they have a chance to see what the target system can do with their data. Once the business sees some of the features and how their data behaves, the functional team gains a huge amount of insight into the true business requirements and how the legacy data needs to be transformed to meet them. This discovery process is a bit like peeling an onion: with each new data load, the business uncovers incorrect assumptions, different ways the data needs to be structured to behave the way they want, and more.
Target System Isn’t Ready until the Last Minute
ERP systems are hugely complex, and setting one up takes many people and considerable expertise. Given this complexity, it is understandable and common that the target system is not ready to accept converted data until the last minute. For the data migration team, this means it is difficult to comprehensively validate the assumptions made during the initial specification design, or to test the programs for defects, before the first test cycle begins. This situation tends to exacerbate the “code, load, and explode” scenario, where the programs have all been coded, but as soon as the team attempts to load data, disaster strikes. The data migration team cannot do much about when the target system will be ready, but they can prepare for the issues that will surface during that initial load period by having the proper plan and tools in place to react in a timely manner.
Assume that a Loaded Record Is a Good Record
There have been projects where we hear team members say, “If it went in, it must be good,” and call it done. This attitude is a true path to disaster, because it ignores a critical component of the data migration process: records loaded into the target system must be validated and reconciled with the data from the disparate legacy systems. There are three things that must be checked on loaded data. First, the loaded data meets the business requirements for all desired functionality in the target system. Second, the data meets the specified transformation rules. Third, the data reconciles with the source system data. What tends to happen when the team adopts the “If it went in” attitude is that business users start to use the new system, immediately complain that nothing works, reject the system, and start developing ad hoc processes outside the system to do their jobs. Once this happens, project leadership launches the mad scramble/death march to get the data into shape to meet the business requirements.
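The third check, source-to-target reconciliation, is the most mechanical of the three and lends itself to a simple sketch. The key and field names below are hypothetical; in practice you would reconcile every converted object, not just one:

```python
# Reconciliation sketch: compare what was extracted from the legacy
# systems against what the target system now holds, by key and by a
# handful of critical fields.

def reconcile(source, target, key="item_id", fields=("description",)):
    """source, target: lists of dicts. Returns a dict of discrepancies."""
    src = {r[key]: r for r in source}
    tgt = {r[key]: r for r in target}
    return {
        # Extracted but never made it in: dropped or rejected loads.
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        # In the target with no source record: phantom or doubled loads.
        "unexpected_in_target": sorted(tgt.keys() - src.keys()),
        # Loaded, but a critical field no longer matches the source.
        "field_mismatches": sorted(
            k for k in src.keys() & tgt.keys()
            if any(src[k].get(f) != tgt[k].get(f) for f in fields)
        ),
    }
```

A report with all three buckets empty is what “done” should mean; “it went in” only tells you the middle bucket isn't full of load errors.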
There are many paths to pain on the data migration journey, and I outlined some of the main ones here, but these projects do not have to be painful. Over the next posts, I hope to get into the different aspects of data migration projects and how to avoid some of these pitfalls.
In the meantime, if you have stories about pain that you’ve experienced or seen, share them in the comments. A little schadenfreude is always educational and entertaining.
If you have an upcoming data project that you have concerns about, or are involved in one that is currently going sideways, call me at 773.789.9324 and I’ll do everything I can to help your project succeed.