Data Migration Process

Data Migration for a wide range of CRM Systems

Migrate Data has migrated data from a multitude of CRM systems, including:

Salesforce
Raisers Edge
Saleslogix
ThankQ
Group Office
MySQL
Bullhorn
Practice Engine
Progress
Dynamics
ACT
Sugar
Donor Strategy
Access
SQL
Civi
Prospect
Goldmine

Whilst the tools and data migration process for each project will differ slightly depending on the complexity and scale, the approach will be identified from the outset, setting expectations of time and cost.

We are conversant with specialist software such as Informatica, Talend or Pandora and use these when required and in the right way, normally in conjunction with our process below.

It is really important that a methodology is used; otherwise the migration will most likely fail. Our approach, based on “Practical Data Migration v2”, is laid out below.

Our data migration methodology

1. Extract

This is the extraction of the data from the source systems. This normally consolidates data from a number of different systems and areas, each of which may well use a different data format. The goal of the extraction phase is to convert the data into a single format which is appropriate for the transformation processing. Once the data is extracted, we then import it into our staging environment.
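
As a simplified sketch of this stage, assuming the source data has been extracted to CSV files, a staging table can be created and bulk loaded as follows. The table, column and file names are illustrative only, and SQL Server syntax is shown; other databases have equivalent bulk-load commands.

-- Illustrative staging table for contact records from a source CRM
CREATE TABLE staging_contacts (
    source_system VARCHAR(50),    -- which legacy system the row came from
    source_id     VARCHAR(50),    -- the record's key in that system
    first_name    VARCHAR(100),
    last_name     VARCHAR(100),
    email         VARCHAR(255),
    created_date  VARCHAR(50)     -- kept as text until the transform stage
);

-- Load the extracted CSV into staging
BULK INSERT staging_contacts
FROM 'C:\extracts\source_contacts.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);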

2. Landscape Analysis

This is the discovery, review and documenting of the legacy data stores, including their linkages, data quality and key data stakeholders. The discovery phase is where we understand which tables are within your source system and which fields belong to each table. We complete a free Table And Field Analysis (TAFA) of the source database and we provide this output back to the client. The TAFA gives us an understanding of what’s in scope for the migration. The landscape analysis should also cover an understanding of the structure of the target system.
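
By way of illustration, most databases expose a catalogue that can drive this kind of table and field listing. The query below is a generic sketch using the standard information_schema views, not our actual TAFA tooling.

-- List every table and column in the source database, with data types,
-- as a starting point for a Table And Field Analysis
SELECT t.table_name,
       c.column_name,
       c.data_type,
       c.is_nullable
FROM   information_schema.tables  t
JOIN   information_schema.columns c
       ON  c.table_schema = t.table_schema
       AND c.table_name   = t.table_name
WHERE  t.table_type = 'BASE TABLE'
ORDER  BY t.table_name, c.ordinal_position;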

3. Gap Analysis and Mapping

This is an opportunity to identify any gaps between the source and target systems and to understand what is business critical. Once the source and target objects have been identified we map all the fields per object between source and target. Mapping documents are shared with the client and updated as the project progresses.
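
The mapping itself is typically held in a shared document, but the sketch below shows the shape of a field-level mapping recorded as a table. The object, field and rule names are purely illustrative.

-- Illustrative field-mapping table
CREATE TABLE field_mapping (
    source_object  VARCHAR(100),
    source_field   VARCHAR(100),
    target_object  VARCHAR(100),
    target_field   VARCHAR(100),
    transformation VARCHAR(255),   -- rule to apply, if any
    in_scope       CHAR(1)         -- Y/N, agreed with the client
);

INSERT INTO field_mapping VALUES
    ('Contacts', 'FName',      'Contact', 'FirstName', 'Trim whitespace', 'Y'),
    ('Contacts', 'Email_Addr', 'Contact', 'Email',     'Lowercase',       'Y');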

4. Transform

The transform stage applies a series of rules or functions to allow the extracted data to load into the end target. Some data sources will require very little or even no manipulation. In other cases, one or more transformation types may be required to meet the business and technical needs. This stage is where the migration resource writes the migration process based on the mapping and transforms the data to fit the target system. It is often done in parallel with gap analysis and mapping. The process is scripted using SQL to make it repeatable for test loads. Data cleansing and data enrichment can also take place at this stage.
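
A minimal sketch of such a script is shown below, reusing the illustrative staging table from the extract step. The rules are examples only, not a client's actual mapping, and SQL Server syntax is assumed.

-- Repeatable transform: build the load-ready contact set from staging
SELECT s.source_id                        AS LegacyId,
       LTRIM(RTRIM(s.first_name))         AS FirstName,
       LTRIM(RTRIM(s.last_name))          AS LastName,
       LOWER(s.email)                     AS Email,
       TRY_CONVERT(DATE, s.created_date)  AS CreatedDate   -- text to date
INTO   load_contacts
FROM   staging_contacts s
WHERE  s.email IS NOT NULL;               -- example scoping rule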

5. Test loads

An integral part of any data migration is to test its effectiveness in achieving the desired result. Completing full test loads will provide this by essentially doing a dry run of the go live load. We will run test loads into a full copy sandbox environment of the target system, and this is usually done with the entire data set. The testing process can vary widely depending on the agreed requirements.
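
Reconciliation checks usually form part of this testing. The query below is a simple sketch comparing prepared and loaded record counts, assuming the loaded records have been exported from the sandbox back into a table; load_contacts and target_contacts_copy are illustrative names.

-- Compare the number of records prepared for load with the number loaded
SELECT 'Contacts'                                   AS object_name,
       (SELECT COUNT(*) FROM load_contacts)         AS prepared_rows,
       (SELECT COUNT(*) FROM target_contacts_copy)  AS loaded_rows;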

6. Amendments

Once the test load has been completed, we will then hand over to the client to review the data in the target sandbox environment. This should be done as thoroughly as possible over the course of a week or more, depending on the size, complexity and resources available to review the data. We will provide a feedback sheet for the client to record any queries per object. As the feedback sheet is updated with queries, we will in parallel make any necessary changes in preparation for either another test load or the go live load.

7. Go live

Once all the changes and amendments have been actioned, we will request a refresh of the source data. After extracting the data into our staging environment, we will then begin running the go live loading scripts. We will load the data into the target system usually over a weekend or at a designated time when the client is not using either the source or target system. Once the go live load is completed, we will update the client and then they can start using their new system!
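
The shape of a go live run is sketched below, reusing the illustrative objects from the earlier steps; the real scripts are specific to each project.

-- Refresh staging from the final source extract
TRUNCATE TABLE staging_contacts;

BULK INSERT staging_contacts
FROM 'C:\extracts\source_contacts_golive.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- The transform and loading scripts proven during the test loads are
-- re-run here, unchanged, so the go live load matches what was tested

-- Final record count reported back to the client at sign-off
SELECT COUNT(*) AS contacts_staged FROM staging_contacts;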

8. Aftercare

We don’t just leave once the data has been imported – we need to make sure that it’s all working as it should. Occasionally once the new system is in use there may be some issues that weren’t raised during the test load and amendment phases. We provide some post go live support to make any minor changes that are required.

All of these processes are supported by:

Data Quality Rules (DQRs)

These are a centralised list of all the rules about particular fields, subset criteria and transformations. Generally the first DQR identifies the problem, the second analyses the data and the third fixes it. They form a contract between the business and the technicians about what constitutes quality data and how to go about securing it. Data quality rules, or DQRs, are our way of raising any questions or queries surrounding the data or the source and target systems, and they allow the client to make decisions on their data based on the questions we raise.
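
As an illustration of the identify / analyse / fix pattern, the sketch below uses badly formatted email addresses as an invented example of a quality issue against the illustrative staging table from earlier steps.

-- DQR 1. Identify the problem: records whose email does not look valid
SELECT source_id, email
FROM   staging_contacts
WHERE  email NOT LIKE '%_@_%._%';

-- DQR 2. Analyse the data: how widespread is it, and where does it come from?
SELECT source_system, COUNT(*) AS bad_emails
FROM   staging_contacts
WHERE  email NOT LIKE '%_@_%._%'
GROUP  BY source_system;

-- DQR 3. Fix it (an example clean-up agreed with the client: strip spaces)
UPDATE staging_contacts
SET    email = REPLACE(email, ' ', '')
WHERE  email NOT LIKE '%_@_%._%';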

Demilitarised Zone (DMZ) and Key Data Stakeholder Management

The DMZ is the dividing line between the responsibilities of the project side and the supplier side. There can be shared activities between the two parties, for example end-to-end testing and DQRs. Data owners must be contactable and empowered to make judgements and decisions.