Retry data load via ProcessingHub Manager

Most people who have worked with Salesforce as an admin or power user know Salesforce Dataloader. When designing Dataload Retry Management, our goal was to implement a similar experience. See the table below for a comparison.

Salesforce Dataloader → ProcessingHub Dataload Retry Management

  • CSV files to load. → On the ProcessingHub, every data load is prepared in a staging table; every row in the CSV file (Dataloader) corresponds to a row in the staging table (ProcessingHub).
  • Success and error files. → For every record that we try to push into Salesforce, we track and store whether the push was successful.
  • Error column in the error CSV file. → We store the same error message in the staging table.
  • Retry by uploading the error CSV file. → ProcessingHub has “Retry All”, “Retry Batch” and “Retry 1st Failed Record”.
  • When there are no more errors, the admin proceeds with his/her next task. → Once all staged records are successfully pushed, processing continues.
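The staging-table approach in the comparison above can be sketched in a few lines of Python. This is an illustrative model only: the `StagedRecord` fields and the `mark_result` helper are assumptions for the sketch, not the actual ProcessingHub schema or API.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of a ProcessingHub staging-table row; the field names
# are assumptions, not the actual ProcessingHub schema.
@dataclass
class StagedRecord:
    record_id: str
    payload: dict
    status: str = "Pending"              # Pending, Success, or Failed
    error_message: Optional[str] = None

def mark_result(record: StagedRecord, ok: bool, error: str = "") -> None:
    """Store the push result on the staged row, mirroring Dataloader's
    success/error CSV files (and the error column)."""
    if ok:
        record.status, record.error_message = "Success", None
    else:
        record.status, record.error_message = "Failed", error

row = StagedRecord("001", {"Name": "Acme"})
mark_result(row, ok=False, error="FIELD_CUSTOM_VALIDATION_EXCEPTION")
print(row.status, row.error_message)  # → Failed FIELD_CUSTOM_VALIDATION_EXCEPTION
```

Because the result lives on the staged row itself, there is no separate error file to download and re-upload; a retry simply re-pushes the rows still marked Failed.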

Structure

Retry functionality

  1. Open dataload details and retry functionality by clicking on the dataload badge.
  2. Dataload status. See section on dataload statuses for details.
  3. Key dataload information: operation, object and number of records.
  4. Retry buttons.
    • Retry 1st Failed Record
    • Retry 1st Batch (only shown when the API type is Bulk)
    • Retry All Failed Records
  5. More detailed dataload information, including record counts.
  6. If there are failed records, all the data for the first failed record is shown. When “Retry 1st Failed Record” is clicked, this record is retried.
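The semantics of the retry buttons can be sketched as follows. This is a hedged Python illustration only; the record dicts and the `push` callback are assumptions, not the ProcessingHub implementation.

```python
# Illustrative semantics of the retry buttons; the record structure and
# the push callback are assumptions, not the actual ProcessingHub API.
def retry_first_failed(records, push):
    """Retry only the first record in Failed status ("Retry 1st Failed Record")."""
    for rec in records:
        if rec["status"] == "Failed":
            rec["status"] = "Success" if push(rec) else "Failed"
            return rec
    return None

def retry_all_failed(records, push):
    """Retry every Failed record in one pass ("Retry All")."""
    for rec in records:
        if rec["status"] == "Failed":
            rec["status"] = "Success" if push(rec) else "Failed"

records = [
    {"id": 1, "status": "Success"},
    {"id": 2, "status": "Failed"},
    {"id": 3, "status": "Failed"},
]
retry_first_failed(records, push=lambda r: True)
print([r["status"] for r in records])  # → ['Success', 'Success', 'Failed']
```

Note how “Retry 1st Failed Record” stops after one record, which makes it a safe probe before committing to a full “Retry All”.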

Recommended Retry Way of Working

Assess whether the error is record related or batch related. It will almost always be record related, but if the error has to do with locked rows or the batch size being too big, it is batch related. If it is record related:

  1. Analyse the error and fields in the first failed record.
  2. Then solve the root cause in Salesforce, for example by removing a validation rule.
  3. Then click “Retry 1st Failed Record”.
  4. If the dataload is still in a Failed status, check whether you have at least solved that first failed record. If not, keep working on it. If yes, move on to the next issue.
  5. Once you are confident that the full root cause has been resolved, use “Retry All Failed” to retry all remaining records at once.
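The record-related steps above amount to a loop: fix the root cause, retry the first failed record, and once it succeeds retry the rest. A minimal Python sketch, where every name (`record_related_retry`, `fix_root_cause`, `push`) is an illustrative assumption, not ProcessingHub code:

```python
# Illustrative loop for the record-related way of working; all names here
# are assumptions for the sketch.
def record_related_retry(records, push, fix_root_cause, max_rounds=10):
    """Fix the root cause, click "Retry 1st Failed Record", and once it
    succeeds use "Retry All Failed" on the remaining failed records."""
    for _ in range(max_rounds):
        failed = [r for r in records if r["status"] == "Failed"]
        if not failed:
            return True
        fix_root_cause(failed[0])            # e.g. adjust a validation rule
        if push(failed[0]):                  # "Retry 1st Failed Record"
            failed[0]["status"] = "Success"
            for r in failed[1:]:             # "Retry All Failed"
                if push(r):
                    r["status"] = "Success"
    return not any(r["status"] == "Failed" for r in records)

# Demo: a push that only succeeds after a record's root cause was fixed.
fixed_ids = set()
records = [{"id": 1, "status": "Failed"}, {"id": 2, "status": "Failed"}]
ok = record_related_retry(
    records,
    push=lambda r: r["id"] in fixed_ids,
    fix_root_cause=lambda r: fixed_ids.add(r["id"]),
)
print(ok, [r["status"] for r in records])  # → True ['Success', 'Success']
```

The demo shows why the loop iterates: fixing one root cause may clear the first failed record while later records still fail for a different reason, so you repeat the analyse-fix-retry cycle per issue.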

If it’s batch related:

  1. Navigate to Setup > ProcessingHub > General Settings
  2. Tweak the Bulk API settings and try again using “Retry 1st Batch”. If the batch size is too big, try a lower batch size. If there are locked-row errors, set the concurrency mode to Serial.
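The batch-related tuning above can be sketched like this. The settings keys and error matching are illustrative assumptions; `UNABLE_TO_LOCK_ROW` is a real Salesforce error code, but the rest is not the actual General Settings schema.

```python
# Hypothetical Bulk API tuning; the settings keys mirror the General
# Settings described above but are illustrative, not the real ones.
def tune_for_error(settings, error_message):
    """Adjust Bulk API settings before clicking "Retry 1st Batch"."""
    if "UNABLE_TO_LOCK_ROW" in error_message:
        # Locked rows: serial batches avoid concurrent updates that
        # contend for the same parent records.
        settings["concurrency_mode"] = "Serial"
    elif "batch" in error_message.lower():
        # Batch too big: halve the batch size and retry.
        settings["batch_size"] = max(1, settings["batch_size"] // 2)
    return settings

settings = {"batch_size": 10000, "concurrency_mode": "Parallel"}
tune_for_error(settings, "UNABLE_TO_LOCK_ROW: unable to obtain exclusive access")
print(settings["concurrency_mode"])  # → Serial
```

Serial mode trades throughput for safety: batches run one at a time, so two batches can no longer contend for a lock on the same parent record.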