Import CSV to Customers (C)

Overview

This example demonstrates how to import customer master data from a CSV file into Pricefx Customers (object type C). The route reads a CSV file, parses it, and loads customer records with their attributes into Pricefx using the pfx-api:loaddata endpoint.
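For reference, a minimal input file could look like the following. The column names match the mapper shown below; the values are invented purely for illustration.

```csv
customerId,customerName,country,region,segment,industry,accountManager,creditLimit,paymentTerms,status
C-1001,Acme Corp,US,AMER,Enterprise,Manufacturing,jane.doe,50000,NET30,ACTIVE
C-1002,Globex GmbH,DE,EMEA,SMB,Retail,john.smith,10000,NET45,ACTIVE
```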

Files

routes/import-csv-to-customers.xml

XML
<routes xmlns="http://camel.apache.org/schema/spring">
    <route id="import-csv-to-customers">
        <from uri="file:{{import.customers.directory}}?{{archive.file}}&amp;{{read.lock}}"/>
        <log message="Processing customer file: ${header.CamelFileNameOnly}" loggingLevel="INFO"/>
        <split aggregationStrategy="recordsCountAggregation" streaming="true">
            <tokenize token="\n" group="10000"/>
            <to uri="pfx-csv:unmarshal?skipHeaderRecord=true"/>
            <to uri="pfx-api:loaddata?objectType=C&amp;mapper=import-csv-to-customers.mapper&amp;businessKeys=customerId"/>
        </split>
        <log message="Customer import complete. Total records: ${header.PfxTotalInputRecordsCount}" loggingLevel="INFO"/>
    </route>
</routes>

mappers/import-csv-to-customers.mapper.xml

XML
<mappers>
    <loadMapper id="import-csv-to-customers.mapper">
        <body in="customerId" out="customerId"/>
        <body in="customerName" out="label"/>
        <body in="country" out="attribute1"/>
        <body in="region" out="attribute2"/>
        <body in="segment" out="attribute3"/>
        <body in="industry" out="attribute4"/>
        <body in="accountManager" out="attribute5"/>
        <body in="creditLimit" out="attribute6"/>
        <body in="paymentTerms" out="attribute7"/>
        <body in="status" out="attribute8"/>
    </loadMapper>
</mappers>
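To make the mapper's behavior concrete, the sketch below simulates the same column-to-field translation in Python. This is an illustration only; in a real deployment the loadMapper performs this translation inside IntegrationManager, not hand-written code.

```python
# Mirrors the <body in="..." out="..."/> entries of import-csv-to-customers.mapper.
FIELD_MAP = {
    "customerId": "customerId",
    "customerName": "label",
    "country": "attribute1",
    "region": "attribute2",
    "segment": "attribute3",
    "industry": "attribute4",
    "accountManager": "attribute5",
    "creditLimit": "attribute6",
    "paymentTerms": "attribute7",
    "status": "attribute8",
}

def map_row(row: dict) -> dict:
    """Translate one parsed CSV row into a Pricefx Customer record."""
    return {out: row[src] for src, out in FIELD_MAP.items() if src in row}

row = {"customerId": "C-1001", "customerName": "Acme Corp", "country": "US"}
print(map_row(row))
# {'customerId': 'C-1001', 'label': 'Acme Corp', 'attribute1': 'US'}
```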

config/application.properties (snippet)

# Import directory for customer CSV files
import.customers.directory=/data/imports/customers

How It Works

  1. File Pickup: The file: component monitors the configured directory for CSV files. {{archive.file}} moves processed files to a timestamped archive; {{read.lock}} waits until the file is fully written before reading.

  2. Splitting: The split with tokenize token="\n" group="10000" breaks the file into batches of 10,000 lines, so it can be streamed without loading the whole file into memory. The recordsCountAggregation strategy accumulates the total processed record count across batches.

  3. CSV Unmarshalling: pfx-csv:unmarshal parses each batch into a list of maps using the CSV header row as keys.

  4. Loading to Pricefx: pfx-api:loaddata with objectType=C targets the Customers master table. The import-csv-to-customers.mapper maps CSV columns to Pricefx customer fields, and businessKeys=customerId ensures records are upserted (inserted or updated) by customer ID.

  5. Completion Log: The total record count is logged after all batches are processed.
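The split/aggregate steps above can be sketched in plain Python. Camel does the real work; the function and variable names here are illustrative only.

```python
# Rough analogue of tokenize + group="10000" with a running record count
# (the recordsCountAggregation analogue).
from itertools import islice

def batches(lines, size=10000):
    """Yield successive batches of at most `size` lines, streaming."""
    it = iter(lines)
    while batch := list(islice(it, size)):
        yield batch

total = 0
for batch in batches(f"line{i}" for i in range(25000)):
    total += len(batch)   # each batch would be unmarshalled and loaded here
print(total)  # 25000
```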

Common Pitfalls

  • customerId is mandatory: Every customer record must have a unique customerId. Records without this field will be rejected.

  • customerId vs label: The customerId is the unique business identifier (e.g., account number). The label is the display name. Do not confuse the two.

  • Business key must be customerId: For customer imports, always use businessKeys=customerId. Using a different field will cause duplicate records.

  • Attribute limits: Customers support attribute1 through attribute30. Plan your attribute mapping carefully as renumbering later requires data migration.

  • Special characters in customer IDs: Avoid using special characters (slashes, ampersands, etc.) in customer IDs as they can cause issues with API calls and URL encoding.
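If you pre-validate rows before handing the file to the route, a small check like the one below can catch the missing-ID and special-character pitfalls early. The allowed-character pattern is an illustrative choice for this sketch, not a Pricefx-mandated rule.

```python
import re

# Conservative allow-list: letters, digits, dot, underscore, hyphen.
SAFE_ID = re.compile(r"^[A-Za-z0-9._-]+$")

def validate_customer_id(customer_id):
    """Return a list of problems with the given customerId (empty = OK)."""
    problems = []
    if not customer_id:
        problems.append("customerId is mandatory")
    elif not SAFE_ID.match(customer_id):
        problems.append("customerId contains characters unsafe for URLs/API calls")
    return problems

print(validate_customer_id("C-1001"))  # []
print(validate_customer_id("A/B&C"))   # ['customerId contains characters unsafe for URLs/API calls']
```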