Using Data Validator Administration

With Data Validator Administration, you can control how supplied data is imported, validated, and loaded into a specific target system. Data Validator Administration consists of a set of components that manage the import of CSV files and validate the data for loading into the target system database.

We recommend using a plain-text editor, such as Notepad, to modify your CSV files. Spreadsheet applications can silently reformat values (for example, stripping leading zeros or converting identifiers to dates), which leads to unexpected results.
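
If you prefer to clean up a CSV file with a script instead, the following minimal Python sketch shows one way to rewrite a file without a spreadsheet application touching the values. The file names and the whitespace-trimming step are purely illustrative and are not part of Data Validator.

    import csv

    SOURCE = "instruments.csv"         # hypothetical input file
    CLEANED = "instruments_clean.csv"  # hypothetical output file

    with open(SOURCE, newline="", encoding="utf-8") as src:
        rows = list(csv.reader(src))

    # Trim stray whitespace in every cell. Values such as "007" pass
    # through unchanged, which a spreadsheet does not guarantee.
    cleaned = [[cell.strip() for cell in row] for row in rows]

    with open(CLEANED, "w", newline="", encoding="utf-8") as dst:
        csv.writer(dst).writerows(cleaned)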

After the data is imported into the staging area, the remaining Data Validator components are fully configurable and optional. For example, a job might include only the import and validation of the new data and provide a report on the quality of the imported data. You can also hold the export so that it does not start until the quality of the imported data is approved.

Data files are imported using an import definition, which specifies how the data is mapped to the target system using existing class definitions, property definitions, and relationship definitions. The imported data is validated against defined validation rule sets that can use the target system as their basis, and the export mapping ensures that the data is mapped correctly into the target system format. Job definitions combine all these components into one set of steps.
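
Validation rule sets are configured in Data Validator Administration rather than written in code, but as a rough analogy, the following Python sketch shows the kind of row-by-row checks a rule set performs on staged data. The UID column name, the two rules, and the file name are all invented for illustration.

    import csv

    # Hypothetical rule set: every row must carry a UID, and UIDs must be unique.
    def validate(path):
        errors = []
        seen = set()
        with open(path, newline="", encoding="utf-8") as f:
            for lineno, row in enumerate(csv.DictReader(f), start=2):
                uid = (row.get("UID") or "").strip()
                if not uid:
                    errors.append(f"line {lineno}: missing UID")
                elif uid in seen:
                    errors.append(f"line {lineno}: duplicate UID {uid!r}")
                seen.add(uid)
        return errors

    for problem in validate("instruments.csv"):
        print(problem)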

Data Validator does not process reference files. For example, if a document contains a reference to another file and the document is imported into Data Validator, the software does not maintain any relationships to the reference file when exported to a target system. You can manage reference files in the Desktop Client.

The following components are defined in Data Validator Administration and used to manage the movement of data:

  • Import process - Uses a definition to map data files to the staging area.

  • Validation process - Evaluates imported data against a defined set of rules to ensure the validity of the data.

  • Implicit delete process - Allows you to manage the deletion or termination of objects from a target system.

  • Export process - Uses a defined mapping, based on the structure of the target system, to manage loading data into the final destination system.

  • Job definition - Links together various importing, validating, and loading components to define how a set of data is processed by Data Validator Job Management (see the sketch after this list).

  • Target system - Destination where the validated data is loaded.

    • The UID definitions (unique identifiers) in the staging area must match those in the target system, and a UID definition must be set on every object that Data Validator exports from the staging area. When you create your mappings, set the correct UID for each object mapping in the import mapping. For more information, see Define column headers.

    • Data Validator Job Management runs the complete process using all the components to import, validate, and load the data into a target system database. For more information on Job Management, see Defining a Job in Data Validator Job Management.
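
As a rough analogy for how a job definition chains these components together, here is a schematic Python sketch. The step functions, object shapes, and approval flag are invented for illustration; actual jobs are configured and run in Data Validator Job Management rather than coded.

    # Schematic only: each function stands in for a configured component.
    def import_step(files):
        # Map data files into the staging area.
        return [{"UID": name, "status": "staged"} for name in files]

    def validate_step(staged):
        # Evaluate staged objects against the validation rule sets.
        return [obj for obj in staged if obj["UID"]]

    def export_step(valid):
        # Load approved data into the target system.
        print(f"exported {len(valid)} objects")

    def run_job(files, quality_approved):
        staged = import_step(files)
        valid = validate_step(staged)
        # Export does not start until the imported data is approved.
        if quality_approved:
            export_step(valid)
        else:
            print("export held pending approval")

    run_job(["P-1001", "P-1002"], quality_approved=False)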

What do you want to do?