Synthetic Data Vault
Copyright (c) 2023, DataCebo, Inc.

PARSynthesizer


The PARSynthesizer uses a deep learning method to train a model and generate synthetic data.

from sdv.sequential import PARSynthesizer

synthesizer = PARSynthesizer(metadata)
synthesizer.fit(data)

synthetic_data = synthesizer.sample(num_sequences=100)

Is the PARSynthesizer suited for your dataset? The PARSynthesizer is designed to work on multi-sequence data, which means that there are multiple sequences (usually belonging to different entities) present within the same dataset. This means that your metadata should include a sequence_key. Using this information, the PARSynthesizer creates brand new entities and brand new sequences for each one.

If your dataset contains only a single sequence of data, then the PARSynthesizer is not suited for your dataset.
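As a quick check, multi-sequence data pairs each row with an entity identifier, and the metadata marks that column as the sequence key. The sketch below builds a metadata dictionary by hand for a hypothetical patient-visit dataset (the column names `patient_id`, `visit_date`, and `heart_rate` are invented for this example); the dictionary layout mirrors the SDV metadata JSON format, so treat it as illustrative rather than authoritative.

```python
# Illustrative metadata for a hypothetical multi-sequence dataset.
# Column names ('patient_id', 'visit_date', 'heart_rate') are invented;
# the dict mirrors the SDV metadata JSON layout.
metadata_dict = {
    "columns": {
        "patient_id": {"sdtype": "id"},
        "visit_date": {"sdtype": "datetime"},
        "heart_rate": {"sdtype": "numerical"},
    },
    # The sequence key identifies which entity each row belongs to.
    # PARSynthesizer requires one; without it, the data is a single
    # sequence and PAR is not a good fit.
    "sequence_key": "patient_id",
    # Optional: the column that orders rows within each sequence.
    "sequence_index": "visit_date",
}

def is_multi_sequence(rows, key):
    """Return True if the data contains more than one distinct entity."""
    return len({row[key] for row in rows}) > 1

rows = [
    {"patient_id": "A", "visit_date": "2023-01-01", "heart_rate": 72},
    {"patient_id": "A", "visit_date": "2023-01-02", "heart_rate": 75},
    {"patient_id": "B", "visit_date": "2023-01-01", "heart_rate": 80},
]
print(is_multi_sequence(rows, metadata_dict["sequence_key"]))  # True
```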

Creating a synthesizer

When creating your synthesizer, you are required to pass in a Metadata object as the first argument. All other parameters are optional; you can include them to customize the synthesizer.

synthesizer = PARSynthesizer(
    metadata, # required
    enforce_min_max_values=True,
    enforce_rounding=False,
    context_columns=['Address', 'Smoker']
)

Parameter Reference

enforce_min_max_values: Control whether the synthetic data should adhere to the same min/max boundaries set by the real data

(default) True

The synthetic data will contain numerical values that are within the ranges of the real data.

False

The synthetic data may contain numerical values that are less than or greater than the real data. Note that you can still set limits on individual columns using Constraints.

enforce_rounding: Control whether the synthetic data should have the same number of decimal digits as the real data

(default) True

The synthetic data will be rounded to the same number of decimal digits that were observed in the real data

False

The synthetic data may contain more decimal digits than were observed in the real data

locales: A list of locale strings. Any PII columns will correspond to the locales that you provide.

(default) ['en_US']

Generate PII values in English corresponding to US-based concepts (eg. addresses, phone numbers, etc.)

<list>

Create data from the list of locales. Each locale string consists of a 2-character code for the language and a 2-character code for the country, separated by an underscore. For example ["en_US", "fr_CA"]. For all options, see the Faker docs.
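The locale format described above can be checked with a few lines of plain Python. This is an illustrative sketch of the string format only, not part of the SDV API:

```python
def parse_locale(locale):
    """Split a locale string like 'en_US' into (language, country) codes."""
    language, country = locale.split("_")
    # Each half is a 2-character code: lowercase language, uppercase country.
    assert len(language) == 2 and len(country) == 2
    return language.lower(), country.upper()

print(parse_locale("en_US"))  # ('en', 'US')
print(parse_locale("fr_CA"))  # ('fr', 'CA')
```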

context_columns: Provide a list of strings that represent the names of the context columns. Context columns do not vary inside of a sequence. For example, a user's 'Address' may not vary within a sequence while other columns such as 'Heart Rate' would. Defaults to an empty list.
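One way to sanity-check a candidate context column before fitting is to verify that it really is constant within every sequence. The plain-Python sketch below does that check (the column and key names are hypothetical; PARSynthesizer itself does not require this step):

```python
from collections import defaultdict

def constant_within_sequences(rows, sequence_key, column):
    """Return True if `column` never varies inside any single sequence."""
    seen = defaultdict(set)
    for row in rows:
        seen[row[sequence_key]].add(row[column])
    return all(len(values) == 1 for values in seen.values())

rows = [
    {"patient_id": "A", "Address": "12 Oak St", "Heart Rate": 72},
    {"patient_id": "A", "Address": "12 Oak St", "Heart Rate": 75},
    {"patient_id": "B", "Address": "9 Elm Ave", "Heart Rate": 80},
]

print(constant_within_sequences(rows, "patient_id", "Address"))     # True: valid context column
print(constant_within_sequences(rows, "patient_id", "Heart Rate"))  # False: varies within a sequence
```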

epochs: Number of times to train the neural network. Each new epoch can improve the model.

(default) 128

Run all the data through the neural network 128 times during training

<number>

Train for a different number of epochs. Note that larger numbers will increase the modeling time.

verbose: Control whether to print out the results of each epoch. You can use this to track the training time as well as the improvements per epoch.

(default) False

Do not print out any results

True

Print out the loss value per epoch. The loss values indicate how well the neural network is currently performing, with lower values indicating higher quality.

cuda: Whether to use CUDA, a parallel computing platform that allows you to speed up modeling time using the GPU

(default) True

If available, use CUDA to speed up modeling time. If it's not available, then there will be no difference.

False

Do not use CUDA to speed up modeling time.

Looking for more customizations? Other settings are available to fine-tune the architecture of the underlying neural network used to model the data. Click the section below to expand.

Click to expand additional neural network customization options

These settings are specific to the neural network. Use these settings if you want to optimize the technical architecture and modeling.

sample_size: The number of times to sample (before choosing and returning the sample which maximizes the likelihood). Defaults to 1.

segment_size: Cut each training sequence into several segments by using this parameter. For example, if the segment_size=10 then each segment contains 10 data points. Defaults to None, which means the sequences are not cut into any segments.
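To make the segment_size behavior concrete, here is a small sketch of how a training sequence could be cut into fixed-size segments. It mirrors the description above, not SDV's internal implementation; in particular, how a trailing partial segment is handled is an assumption here.

```python
def segment(sequence, segment_size):
    """Cut a sequence into consecutive segments of `segment_size` points.

    A trailing partial segment is kept in this sketch; SDV's internal
    handling may differ.
    """
    if segment_size is None:
        return [sequence]  # default: the sequence is not cut
    return [sequence[i:i + segment_size]
            for i in range(0, len(sequence), segment_size)]

data_points = list(range(25))
segments = segment(data_points, 10)
print([len(s) for s in segments])  # [10, 10, 5]
print(segment(data_points, None) == [data_points])  # True
```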

get_parameters

Use this function to access the custom parameters you have included for the synthesizer

Parameters None

Output A dictionary with the parameter names and the values

synthesizer.get_parameters()
{
    'enforce_min_max_values': True,
    'enforce_rounding': False,
    'context_columns': ['Address', 'Smoker']
}

The returned parameters are a copy. Changing them will not affect the synthesizer.

get_metadata

Use this function to access the metadata object that you have included for the synthesizer

Parameters None

Output A Metadata object

metadata = synthesizer.get_metadata()

The returned metadata is a copy. Changing it will not affect the synthesizer.

Learning from your data

To learn a machine learning model based on your real data, use the fit method.

fit

Parameters

  • (required) data: A pandas DataFrame object containing the real data that the machine learning model will learn from

Output (None)

synthesizer.fit(data)

Technical Details: PAR is a Probabilistic Auto-Regressive model that is based on neural networks. It learns how to create brand new sequences of multi-dimensional data by conditioning on the unchanging context values. For more details, see Sequential Models in the Synthetic Data Vault, a preprint from June 2022 that describes the PAR model.

Saving your synthesizer

Save your trained synthesizer for future use.

save

Use this function to save your trained synthesizer as a Python pickle file.

Parameters

  • (required) filepath: A string describing the filepath where you want to save your synthesizer. Make sure this ends in .pkl

Output (None) The file will be saved at the desired location

synthesizer.save(
    filepath='my_synthesizer.pkl'
)

PARSynthesizer.load

Use this function to load a trained synthesizer from a Python pickle file

Parameters

  • (required) filepath: A string describing the filepath of your saved synthesizer

Output Your synthesizer, as a PARSynthesizer object

from sdv.sequential import PARSynthesizer

synthesizer = PARSynthesizer.load(
    filepath='my_synthesizer.pkl'
)
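Under the hood, save and load use Python's pickle format. The stand-alone sketch below shows the same round trip with a plain placeholder object in a temporary directory, just to illustrate the pattern; the class here is a stand-in, not a real SDV synthesizer.

```python
import os
import pickle
import tempfile

class StandInSynthesizer:
    """Placeholder object used only to illustrate the save/load round trip."""
    def __init__(self, epochs):
        self.epochs = epochs

synthesizer = StandInSynthesizer(epochs=128)

with tempfile.TemporaryDirectory() as tmpdir:
    filepath = os.path.join(tmpdir, "my_synthesizer.pkl")  # ends in .pkl

    # save: serialize the trained object to disk
    with open(filepath, "wb") as f:
        pickle.dump(synthesizer, f)

    # load: restore the object, e.g. in a later session
    with open(filepath, "rb") as f:
        restored = pickle.load(f)

print(restored.epochs)  # 128
```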

What's next?

After training your synthesizer, you can now sample synthetic data. See the Sampling section for more details.

Want to improve your synthesizer? Customize the transformations used for pre- and post-processing the data. For more details, see Advanced Features.

FAQs

How do I cite PAR?

Kevin Zhang, Kalyan Veeramachaneni, Neha Patki. Sequential Models in the Synthetic Data Vault. Preprint, June 2022.

@unpublished{par,
   title={Sequential Models in the Synthetic Data Vault},
   author={Zhang, Kevin and Veeramachaneni, Kalyan and Patki, Neha},
   year={2022}
}

What happens if columns don't contain numerical data?

This synthesizer models non-numerical columns, including columns with missing values. Although the underlying algorithm is designed for only numerical data, this synthesizer converts other data types using Reversible Data Transforms (RDTs). To access and modify the transformations, see Advanced Features.

Can I call fit again even if I've previously fit some data?

Yes, even if you've previously fit data, you should be able to call the fit method again.

If you do this, the synthesizer will start over from scratch and fit the new data that you provide it. This is the equivalent of creating a new synthesizer and fitting it with new data.
