PARSynthesizer

The PARSynthesizer uses a deep learning method to train a model and generate synthetic data.

from sdv.sequential import PARSynthesizer

synthesizer = PARSynthesizer(metadata)
synthesizer.fit(data)

synthetic_data = synthesizer.sample(num_sequences=100)

Is the PARSynthesizer suited for your dataset? The PARSynthesizer is designed to work on multi-sequence data, which means that there are multiple sequences (usually belonging to different entities) present within the same dataset. This means that your metadata should include a sequence_key. Using this information, the PARSynthesizer creates brand new entities and brand new sequences for each one.

If your dataset contains only a single sequence of data, then the PARSynthesizer is not suited for your dataset.
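To make the distinction concrete, here is a quick check on a hypothetical dataset (the column names 'Patient ID', 'Visit', and 'Heart Rate' are made up for illustration): more than one entity means multi-sequence data, which is what the PARSynthesizer expects.

```python
# Hypothetical multi-sequence data: each 'Patient ID' owns one sequence of rows
rows = [
    {'Patient ID': 'P1', 'Visit': 1, 'Heart Rate': 72},
    {'Patient ID': 'P1', 'Visit': 2, 'Heart Rate': 75},
    {'Patient ID': 'P2', 'Visit': 1, 'Heart Rate': 80},
    {'Patient ID': 'P2', 'Visit': 2, 'Heart Rate': 78},
]

# The sequence key ('Patient ID' here) identifies each entity's sequence
num_sequences = len({row['Patient ID'] for row in rows})
print(num_sequences)  # 2 -> multiple sequences, so PARSynthesizer applies
```

If this count is 1, your data is a single sequence and another approach is needed.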

Creating a synthesizer

When creating your synthesizer, you are required to pass in a Metadata object as the first argument. All other parameters are optional. You can include them to customize the synthesizer.

synthesizer = PARSynthesizer(
    metadata, # required
    enforce_min_max_values=True,
    enforce_rounding=False,
    context_columns=['Address', 'Smoker']
)

Parameter Reference

enforce_min_max_values: Control whether the synthetic data should adhere to the same min/max boundaries set by the real data

(default) True

The synthetic data will contain numerical values that are within the ranges of the real data.

False

The synthetic data may contain numerical values that are less than or greater than the real data. Note that you can still set the limits on individual columns using Constraints.

enforce_rounding: Control whether the synthetic data should have the same number of decimal digits as the real data

(default) True

The synthetic data will be rounded to the same number of decimal digits that were observed in the real data

False

The synthetic data may contain more decimal digits than were observed in the real data

locales: A list of locale strings. Any PII columns will correspond to the locales that you provide.

(default) ['en_US']

Generate PII values in English corresponding to US-based concepts (e.g. addresses, phone numbers, etc.)

<list>

Create data from the list of locales. Each locale string consists of a 2-character code for the language and 2-character code for the country, separated by an underscore.

For example ["en_US", "fr_CA"].

For all options, see the Faker docs.

context_columns: Provide a list of strings that represent the names of the context columns. Context columns do not vary inside of a sequence. For example, a user's 'Address' does not vary within a sequence, while other columns such as 'Heart Rate' do. Defaults to an empty list.
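Before passing context_columns, it can help to verify that each candidate really is constant within every sequence. A minimal sketch using pandas (with hypothetical column names):

```python
import pandas as pd

# Hypothetical data; 'Address' is constant per entity, 'Heart Rate' is not
data = pd.DataFrame({
    'Patient ID': ['P1', 'P1', 'P2', 'P2'],
    'Address': ['12 Oak St', '12 Oak St', '3 Elm Ave', '3 Elm Ave'],
    'Heart Rate': [72, 75, 80, 78],
})

# A column qualifies as context only if it has one unique value per sequence
per_sequence = data.groupby('Patient ID').nunique()
print(per_sequence['Address'].eq(1).all())     # True  -> valid context column
print(per_sequence['Heart Rate'].eq(1).all())  # False -> varies, not context
```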

epochs: Number of times to train the neural network. Each new epoch can improve the model.

(default) 128

Run all the data through the neural network 128 times during training

<number>

Train for a different number of epochs. Note that larger numbers will increase the modeling time.

verbose: Control whether to print out the results of each epoch. You can use this to track the training time as well as the improvements per epoch.

(default) False

Do not print out any results

True

Print out the loss value per epoch. The loss values indicate how well the neural network is currently performing, with lower values indicating higher quality.

cuda: Whether to use CUDA, a parallel computing platform that allows you to speed up modeling time using the GPU

(default) True

If available, use CUDA to speed up modeling time. If it's not available, then there will be no difference.

False

Do not use CUDA to speed up modeling time.

Looking for more customizations? Other settings are available to fine-tune the architecture of the underlying neural network used to model the data.

These settings are specific to the neural network. Use these settings if you want to optimize the technical architecture and modeling.

sample_size: The number of times to sample (before choosing and returning the sample which maximizes the likelihood). Defaults to 1.

segment_size: Cut each training sequence into several segments by using this parameter. For example, if segment_size=10, then each segment contains 10 data points. Defaults to None, meaning the sequences are not cut into segments.
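One way to picture segmentation (an illustration only; how PAR handles a leftover segment shorter than segment_size is internal to the model):

```python
# Sketch: splitting one 25-point training sequence with segment_size=10
sequence = list(range(25))
segment_size = 10

segments = [
    sequence[i:i + segment_size]
    for i in range(0, len(sequence), segment_size)
]
print([len(s) for s in segments])  # [10, 10, 5]
```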

get_parameters

Use this function to access the custom parameters you have included for the synthesizer

Parameters None

Output A dictionary with the parameter names and the values

synthesizer.get_parameters()
{
    'enforce_min_max_values': True,
    'enforce_rounding': False,
    'context_columns': ['Address', 'Smoker']
}

The returned parameters are a copy. Changing them will not affect the synthesizer.

get_metadata

Use this function to access the metadata object that you have included for the synthesizer

Parameters None

Output A Metadata object

metadata = synthesizer.get_metadata()

The returned metadata is a copy. Changing it will not affect the synthesizer.

Learning from your data

To learn a machine learning model based on your real data, use the fit method.

fit

Parameters

  • (required) data: A pandas DataFrame object containing the real data that the machine learning model will learn from

Output (None)

synthesizer.fit(data)

Technical Details: PAR is a Probabilistic Auto-Regressive model based on neural networks. It learns to create brand new sequences of multi-dimensional data by conditioning on the unchanging context values.

For more details, see Sequential Models in the Synthetic Data Vault, a preprint from June 2022 that describes the PAR model.
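The idea of autoregressive generation conditioned on context can be sketched in a few lines. This toy is NOT the real PAR architecture; it only shows the shape of the computation: the context stays fixed for the whole sequence, while each new value depends on the values generated so far.

```python
import random

random.seed(42)

def generate_sequence(context, length):
    # 'context' anchors the whole sequence (it never varies within it)
    values = [context]
    for _ in range(length - 1):
        step = random.gauss(0, 1)          # stand-in for PAR's learned model
        values.append(values[-1] + step)   # condition on the previous value
    return values

seq = generate_sequence(context=70.0, length=5)
print(len(seq), seq[0])  # 5 70.0
```

In the real model, a neural network replaces the random step and is trained on all sequences at once.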

Saving your synthesizer

Save your trained synthesizer for future use.

save

Use this function to save your trained synthesizer as a Python pickle file.

Parameters

  • (required) filepath: A string describing the filepath where you want to save your synthesizer. Make sure this ends in .pkl

Output (None) The file will be saved at the desired location

synthesizer.save(
    filepath='my_synthesizer.pkl'
)

PARSynthesizer.load

Use this function to load a trained synthesizer from a Python pickle file

Parameters

  • (required) filepath: A string describing the filepath of your saved synthesizer

Output Your synthesizer, as a PARSynthesizer object

from sdv.sequential import PARSynthesizer

synthesizer = PARSynthesizer.load(
    filepath='my_synthesizer.pkl'
)

What's next?

After training your synthesizer, you can now sample synthetic data. See the Sampling section for more details.

Want to improve your synthesizer? Customize the transformations used for pre- and post-processing the data. For more details, see Advanced Features.

FAQs

How do I cite PAR?

Kevin Zhang, Kalyan Veeramachaneni, Neha Patki. Sequential Models in the Synthetic Data Vault. Preprint, June 2022.

@unpublished{par,
   title={Sequential Models in the Synthetic Data Vault},
   author={Zhang, Kevin and Veeramachaneni, Kalyan and Patki, Neha},
   year={2022}
}

What happens if columns don't contain numerical data?

This synthesizer models non-numerical columns, including columns with missing values.

Although the PAR algorithm is designed for only numerical data, this synthesizer converts other data types using Reversible Data Transforms (RDTs). To access and modify the transformations, see Advanced Features.

Can I call fit again even if I've previously fit some data?

Yes, even if you've previously fit data, you can call the fit method again.

If you do this, the synthesizer will start over from scratch and fit the new data that you provide it. This is the equivalent of creating a new synthesizer and fitting it with new data.

Copyright (c) 2023, DataCebo, Inc.