The TVAE Synthesizer uses variational autoencoder (VAE)-based neural network techniques to train a model and generate synthetic data.

from sdv.single_table import TVAESynthesizer

synthesizer = TVAESynthesizer(metadata)
synthesizer.fit(data)

synthetic_data = synthesizer.sample(num_rows=10)

Creating a synthesizer

When creating your synthesizer, you are required to pass in a SingleTableMetadata object as the first argument. All other parameters are optional. You can include them to customize the synthesizer.

synthesizer = TVAESynthesizer(
    metadata, # required
    enforce_min_max_values=True,
    enforce_rounding=False,
    epochs=500
)

Parameter Reference

enforce_min_max_values: Control whether the synthetic data should adhere to the same min/max boundaries set by the real data

(default) True

The synthetic data will contain numerical values that are within the ranges of the real data.

False
The synthetic data may contain numerical values that are less than or greater than the real data. Note that you can still set the limits on individual columns using Constraints.

enforce_rounding: Control whether the synthetic data should have the same number of decimal digits as the real data

(default) True

The synthetic data will be rounded to the same number of decimal digits that were observed in the real data

False
The synthetic data may contain more decimal digits than were observed in the real data

locales: A list of locale strings. Any PII columns will correspond to the locales that you provide.

(default) ['en_US']

Generate PII values in English corresponding to US-based concepts (eg. addresses, phone numbers, etc.)

<list of strings>
Create data from the list of locales. Each locale string consists of a 2-character code for the language and 2-character code for the country, separated by an underscore.

For example ["en_US", "fr_CA"].

For all options, see the Faker docs.

epochs: Number of times to train the VAE. Each new epoch can improve the model.

(default) 300

Run all the data through the VAE's encoder and decoder 300 times during training

<number>
Train for a different number of epochs. Note that larger numbers will increase the modeling time.

cuda: Whether to use CUDA, a parallel computing platform that allows you to speed up modeling time using the GPU

(default) True

If available, use CUDA to speed up modeling time. If it's not available, then there will be no difference.

False
Do not use CUDA to speed up modeling time.

Looking for more customizations? Other settings are available to fine-tune the architecture of the neural network used to model the data.


These settings are specific to the neural network. Use these settings if you want to optimize the technical architecture and modeling.

batch_size: Number of data samples to process in each training step. Defaults to 500.

compress_dims: Size of each hidden layer in the encoder. Defaults to (128, 128).

decompress_dims: Size of each hidden layer in the decoder. Defaults to (128, 128).

embedding_dim: Size of the latent space (the random sample passed to the decoder). Defaults to 128.

l2scale: Regularization term. Defaults to 1e-5.

loss_factor: Multiplier for the reconstruction error. Defaults to 2.



Use the get_parameters function to access all the parameters your synthesizer uses -- those you have provided as well as the default ones.

Parameters None

Output A dictionary with the parameter names and the values

parameters = synthesizer.get_parameters()

{
    'enforce_rounding': False,
    'epochs': 500,
    ...
}

The returned parameters are a copy. Changing them will not affect the synthesizer.


Use the get_metadata function to access the metadata object that you have included for the synthesizer.

Parameters None

Output A SingleTableMetadata object

metadata = synthesizer.get_metadata()

The returned metadata is a copy. Changing it will not affect the synthesizer.

Learning from your data

To learn a machine learning model based on your real data, use the fit method.


Parameters
  • (required) data: A pandas DataFrame object containing the real data that the machine learning model will learn from

Output (None)

Technical Details: This synthesizer uses the TVAE to learn a model from real data and create synthetic data. The TVAE uses variational autoencoders (VAEs) to model data, as described in the Modeling Tabular data using Conditional GAN paper which was presented at the NeurIPS conference in 2019.


After fitting, you can use the get_loss_values method to access the loss values computed during each epoch and batch.

Parameters (None)

Output A pandas.DataFrame object containing epoch number, batch number and loss value.

Epoch     Batch    Loss 
1         1        1.7863
1         2        1.5484
1         3        1.3633

Saving your synthesizer

Save your trained synthesizer for future use.


Use the save function to save your trained synthesizer as a Python pickle file.

Parameters
  • (required) filepath: A string describing the filepath where you want to save your synthesizer. Make sure this ends in .pkl

Output (None) The file will be saved at the desired location


Use the load function to load a trained synthesizer from a Python pickle file.

Parameters
  • (required) filepath: A string describing the filepath of your saved synthesizer

Output Your synthesizer, as a TVAESynthesizer object

from sdv.single_table import TVAESynthesizer

synthesizer = TVAESynthesizer.load(
    filepath='my_synthesizer.pkl'
)
What's next?

After training your synthesizer, you can now sample synthetic data. See the Sampling section for more details.

Want to improve your synthesizer? Input logical rules in the form of constraints, and customize the transformations used for pre- and post-processing the data.

For more details, see Customizations.


What happens if columns don't contain numerical data?

This synthesizer models non-numerical columns, including columns with missing values.

Although the TVAE algorithm is designed for complete, numerical data, this synthesizer converts other data types using Reversible Data Transforms (RDTs). To access and modify the transformations, see Advanced Features.

How many epochs should I train for?

Unfortunately, there is no one-size-fits-all solution for this question! The optimal number of epochs depends on both the complexity of your dataset and the metrics you are using to quantify success.

Our experiments suggest that increasing the number of epochs helps up until a certain inflection point. After this, there is no significant improvement. Keep in mind that increasing the epochs also increases the training time. More information is available in this discussion.

Can I call fit again even if I've previously fit some data?

Yes. Even if you've previously fit data, you can call the fit method again.

If you do this, the synthesizer will start over from scratch and fit the new data that you provide it. This is the equivalent of creating a new synthesizer and fitting it with new data.

How do I cite TVAE?

The TVAE model was introduced in the same paper as CTGAN.

Lei Xu, Maria Skoularidou, Alfredo Cuesta-Infante, Kalyan Veeramachaneni. Modeling Tabular data using Conditional GAN. NeurIPS, 2019.

@inproceedings{xu2019modeling,
   title={Modeling Tabular data using Conditional GAN},
   author={Xu, Lei and Skoularidou, Maria and Cuesta-Infante, Alfredo and Veeramachaneni, Kalyan},
   booktitle={Advances in Neural Information Processing Systems},
   year={2019}
}


Copyright (c) 2023, DataCebo, Inc.