Sampling

Use these sampling methods to create synthetic data from your multi table model.

Sample Realistic Data

Create realistic synthetic data that follows the same format and mathematical properties as the real data.

sample

Use this function to create synthetic data that mimics the real data.

synthetic_data = synthesizer.sample(
    scale=1.5
)

Parameters

  • scale: A float >0.0 that describes how much to scale the data by

  • (default) 1: Don't scale the data. The model will create synthetic data that is roughly the same size as the original data.

  • >1: Scale the data by the specified factor. For example, 2.5 will create synthetic data that is roughly 2.5x the size of the original data.

  • <1: Shrink the data to the specified fraction. For example, 0.9 will create synthetic data that is roughly 90% of the size of the original data.

Returns A dictionary that maps each table name (string) to a pandas DataFrame object with the synthetic data for that table. The synthetic data mimics the real data.

How large will the synthetic data be? The scale is based on the size of the data you used for training. The scale determines the size of every parent table (i.e. a table without any foreign keys).

Note that the synthesizer will algorithmically determine the size of the child tables, so their final sizes will approximately follow the scale, with some minor deviations.
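
As a quick illustration, you can compare the size of each synthetic table against the real data. The sketch below is illustrative only: it assumes you have already fit a multi table synthesizer, and that real_data is the dictionary of DataFrames it was trained on (a hypothetical variable name).

synthetic_data = synthesizer.sample(scale=2.5)

# Compare row counts table by table. Parent tables should be roughly
# 2.5x the size of the real data; child table sizes are determined
# algorithmically and will only approximately follow the scale.
for table_name, synthetic_table in synthetic_data.items():
    print(table_name, len(real_data[table_name]), len(synthetic_table))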

reset_sampling

Use this function to reset any randomization in sampling. After calling this, your synthesizer will generate the same data as before. For example, in the code below, synthetic_data1 and synthetic_data2 are the same.

synthesizer.reset_sampling()
synthetic_data1 = synthesizer.sample(scale=1.5)

synthesizer.reset_sampling()
synthetic_data2 = synthesizer.sample(scale=1.5)

Parameters None

Returns None. Resets the synthesizer.
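
As a quick sanity check (plain pandas, not part of the SDV API), you can confirm the claim above by comparing the two samples table by table. If sampling is fully deterministic after the reset, every assertion should pass.

# Every table in synthetic_data1 should match its counterpart exactly.
for table_name in synthetic_data1:
    assert synthetic_data1[table_name].equals(synthetic_data2[table_name])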

Export Your Data

After sampling, export the data back into its original format.

See the Loading Data section for options.
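
For example, because every table is returned as a pandas DataFrame, one simple option is to write each table to its own CSV file. The filename pattern below is just an illustration.

# Save each synthetic table as <table_name>.csv in the current directory.
for table_name, synthetic_table in synthetic_data.items():
    synthetic_table.to_csv(f'{table_name}.csv', index=False)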
