AWS Runs
This page guides you through running the SDGym benchmark on the cloud using AWS. AWS will be used for accessing any custom datasets you may have on S3, running the synthesizers on EC2, and writing the final results into S3.
To run the benchmark locally, please see the guide for Local Runs.
```python
import sdgym

results = sdgym.benchmark_single_table_aws(
    aws_access_key_id='my_access_key',
    aws_secret_access_key='my_secret',
    output_destination='s3://sdgym_results_bucket/'
)
```

See Interpreting Results for a description of the benchmarking results.
Authentication Parameters
These parameters are required unless you have followed Amazon's instructions to set up environment variables. We recommend supplying these parameters to ensure the benchmark can access S3 and EC2.
aws_access_key_id: A string containing your AWS access key id
aws_secret_access_key: A string containing your AWS secret access key
Optional Parameters
Every step of the benchmarking process is customizable. Use the optional parameters to control the setup, execution and evaluation.
Setup
Use these parameters to control which synthesizers and datasets to include in the benchmark.
synthesizers: Control which synthesizers to use by supplying a list of strings with the synthesizer names
(default)

```python
['GaussianCopulaSynthesizer', 'CTGANSynthesizer', 'UniformSynthesizer']
```

Options include 'GaussianCopulaSynthesizer', 'CTGANSynthesizer', 'TVAESynthesizer', 'CopulaGANSynthesizer', and many more. You may supply SDV Synthesizers, Basic Synthesizers, or 3rd Party Synthesizers. Currently, custom synthesizers are not supported for AWS runs; please run your benchmark locally in this case.
```python
sdgym.benchmark_single_table_aws(
    synthesizers=['TVAESynthesizer', 'ColumnSynthesizer', 'RealTabFormerSynthesizer'])
```

Simulating graceful degradation. SDGym always runs the UniformSynthesizer as a backup synthesizer, even if it is not explicitly specified. This backup synthesizer is used to simulate graceful degradation in an enterprise setting. For more information, see Graceful Handling of Errors.
sdv_datasets: Control which of the SDV demo datasets to use by supplying their names as a list of strings.
(default)

```python
['adult', 'alarm', 'census', 'child', 'expedia_hotel_logs', 'insurance', 'intrusion', 'news', 'covtype']
```

See Datasets for more options.
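For example, you can restrict the benchmark to a subset of the demo datasets. This is an illustrative sketch; the credentials and bucket name are placeholders you would replace with your own.

```python
import sdgym

# Run the benchmark on only two of the SDV demo datasets.
# The credentials and output bucket below are placeholders.
results = sdgym.benchmark_single_table_aws(
    aws_access_key_id='my_access_key',
    aws_secret_access_key='my_secret',
    output_destination='s3://sdgym_results_bucket/',
    sdv_datasets=['adult', 'insurance']
)
```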
additional_datasets_folder: Supply the name of an S3 bucket that contains additional datasets.
(default)

None: Do not run the benchmark for any additional datasets.

<string>: The path to your S3 bucket that contains additional datasets. This should start with the prefix s3://. Make sure your datasets are in the correct format as described in the Dataset Format guide. Also make sure that you have provided the permissions to read from this folder.
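As a sketch, the call below benchmarks your own datasets alongside the demo datasets. The bucket names and credentials are placeholders; your AWS user must have read access to the additional datasets bucket.

```python
import sdgym

# Include datasets from your own S3 bucket alongside the demo datasets.
# 's3://my_datasets_bucket/' is a placeholder bucket name.
results = sdgym.benchmark_single_table_aws(
    aws_access_key_id='my_access_key',
    aws_secret_access_key='my_secret',
    output_destination='s3://sdgym_results_bucket/',
    additional_datasets_folder='s3://my_datasets_bucket/'
)
```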
Execution
Use these parameters to control the speed and flow of the benchmarking run.
limit_dataset_size: Set this boolean to limit the size of every dataset. This will yield faster results but may affect the overall quality.
(default)

False: Use the full datasets for benchmarking.

True: Limit the dataset size before benchmarking. For every dataset selected, use only 1,000 rows (randomly sampled) and the first 10 columns.
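Limiting the dataset size is useful for a quick smoke test before committing to a full run. A minimal sketch, with placeholder credentials:

```python
import sdgym

# Quick smoke test: sample 1,000 rows and the first 10 columns
# of each dataset before benchmarking.
results = sdgym.benchmark_single_table_aws(
    aws_access_key_id='my_access_key',
    aws_secret_access_key='my_secret',
    output_destination='s3://sdgym_results_bucket/',
    limit_dataset_size=True
)
```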
timeout: The maximum number of seconds to give to each synthesizer to train and sample a dataset
(default)

None: Do not set a maximum. Allow the synthesizer to take as long as it needs.

<integer>: Allow each synthesizer to run for this number of seconds per dataset. If a synthesizer exceeds the time limit, the benchmark will report a SynthesizerTimeoutError.
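For example, you could cap each synthesizer at one hour per dataset. The credentials below are placeholders:

```python
import sdgym

# Give each synthesizer at most 1 hour (3600 seconds) to train
# and sample each dataset; slower runs are reported as timeouts.
results = sdgym.benchmark_single_table_aws(
    aws_access_key_id='my_access_key',
    aws_secret_access_key='my_secret',
    output_destination='s3://sdgym_results_bucket/',
    timeout=3600
)
```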
output_destination: Supply the name of an S3 bucket where you'd like to save the final results, as well as all the detailed artifacts created in the process.
(default)

None: Do not save any of the results.

<string>: The path to your S3 bucket where you'd like to store the final results and detailed artifacts. This should start with the prefix s3://. For more details on what will be saved, see the Results Summary and Artifacts guides.
Evaluation
Use the evaluation parameters to control what to measure when benchmarking.
The SDGym benchmark will always measure performance (time and memory). Use additional parameters to evaluate other aspects of the synthetic data after it's created.
compute_diagnostic_score: Set this boolean to generate an overall diagnostic score for every synthesizer and dataset. This may increase the benchmarking time.
(default)

True: Compute an overall diagnostic score. See the SDMetrics Diagnostic Report for more details.

False: Do not compute a diagnostic score.
compute_quality_score: Set this boolean to generate an overall quality score for every synthesizer and dataset. This may increase the benchmarking time.
(default)

True: Compute an overall quality score. See the SDMetrics Quality Report for more details.

False: Do not compute a quality score.
compute_privacy_score: Set this boolean to generate an overall privacy score for every synthesizer and dataset. This may increase the benchmarking time.
(default)

True: Compute the privacy score. See the DCRBaselineProtection metric for more details.

False: Do not compute a privacy score.
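Since each score adds benchmarking time, you can disable the ones you don't need. A sketch with placeholder credentials, keeping only the diagnostic score:

```python
import sdgym

# Speed up the run by skipping the quality and privacy evaluations.
results = sdgym.benchmark_single_table_aws(
    aws_access_key_id='my_access_key',
    aws_secret_access_key='my_secret',
    output_destination='s3://sdgym_results_bucket/',
    compute_diagnostic_score=True,
    compute_quality_score=False,
    compute_privacy_score=False
)
```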
sdmetrics: Provide a list of strings corresponding to additional metrics from the SDMetrics library.
(default)

None: Do not apply any additional metrics.

See the SDMetrics library for more metric options.
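As an illustration, the call below requests one extra metric by name. 'NewRowSynthesis' is a metric from the SDMetrics library used here as an example; check the SDMetrics documentation for the full list of supported metric names. Credentials are placeholders.

```python
import sdgym

# Apply an additional SDMetrics metric on top of the default evaluation.
# The metric name here is illustrative.
results = sdgym.benchmark_single_table_aws(
    aws_access_key_id='my_access_key',
    aws_secret_access_key='my_secret',
    output_destination='s3://sdgym_results_bucket/',
    sdmetrics=['NewRowSynthesis']
)
```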