* BigQuery

If your data is available in a BigQuery database, you can connect to it directly to extract the data and metadata. Later, you can use the same connector to write synthetic data back to a database.

This functionality is in Beta! Beta functionality may have bugs and may change in the future. Help us out by testing this functionality and letting us know if you encounter any issues.

Installation

*SDV Enterprise Feature. This feature is only available for licensed, enterprise users. To learn more about the SDV Enterprise features and purchasing a license, visit our website.

To use this feature, please make sure you have installed SDV Enterprise with the optional db-bigquery dependency. For more information, see the SDV Enterprise Installation guide.

pip install sdv_enterprise[db-bigquery] --index-url https://pypi.datacebo.com --timeout 600

* BigQueryConnector

Use this connector to create a connection to your Google BigQuery database.

from sdv.io import BigQueryConnector

connector = BigQueryConnector()

Parameters (None)

Output A BigQueryConnector object that you can use to import data and metadata

Importing Real Data

Import your real data from a database to use with SDV.

* set_import_config

Use this function to authenticate into the project and dataset you'd like to import from.

connector.set_import_config(
    project_id='my_project_id',
    dataset_id='my_dataset',
    auth={
        'info': { ... }
    }
)

Parameters

  • (required) project_id: A string with the name of your project in BigQuery

  • (required) dataset_id: A string with the name of your dataset in BigQuery

  • auth: A dictionary with your authentication credentials.

    • (default) None: Use the auth credentials from your environment

How do you pass auth credentials? The recommended approach is to download a JSON file from BigQuery and pass in the filepath. To generate the JSON file, see the BigQuery docs.

auth={
    'info': {
        'json_credentials_path': 'my_folder/credentials.json'
    }
}

Alternatively, you can pass this information in directly as a dictionary.

auth={
    'info': {
        'private_key': ...,
        'client_email': ...,
    }
}

Which permissions are needed for importing? Importing data requires read access. For BigQuery, this includes: bigquery.jobs.create, bigquery.tables.get, bigquery.tables.getData, bigquery.tables.list, and bigquery.datasets.get. If you do not have these permissions, please contact your database admin.

Output (None)

* create_metadata

Use this function to create metadata based on the connection to your database.

metadata = connector.create_metadata(
    table_names=['users', 'transactions', 'sessions'])

Parameters

  • table_names: A list of strings representing the table names that you want to create metadata for

    • (default) None: Create metadata for all the tables in the database

Output A MultiTableMetadata object representing your metadata

The detected metadata is not guaranteed to be accurate or complete. Be sure to carefully inspect the metadata and update information. For more information, see the Metadata Inspection and Update API.
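For example, if a column was detected with the wrong sdtype, you can correct it before importing any data. The snippet below is a minimal sketch using SDV's metadata update API; the table name, column name, and sdtype shown are placeholders for illustration.

# Check that the detected metadata is valid
metadata.validate()

# Example correction: mark a placeholder column as an ID
metadata.update_column(
    table_name='users',
    column_name='user_id',
    sdtype='id'
)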

* import_random_subset

Use this function to import a random subset of your data from the database, using the metadata you created. The size of the subset is automatically determined.

data = connector.import_random_subset(
    metadata=metadata,
    verbose=True
)

Parameters

  • (required) metadata: A MultiTableMetadata object that describes the data you want to import

  • random_state: An integer that represents the random seed

    • (default) None: Different random data will be imported every time you call the function

    • <integer>: Any time you call the function with this integer, the same random data will be imported

  • verbose: A boolean describing whether to print out details about the progress of the import

    • (default) True: Print out the table names and number of rows being imported

    • False: Do not print any details

Output A dictionary that maps each table name of your database (string) to the data, represented as a pandas DataFrame.
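As a usage sketch, you can access each imported table by name and pass a random_state for reproducibility; the 'users' table name and seed value below are placeholders.

data = connector.import_random_subset(
    metadata=metadata,
    random_state=0,
    verbose=False
)

# Each value is a pandas DataFrame, keyed by table name
users_df = data['users']
print(users_df.shape)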

* import_optimized_subset

Use this function to import a subset of your data, optimized specifically for a given table. You can also control the size.

data = connector.import_optimized_subset(
    metadata=metadata,
    main_table_name='users',
    num_rows=5000
)

Parameters

  • (required) metadata: A MultiTableMetadata object that describes the data you want to import

  • (required) main_table_name: A string containing the name of the most important table of your database. This table will generally represent the entity that is most critical to your application or business. It must be one of the tables listed in your metadata object.

  • num_rows: The number of rows to sample from the main table. The size of every other table is automatically determined by its connection to the main table.

  • random_state: An integer that represents the random seed

    • (default) None: Different random data will be imported every time you call the function

    • <integer>: Any time you call the function with this integer, the same random data will be imported

  • verbose: A boolean describing whether to print out details about the progress of the import

    • (default) True: Print out the table names and number of rows being imported

    • False: Do not print any details

Output A dictionary that maps each table name of your database (string) to the data, represented as a pandas DataFrame.

After importing the data and metadata, you are now ready to create an SDV synthesizer.
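For example, a minimal sketch of that next step might look like the following, assuming you choose SDV's HMASynthesizer for multi-table data.

from sdv.multi_table import HMASynthesizer

# Train a multi-table synthesizer on the imported data
synthesizer = HMASynthesizer(metadata)
synthesizer.fit(data)

# Sample synthetic data at roughly the same scale as the imported data
synthetic_data = synthesizer.sample(scale=1)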

Exporting synthetic data

Export synthetic data into a new database.

We recommend using the same connector as your import. This connector object already knows about the specifics of your database schema. It will ensure that the exported data schema has the same format.

* set_export_config

Use this function to specify which project and dataset you'd like to export data to. Also provide your authentication credentials.

connector.set_export_config(
    project_id='my_project_id',
    dataset_id='my_dataset',
    auth={
        'info': { ... }
    }
)

Parameters

  • (required) project_id: A string with the name of your project in BigQuery

  • (required) dataset_id: A string with the name of your dataset in BigQuery

  • auth: A dictionary with your authentication credentials.

    • (default) None: Use the auth credentials from your environment

How do you pass auth credentials? The recommended approach is to download a JSON file from BigQuery and pass in the filepath. To generate the JSON file, see the BigQuery docs.

auth={
    'info': {
        'json_credentials_path': 'my_folder/credentials.json'
    }
}

Alternatively, you can pass this information in directly as a dictionary.

auth={
    'info': {
        'private_key': ...,
        'client_email': ...,
    }
}

Which permissions are needed for exporting? Exporting data requires write access. For BigQuery, this includes: bigquery.datasets.create, bigquery.datasets.get, bigquery.jobs.create, bigquery.tables.create, and bigquery.tables.export. If you do not have these permissions, please contact your database admin.

Output (None)

* export

Use this function to export your synthetic data into a database.

connector.export(
    data=synthetic_data,
    metadata=metadata,
    mode='write',
    verbose=True)

Parameters

  • (required) data: A dictionary that maps each table name to the synthetic data, represented as a pandas DataFrame

  • (required) metadata: A MultiTableMetadata object that describes the data

  • mode: The mode of writing to use during the export

    • (default) 'write': Write a new database from scratch. If the database or data already exists, then the function will error out.

    • 'append': Append rows to existing tables in the database

    • 'overwrite': Remove any existing tables in the database and replace them with this data

  • verbose: A boolean describing whether to print out details about export

    • (default) True: Print the details

    • False: Do not print anything

Output (None) Your data will be written to the database and ready for use by your downstream application!
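If you later need to add more rows to the same tables, a sketch using the 'append' mode described above might look like this, assuming the same synthesizer and metadata objects are still available.

# Generate additional synthetic rows and append them to the existing tables
more_synthetic_data = synthesizer.sample(scale=0.5)

connector.export(
    data=more_synthetic_data,
    metadata=metadata,
    mode='append',
    verbose=True)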
