Data Likelihood describes a set of metrics that calculate how likely it is that the synthetic data belongs to the same distribution as the real data. This metric uses Gaussian Mixture Models (GMMs) to make this calculation.
- Numerical: This metric is meant for continuous, numerical data.
This metric ignores any incompatible column types. It does not accept missing values.
(highest) +∞: According to the algorithm, the synthetic data has the highest possible likelihood of belonging to the real data.
(lowest) -∞: According to the algorithm, the synthetic data has the lowest possible likelihood of belonging to the real data.
There are multiple interpretations of the score. A high score can indicate high synthetic data quality but also low privacy; a low score can indicate low synthetic data quality but also high privacy.
This metric fits multiple Gaussian mixture models to learn the distribution of the real data. Each model learns to produce a likelihood estimate for every row, ranging from -∞ to +∞, where -∞ means the row is likely not part of the data and +∞ means that it is.
We apply the model to all the synthetic data and return the average likelihood score.
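The mechanism above can be sketched with scikit-learn's `GaussianMixture` (an illustrative simplification with a single model and made-up data; the actual SDMetrics internals may differ):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=(500, 2))       # stand-in "real" data
synthetic = rng.normal(loc=0.1, scale=1.1, size=(200, 2))  # stand-in "synthetic" data

# Fit the mixture model on the real data only.
gm = GaussianMixture(n_components=2, covariance_type='diag', random_state=0)
gm.fit(real)

# score_samples returns a per-row log-likelihood; averaging it over all
# synthetic rows yields a single score in the (-inf, +inf) range.
score = gm.score_samples(synthetic).mean()
```

A synthetic dataset drawn from a distribution close to the real one will tend to receive a higher average likelihood than one drawn from a very different distribution.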
Access this metric from the single_table module and use the compute method.
```python
from sdmetrics.single_table import GMLikelihood
```
- real_data: A pandas.DataFrame containing the real data.
- synthetic_data: A pandas.DataFrame containing the same columns of synthetic data.
- n_components: Number of components to use for the mixture model.
- covariance_type: A string describing the covariance type to use for the mixture models. If multiple values are passed, the best one will be searched. Defaults to 'diag'. See the sklearn API for other possible values.
- iterations: Number of times that each number of components should be evaluated before averaging the scores. Defaults to 3.
- retries: Number of times that each iteration will be retried if the mixture model crashes during fit. Defaults to 3.
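To make the search, averaging, and retry behaviour described by these parameters concrete, here is a hypothetical re-implementation using scikit-learn; the function name `gm_likelihood` and its exact loop structure are illustrative assumptions, not SDMetrics' actual code:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gm_likelihood(real, synthetic, n_components=(1, 2), covariance_types=('diag',),
                  iterations=3, retries=3):
    """Sketch: search over component counts and covariance types,
    average each configuration's score over several iterations,
    retrying a fit that crashes, and return the best average."""
    best = -np.inf
    for n in n_components:
        for cov in covariance_types:
            scores = []
            for i in range(iterations):
                for attempt in range(retries):
                    try:
                        gm = GaussianMixture(n_components=n, covariance_type=cov,
                                             random_state=i * retries + attempt)
                        gm.fit(real)
                        scores.append(gm.score_samples(synthetic).mean())
                        break  # this iteration succeeded
                    except ValueError:
                        continue  # the fit crashed; retry
            if scores:
                best = max(best, float(np.mean(scores)))
    return best

rng = np.random.default_rng(42)
real = rng.normal(size=(300, 2))
synthetic = rng.normal(size=(100, 2))
score = gm_likelihood(real, synthetic)
```

The returned value is the average log-likelihood of the synthetic rows under the best-scoring mixture configuration, matching the (-∞, +∞) range described above.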
This metric is in Beta. Be careful when using the metric and interpreting its score.
- The score heavily depends on the algorithm used to model the data. If the overall distribution of the real data cannot be learned well, then the likelihood estimates of the synthetic data may not be valid.