Data Likelihood describes a set of metrics that calculate the likelihood that the synthetic data belongs to the real data. These metrics use a Bayesian Network to make this calculation.
- Categorical: This metric is meant for discrete, categorical data
- Boolean: This metric works on boolean data
This metric ignores any incompatible column types.
This metric does not accept missing values.
(highest) 1.0: According to the algorithm, the synthetic data has the highest possible likelihood of belonging to the real data
(lowest) 0.0: According to the algorithm, the synthetic data has the lowest possible likelihood of belonging to the real data
There are multiple interpretations of the score. A high score can indicate high synthetic data quality as well as low privacy. A low score can indicate low synthetic data quality as well as high privacy.
This metric uses a Bayesian Network from pomegranate to learn the distribution of the real data. The model learns to produce a likelihood estimate for every row ranging from 0 to 1, where 0 means the row is likely not part of the data and 1 means that it is.
We apply the model to all the synthetic data and return the average likelihood score.
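The scoring step described above can be sketched in isolation. The snippet below is an illustrative stand-in, not the library's implementation: instead of a Bayesian Network it uses the empirical joint distribution of the real rows as the toy model, but the per-row likelihood lookup and the final averaging mirror the metric's logic.

```python
# Toy illustration of "likelihood of each synthetic row, then average".
# The model here is a simple frequency table over the real rows; the real
# metric would use a fitted Bayesian Network instead.
from collections import Counter

def fit_empirical(real_rows):
    """Estimate P(row) from frequencies in the real data (toy model)."""
    counts = Counter(real_rows)
    total = len(real_rows)
    return {row: c / total for row, c in counts.items()}

def average_likelihood(model, synthetic_rows):
    """Mean per-row likelihood; rows never seen in the real data score 0."""
    scores = [model.get(row, 0.0) for row in synthetic_rows]
    return sum(scores) / len(scores)

# Each row is a tuple of categorical/boolean values (e.g. two columns).
real = [("a", True), ("a", True), ("b", False), ("a", False)]
synthetic = [("a", True), ("b", False), ("c", True)]

model = fit_empirical(real)
print(average_likelihood(model, synthetic))  # (0.5 + 0.25 + 0.0) / 3 = 0.25
```

Because every per-row likelihood lies in [0, 1], the averaged score is also bounded by 0 and 1.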
You will need to install the pomegranate library in order to use this metric:

```bash
pip install pomegranate
```
Access this metric from the single_table module and use the compute method.

```python
from sdmetrics.single_table import BNLikelihood

score = BNLikelihood.compute(real_data, synthetic_data)
```
- real_data: A pandas.DataFrame containing the real data
- synthetic_data: A pandas.DataFrame containing the same columns of synthetic data
- structure: The BayesianNetwork structure to use when fitting to the real data. If not passed, learn it from the data using the Chow-Liu algorithm.
This metric is in Beta. Be careful when using the metric and interpreting its score.
- The score heavily depends on the algorithm used to model the data. If the overall distribution of the real data cannot be learned well, then the likelihood estimates of the synthetic data may not be valid.
- There are multiple interpretations for this metric. (See the Score section above.) Of course, this is heavily dependent on how well we trust the algorithm to model the real data.