KeyUniqueness

This metric measures whether the keys in a particular dataset are unique. We expect that certain types of keys, such as primary keys, are always unique in order to be valid.

Data Compatibility

  • ID : This metric is meant for ID data

  • Other : This metric can work with any other semantic data type that is used in place of an ID, such as a natural key (for example, an email address)

Score

(best) 1.0: All of the key values in the synthetic data are unique

(worst) 0.0: None of the key values in the synthetic data are unique

How does it work?

This metric measures how many values in the synthetic data, s, are duplicates, meaning that another value in the column is exactly the same. Call this set D_s. The score is the proportion of values that are not duplicates.

score = 1 - \frac{|D_s|}{|s|}
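
To illustrate the calculation, here is a minimal sketch using plain pandas. The function name key_uniqueness_score is hypothetical and for illustration only; the library's own implementation may handle edge cases (such as missing values) differently.

import pandas as pd

def key_uniqueness_score(synthetic_column: pd.Series) -> float:
    # keep=False flags every occurrence of a repeated value,
    # so the sum below is |D_s| from the formula above
    duplicates = synthetic_column.duplicated(keep=False)
    return 1 - duplicates.sum() / len(synthetic_column)

# Example: the two 'a' values are duplicates, so the score is 1 - 2/5 = 0.6
key_uniqueness_score(pd.Series(['a', 'a', 'b', 'c', 'd']))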

Usage

Recommended Usage: The Diagnostic Report applies this metric to applicable keys (primary and alternate keys).

To manually run this metric, access the single_column module and use the compute method.

from sdmetrics.single_column import KeyUniqueness

KeyUniqueness.compute(
    real_data=real_table['primary_key_name'],
    synthetic_data=synthetic_table['primary_key_name']
)

Parameters

  • (required) real_data: A pandas.Series object with the column of real data

  • (required) synthetic_data: A pandas.Series object with the column of synthetic data

FAQ

Should the score always be 1?

If you are running this metric on a primary key, then the score should always be 1, because primary keys are expected to be unique.

If you are running this metric on a foreign key, then the score may not be 1, since foreign keys are allowed to repeat. For foreign keys, we recommend using the ReferentialIntegrity metric instead.
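
As a rough sketch, ReferentialIntegrity is run on a pair of columns (the parent table's primary key and the child table's foreign key) rather than a single column. The module path and tuple arguments below follow the SDMetrics column_pairs convention, and the table and column names are placeholders; check the ReferentialIntegrity documentation for the exact signature.

from sdmetrics.column_pairs import ReferentialIntegrity

# Placeholder table and column names; substitute your own
ReferentialIntegrity.compute(
    real_data=(real_parent_table['primary_key_name'], real_child_table['foreign_key_name']),
    synthetic_data=(synthetic_parent_table['primary_key_name'], synthetic_child_table['foreign_key_name'])
)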

Does this metric use the real data?

This metric checks to see if the real data also has unique values and alerts you if this is not the case. However, the final score is only based on the synthetic data.
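
If you want to verify the real data yourself before running the metric, a quick pandas check is enough. The column name below is the placeholder from the usage example above.

# True if every real key value appears exactly once
real_table['primary_key_name'].is_unique

# Show any repeated real key values for inspection
real_table['primary_key_name'][real_table['primary_key_name'].duplicated(keep=False)]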
