Data

PyTorch Tabular uses Pandas DataFrames as the container which holds data. As Pandas is the most popular way of handling tabular data, this was an obvious choice. Keeping ease of use in mind, PyTorch Tabular accepts dataframes as-is, i.e. there is no need to split the data into X and y as in scikit-learn.

PyTorch Tabular handles this using a DataConfig object.

Basic Usage

  • target: List[str]: A list of strings with the names of the target column(s)
  • continuous_cols: List[str]: Column names of the numeric fields. Defaults to []
  • categorical_cols: List[str]: Column names of the categorical fields to treat differently. Defaults to []

Usage Example

from pytorch_tabular.config import DataConfig

data_config = DataConfig(
    target=["label"],
    continuous_cols=["feature_1", "feature_2"],
    categorical_cols=["cat_feature_1", "cat_feature_2"],
)
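
As a rough sketch of how this config is consumed downstream, the training dataframe is passed whole, with the target column(s) still in it. The model, optimizer and trainer configs, as well as train_df and val_df, are placeholders here; they are covered in their own sections of the documentation.

from pytorch_tabular import TabularModel

# data_config is the DataConfig defined above; the other configs are
# placeholders described in their own documentation sections.
tabular_model = TabularModel(
    data_config=data_config,
    model_config=model_config,
    optimizer_config=optimizer_config,
    trainer_config=trainer_config,
)

# The dataframes are passed as-is -- no X/y split needed.
tabular_model.fit(train=train_df, validation=val_df)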

Advanced Usage

Date Columns

If you have date columns in the dataframe, mention the column names in the date_columns parameter and set encode_date_columns to True. This will extract relevant features like the month, week, quarter, etc. and add them to your feature list internally.

date_columns is not just a list of column names, but a list of (column name, freq) tuples. The freq is a standard Pandas date frequency string that denotes the lowest temporal granularity relevant to the problem.

For example, if there is a date column for the launch date of a product and products are launched only once a month, there is no sense in extracting features like week or day. So we keep the frequency at M:

date_columns = [("launch_date", "M")]
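
Wiring this into the config might look like the following sketch (the column names are illustrative):

data_config = DataConfig(
    target=["label"],
    continuous_cols=["price"],
    categorical_cols=["category"],
    date_columns=[("launch_date", "M")],
    encode_date_columns=True,
)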

Feature Transformations

Feature scaling is an almost essential step to get good performance from most machine learning algorithms, and deep learning is no exception. The normalize_continuous_features flag (which is True by default) scales the input continuous features using a StandardScaler.

Sometimes, changing the feature distributions using non-linear transformations helps machine learning/deep learning algorithms.

PyTorch Tabular offers 4 standard transformations using the continuous_feature_transform parameter:

  • yeo-johnson
  • box-cox
  • quantile_uniform
  • quantile_normal

yeo-johnson and box-cox are a family of parametric, monotonic transformations that aim to map data from any distribution to as close to a Gaussian distribution as possible, in order to stabilize variance and minimize skewness. box-cox can only be applied to strictly positive data. scikit-learn has a good write-up about them.

quantile_normal and quantile_uniform are monotonic, non-parametric transformations which aim to transform the features to a normal distribution or a uniform distribution, respectively. By performing a rank transformation, a quantile transform smooths out unusual distributions and is less influenced by outliers than scaling methods. It does, however, distort correlations and distances within and across features.
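
For example, to keep the default StandardScaler scaling and additionally map the continuous features to a normal distribution with a quantile transform, the config could look like this sketch (the column names are illustrative):

data_config = DataConfig(
    target=["label"],
    continuous_cols=["income", "age"],
    categorical_cols=["occupation"],
    normalize_continuous_features=True,              # StandardScaler; True by default
    continuous_feature_transform="quantile_normal",
)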

pytorch_tabular.config.DataConfig dataclass

Data configuration.

PARAMETER DESCRIPTION
target

A list of strings with the names of the target column(s). It is mandatory for all except SSL tasks.

TYPE: Optional[List[str]] DEFAULT: None

continuous_cols

Column names of the numeric fields. Defaults to []

TYPE: List DEFAULT: []

categorical_cols

Column names of the categorical fields to treat differently. Defaults to []

TYPE: List DEFAULT: []

date_columns

(Column name, Freq) tuples of the date fields. For example, a field named introduction_date with a monthly frequency should have an entry ('introduction_date', 'M')

TYPE: List DEFAULT: []

encode_date_columns

Whether or not to encode the derived variables from date

TYPE: bool DEFAULT: True

validation_split

Percentage of Training rows to keep aside as validation. Used only if Validation Data is not given separately

TYPE: Optional[float] DEFAULT: 0.2

continuous_feature_transform

Whether or not to transform the features before modelling. By default it is turned off. Choices are: [None, yeo-johnson, box-cox, quantile_normal, quantile_uniform].

TYPE: Optional[str] DEFAULT: None

normalize_continuous_features

Flag to normalize the input features (continuous)

TYPE: bool DEFAULT: True

quantile_noise

NOT IMPLEMENTED. If specified, fits the QuantileTransformer on data with added Gaussian noise with std = :quantile_noise: * data.std; this will cause discrete values to be more separable. Please note that this transformation does NOT apply Gaussian noise to the resulting data; the noise is only applied for the QuantileTransformer.

TYPE: int DEFAULT: 0

num_workers

The number of workers used for data loading. On Windows, always set this to 0.

TYPE: Optional[int] DEFAULT: 0

pin_memory

Whether or not to pin memory for data loading.

TYPE: bool DEFAULT: True

handle_unknown_categories

Whether or not to handle unknown or new values in categorical columns as unknown

TYPE: bool DEFAULT: True

handle_missing_values

Whether or not to handle missing values in categorical columns as unknown

TYPE: bool DEFAULT: True
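
Putting several of these options together, an illustrative configuration (all column names here are placeholders) might look like:

data_config = DataConfig(
    target=["label"],
    continuous_cols=["feature_1", "feature_2"],
    categorical_cols=["cat_feature_1"],
    date_columns=[("launch_date", "M")],
    encode_date_columns=True,
    validation_split=0.2,
    continuous_feature_transform="yeo-johnson",
    normalize_continuous_features=True,
    num_workers=4,                     # keep at 0 on Windows
    pin_memory=True,
    handle_unknown_categories=True,
    handle_missing_values=True,
)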