Utilities¶
Special Feature Classes¶
CategoricalEmbeddingTransformer¶
Bases: BaseEstimator, TransformerMixin
Source code in src/pytorch_tabular/categorical_encoders.py
__init__(tabular_model)¶
Initializes the Transformer and extracts the neural embeddings.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tabular_model | TabularModel | The trained TabularModel object | required |
Source code in src/pytorch_tabular/categorical_encoders.py
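For orientation, a minimal construction sketch; it assumes `tabular_model` is a `TabularModel` that has already been trained with `fit`:

```python
from pytorch_tabular.categorical_encoders import CategoricalEmbeddingTransformer

# `tabular_model` is assumed to be an already-trained TabularModel;
# the transformer pulls the learned embedding layers out of it.
transformer = CategoricalEmbeddingTransformer(tabular_model)
```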
fit(X, y=None)¶
fit_transform(X, y=None)¶
Encode given columns of X based on the learned embedding.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | DataFrame | DataFrame of features, shape (n_samples, n_features). Must contain columns to encode. | required |
| y | [type] | Only for compatibility. Not used. Defaults to None. | None |

Returns:

| Name | Type | Description |
|---|---|---|
| DataFrame | DataFrame | The encoded dataframe |
Source code in src/pytorch_tabular/categorical_encoders.py
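A short sketch of encoding a training DataFrame with the learned embeddings; `train` is assumed to be a DataFrame that contains the categorical columns the model was trained on:

```python
# Replace each categorical column with its learned embedding values;
# an embedding with dimension > 1 expands into multiple columns.
train_encoded = transformer.fit_transform(train)
print(train_encoded.shape)
```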
transform(X, y=None)¶
Transforms the categorical columns specified to the trained neural embedding from the model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | DataFrame | DataFrame of features, shape (n_samples, n_features). Must contain columns to encode. | required |
| y | [type] | Only for compatibility. Not used. Defaults to None. | None |

Raises:

| Type | Description |
|---|---|
| ValueError | [description] |

Returns:

| Name | Type | Description |
|---|---|---|
| DataFrame | DataFrame | The encoded dataframe |
Source code in src/pytorch_tabular/categorical_encoders.py
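Once fitted, the same transformer can encode unseen data with the same embeddings; a sketch assuming a `test` DataFrame with the same categorical columns:

```python
# Uses the embeddings captured during fit/fit_transform; nothing is re-fitted here
test_encoded = transformer.transform(test)
```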
DeepFeatureExtractor¶
Bases: BaseEstimator, TransformerMixin
Source code in src/pytorch_tabular/feature_extractor.py
__init__(tabular_model, extract_keys=['backbone_features'], drop_original=True)¶
Initializes the Transformer and extracts the neural features.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tabular_model | TabularModel | The trained TabularModel object | required |
| extract_keys | list | The keys of the features to extract. Defaults to ["backbone_features"]. | ['backbone_features'] |
| drop_original | bool | Whether to drop the original columns. Defaults to True. | True |
Source code in src/pytorch_tabular/feature_extractor.py
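A minimal construction sketch, assuming `tabular_model` is a trained `TabularModel`; `extract_keys` and `drop_original` are shown with their documented defaults:

```python
from pytorch_tabular.feature_extractor import DeepFeatureExtractor

# Extract the backbone representations and drop the raw input columns
dt = DeepFeatureExtractor(
    tabular_model,
    extract_keys=["backbone_features"],
    drop_original=True,
)
```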
fit(X, y=None)¶
fit_transform(X, y=None)¶
Encode given columns of X based on the learned features.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | DataFrame | DataFrame of features, shape (n_samples, n_features). Must contain columns to encode. | required |
| y | [type] | Only for compatibility. Not used. Defaults to None. | None |

Returns:

| Type | Description |
|---|---|
| DataFrame | pd.DataFrame: The encoded dataframe |
Source code in src/pytorch_tabular/feature_extractor.py
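A sketch of pulling the learned features out of a DataFrame; `train` is assumed to hold the same columns the model was trained on:

```python
# Adds one column per extracted feature dimension
# (the exact column naming is an assumption; inspect the returned DataFrame)
train_features = dt.fit_transform(train)
```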
load_from_object_file(path)¶
Loads the feature extractor from a pickle file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | The path to load the file from | required |
Source code in src/pytorch_tabular/feature_extractor.py
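A loading sketch; the path is illustrative, and whether the method restores state onto an existing instance or acts as a constructor-style helper is not shown above, so the in-place form below is an assumption:

```python
# Assumed usage: restore a previously saved extractor onto an existing instance
dt = DeepFeatureExtractor(tabular_model)
dt.load_from_object_file("feature_extractor.pkl")
```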
save_as_object_file(path)¶
Saves the feature extractor as a pickle file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | The path to save the file | required |
Source code in src/pytorch_tabular/feature_extractor.py
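A sketch of persisting a fitted extractor for later reuse (the path is illustrative):

```python
# Serialize the fitted extractor to disk as a pickle file
dt.save_as_object_file("feature_extractor.pkl")
```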
transform(X, y=None)¶
Transforms the categorical columns specified to the trained neural features from the model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| X | DataFrame | DataFrame of features, shape (n_samples, n_features). Must contain columns to encode. | required |
| y | [type] | Only for compatibility. Not used. Defaults to None. | None |

Raises:

| Type | Description |
|---|---|
| ValueError | [description] |

Returns:

| Type | Description |
|---|---|
| DataFrame | pd.DataFrame: The encoded dataframe |
Source code in src/pytorch_tabular/feature_extractor.py
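Applying the fitted extractor to unseen data, as a sketch; `test` is assumed to have the same schema as the training data:

```python
# Re-uses the features learned by the trained model; no re-fitting happens here
test_features = dt.transform(test)
```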
Data Utilities¶
load_covertype_dataset(download_dir=None)¶
Predicting forest cover type from cartographic variables only (no remotely sensed data). The actual forest cover type for a given observation (30 x 30 meter cell) was determined from US Forest Service (USFS) Region 2 Resource Information System (RIS) data. Independent variables were derived from data originally obtained from US Geological Survey (USGS) and USFS data. Data is in raw form (not scaled) and contains binary (0 or 1) columns of data for qualitative independent variables (wilderness areas and soil types).
This study area includes four wilderness areas located in the Roosevelt National Forest of northern Colorado. These areas represent forests with minimal human-caused disturbances, so that existing forest cover types are more a result of ecological processes than of forest management practices.
The data is from the UCI ML Repository, with one small change: the one-hot encoded columns (Soil Type and Wilderness Type) are converted to categorical columns.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| download_dir | str | Directory to download the data to. Defaults to None, which will download to ~/.pytorch_tabular/datasets/ | None |
Source code in src/pytorch_tabular/utils/data_utils.py
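A usage sketch; the four-tuple return (DataFrame, categorical column names, continuous column names, target column name) is an assumption based on how the loader is used in the library's tutorials:

```python
from pytorch_tabular.utils import load_covertype_dataset

# Downloads (and caches) the dataset, returning the frame plus column metadata
data, cat_col_names, num_col_names, target_col = load_covertype_dataset()
print(data.shape, target_col)
```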
make_mixed_dataset(task, n_samples, n_features=7, n_categories=2, n_informative=5, random_state=42, n_targets=None, **kwargs)¶
Creates a synthetic dataset with mixed data types.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| task | str | Either "classification" or "regression" | required |
| n_samples | int | Number of samples to generate | required |
| n_features | int | Number of total features to generate | 7 |
| n_categories | int | Number of features to be categorical | 2 |
| n_informative | int | Number of informative features | 5 |
| random_state | int | Random seed for reproducibility | 42 |
| n_targets | int | Number of targets to generate. n_targets>1 will generate a multi-target dataset for regression and a multi-class dataset for classification. Defaults to 2 classes for classification and 1 target for regression. | None |
| kwargs | | Additional arguments to pass to the make_classification or make_regression function | {} |
Source code in src/pytorch_tabular/utils/data_utils.py
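A sketch of generating a synthetic classification frame; the three-tuple return (DataFrame, categorical column names, continuous column names) is an assumption based on the library's tutorials:

```python
from pytorch_tabular.utils import make_mixed_dataset

# Generate a mixed numeric/categorical dataset for quick experiments
data, cat_col_names, num_col_names = make_mixed_dataset(
    task="classification",
    n_samples=10_000,
    n_features=20,
    n_categories=4,
)
```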
NN Utilities¶
Resets all parameters in a network.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | Module | The model to reset the parameters of. | required |
References:
- https://discuss.pytorch.org/t/how-to-re-set-alll-parameters-in-a-network/20819/6
- https://stackoverflow.com/questions/63627997/reset-parameters-of-a-neural-network-in-pytorch
- https://pytorch.org/docs/stable/generated/torch.nn.Module.html
Source code in src/pytorch_tabular/utils/nn_utils.py
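A sketch of resetting a model in place; the exported name `reset_all_weights` is an assumption, since the function name is not shown in the extracted docs above:

```python
import torch.nn as nn
from pytorch_tabular.utils import reset_all_weights  # assumed export name

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Re-runs reset_parameters() on every submodule that defines it
reset_all_weights(model)
```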
to_one_hot(y, depth)¶
Takes an integer tensor with n dims and converts it to a one-hot representation with n + 1 dims. The (n+1)'th dimension has zeros everywhere except at the y'th index, where it is equal to 1.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| y | IntTensor, LongTensor or Variable | Input integer tensor of any shape | required |
| depth | int | The size of the one hot dimension | required |
Source code in src/pytorch_tabular/utils/nn_utils.py
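A sketch of the one-hot helper; the import path below follows the source file shown above and is an assumption:

```python
import torch
from pytorch_tabular.utils.nn_utils import to_one_hot  # assumed import path

y = torch.tensor([0, 2, 1])        # shape (3,)
one_hot = to_one_hot(y, depth=3)   # shape (3, 3), with a 1 at each label's index
```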
Python Utilities¶
Loads a checkpoint.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path_or_url | Union[IO, _PATH] | Path or URL of the checkpoint. | required |
| map_location | _MAP_LOCATION_TYPE | A function, torch.device, string or a dict specifying how to remap storage locations. | None |
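The loader's name is not shown in the extracted docs above; assuming it is the pl_load-style checkpoint helper re-exported by pytorch_tabular.utils, a sketch would look like this (import path and name are assumptions):

```python
import torch
from pytorch_tabular.utils import pl_load  # assumed export; name not shown above

# Load a checkpoint onto CPU regardless of the device it was saved from
checkpoint = pl_load("path/to/model.ckpt", map_location=torch.device("cpu"))
```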