4. Dataset transformations
scikit-learn provides a library of transformers, which may clean (see Preprocessing data), reduce (see Unsupervised dimensionality reduction), expand (see Kernel Approximation) or generate (see Feature extraction) feature representations.
Like other estimators, these are represented by classes with a fit method, which learns model parameters (e.g. mean and standard deviation for normalization) from a training set, and a transform method, which applies this transformation model to unseen data. fit_transform may be more convenient and efficient for modelling and transforming the training data simultaneously.
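A minimal sketch of this interface, using StandardScaler (which learns per-feature means and standard deviations); the small arrays are made up purely for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
X_test = np.array([[1.0, 2.0]])

scaler = StandardScaler()
scaler.fit(X_train)                         # learn mean_ and scale_ from the training set
X_train_scaled = scaler.transform(X_train)  # apply the learned transformation
X_test_scaled = scaler.transform(X_test)    # ...and reuse it on unseen data

# fit_transform combines both steps on the training data
X_train_scaled = StandardScaler().fit_transform(X_train)
```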
Combining such transformers, either in parallel or in series, is covered in Pipeline and FeatureUnion: combining estimators. Pairwise metrics, Affinities and Kernels covers transforming feature spaces into affinity matrices, while Transforming the prediction target (y) considers transformations of the target space (e.g. categorical labels) for use in scikit-learn.
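As a rough sketch of the two ways of combining transformers, using the make_pipeline and make_union helpers; the toy data below is random and purely illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import StandardScaler

X = np.random.RandomState(0).rand(20, 5)
y = np.array([0, 1] * 10)

# In series: scale, then reduce dimensionality, then classify.
pipe = make_pipeline(StandardScaler(), PCA(n_components=2), LogisticRegression())
pipe.fit(X, y)

# In parallel: apply two transformers to the same input and
# concatenate their outputs column-wise.
union = make_union(PCA(n_components=2), StandardScaler())
X_combined = union.fit_transform(X)  # shape (20, 2 + 5)
```

Both composite objects expose the same fit/transform interface as a single transformer, so they can themselves be nested inside further pipelines.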
- 4.1. Pipeline and FeatureUnion: combining estimators
- 4.2. Feature extraction
- 4.2.1. Loading features from dicts
- 4.2.2. Feature hashing
- 4.2.3. Text feature extraction
- 4.2.3.1. The Bag of Words representation
- 4.2.3.2. Sparsity
- 4.2.3.3. Common Vectorizer usage
- 4.2.3.4. Tf–idf term weighting
- 4.2.3.5. Decoding text files
- 4.2.3.6. Applications and examples
- 4.2.3.7. Limitations of the Bag of Words representation
- 4.2.3.8. Vectorizing a large text corpus with the hashing trick
- 4.2.3.9. Performing out-of-core scaling with HashingVectorizer
- 4.2.3.10. Customizing the vectorizer classes
- 4.2.4. Image feature extraction
- 4.3. Preprocessing data
- 4.4. Unsupervised dimensionality reduction
- 4.5. Random Projection
- 4.6. Kernel Approximation
- 4.7. Pairwise metrics, Affinities and Kernels
- 4.8. Transforming the prediction target (y)