sklearn.model_selection.ShuffleSplit

class sklearn.model_selection.ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None)[source]

Random permutation cross-validator

Yields indices to split data into training and test sets.

Note: contrary to other cross-validation strategies, random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets.
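
As a quick illustration of this note (a minimal sketch, not part of the official example below; the array sizes and variable names are arbitrary), the snippet counts how often each sample index lands in a test set across several random splits. Test memberships overlap across splits, unlike the disjoint test sets produced by k-fold strategies:

import numpy as np
from collections import Counter
from sklearn.model_selection import ShuffleSplit

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features

# 5 random splits, each holding out 3 of the 10 samples for testing.
ss = ShuffleSplit(n_splits=5, test_size=0.3, random_state=42)

# Count how often each sample index ends up in a test set.
counts = Counter(i for _, test in ss.split(X) for i in test)
print(counts)  # 5 x 3 = 15 test slots over 10 samples, so some index must repeat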

Read more in the User Guide.

Parameters:

n_splits : int, default=10

Number of re-shuffling & splitting iterations.

test_size : float, int, or None, default=0.1

If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is automatically set to the complement of the train size. A short example after this parameter list illustrates the float vs. int semantics.

train_size : float, int, or None, default=None

If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.

random_state : int, RandomState instance or None, default=None

Pseudo-random number generator state used for random sampling. Pass an int for reproducible splits across multiple calls.
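
To make the float/int semantics of test_size and train_size concrete, here is a minimal sketch (the data and the chosen sizes are arbitrary assumptions, not part of the reference above):

import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.zeros((8, 2))  # 8 samples

# test_size as a float: a proportion of the dataset (0.25 of 8 -> 2 test samples),
# and train_size=None defaults to the complement (6 train samples).
train, test = next(ShuffleSplit(n_splits=1, test_size=0.25, random_state=0).split(X))
print(len(train), len(test))  # 6 2

# test_size as an int: an absolute number of test samples.
train, test = next(ShuffleSplit(n_splits=1, test_size=3, random_state=0).split(X))
print(len(train), len(test))  # 5 3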

Examples

>>> import numpy as np
>>> from sklearn.model_selection import ShuffleSplit
>>> X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
>>> y = np.array([1, 2, 1, 2])
>>> rs = ShuffleSplit(n_splits=3, test_size=.25, random_state=0)
>>> rs.get_n_splits(X)
3
>>> print(rs)
ShuffleSplit(n_splits=3, random_state=0, test_size=0.25, train_size=None)
>>> for train_index, test_index in rs.split(X):
...    print("TRAIN:", train_index, "TEST:", test_index)
...  
TRAIN: [3 1 0] TEST: [2]
TRAIN: [2 1 3] TEST: [0]
TRAIN: [0 2 1] TEST: [3]
>>> rs = ShuffleSplit(n_splits=3, train_size=0.5, test_size=.25,
...                   random_state=0)
>>> for train_index, test_index in rs.split(X):
...    print("TRAIN:", train_index, "TEST:", test_index)
...  
TRAIN: [3 1] TEST: [2]
TRAIN: [2 1] TEST: [0]
TRAIN: [0 2] TEST: [3]

Methods

get_n_splits([X, y, groups]) Returns the number of splitting iterations in the cross-validator.
split(X[, y, groups]) Generate indices to split data into training and test set.
__init__(n_splits=10, test_size=0.1, train_size=None, random_state=None)[source]
get_n_splits(X=None, y=None, groups=None)[source]

Returns the number of splitting iterations in the cross-validator.

Parameters:

X : object

Always ignored, exists for compatibility.

y : object

Always ignored, exists for compatibility.

groups : object

Always ignored, exists for compatibility.

Returns:

n_splits : int

Returns the number of splitting iterations in the cross-validator.
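
For illustration (the variable names are arbitrary), get_n_splits simply reports the configured number of iterations and does not inspect its arguments:

from sklearn.model_selection import ShuffleSplit

ss = ShuffleSplit(n_splits=7)
# X, y and groups are ignored, so the call can be made with or without data.
print(ss.get_n_splits())        # 7
print(ss.get_n_splits(X=None))  # 7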

split(X, y=None, groups=None)[source]

Generate indices to split data into training and test set.

Parameters:

X : array-like, shape (n_samples, n_features)

Training data, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape (n_samples,)

The target variable for supervised learning problems.

groups : array-like, with shape (n_samples,), optional

Group labels for the samples used while splitting the dataset into train/test set.

Returns:

train : ndarray

The training set indices for that split.

test : ndarray

The testing set indices for that split.
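
Beyond iterating over split directly, a ShuffleSplit instance can also be passed as the cv argument of helpers such as cross_val_score. The following sketch assumes synthetic data and an arbitrary estimator choice:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X = np.random.RandomState(0).rand(20, 3)
y = np.array([0, 1] * 10)

cv = ShuffleSplit(n_splits=5, test_size=0.25, random_state=0)

# Use the generated indices directly ...
for train_index, test_index in cv.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

# ... or hand the cross-validator to a scoring helper.
scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
print(scores.shape)  # (5,)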
