Cross-validation is a widely used technique in machine learning. A model must be evaluated on data it has not seen during training, but a single held-out test set can turn out to be unrepresentative of the training data, in which case the model gives misleading predictions. Moreover, the available data is sometimes too small to split comfortably into separate training and test sets. Cross-validation emerged to address both of these problems.

In cross-validation, the whole dataset is divided into parts, and these parts take turns serving as training and testing data: in each round, one part is held out for testing while the remaining parts are used for training, until every part has been used for testing exactly once. As a result, both of the problems above are solved.
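For concreteness, here is a minimal Python sketch of this rotation idea. The 20-sample toy dataset and the choice of 4 parts are made up purely for illustration:

```python
# A minimal sketch of the rotation idea: every part of the data is
# used for testing exactly once. The 20-sample dataset is made up.
data = list(range(20))   # stand-in for 20 samples
n_parts = 4
part_size = len(data) // n_parts

for i in range(n_parts):
    # One part is held out for testing in this round...
    test = data[i * part_size:(i + 1) * part_size]
    # ...and the remaining parts are used for training.
    train = data[:i * part_size] + data[(i + 1) * part_size:]
    print(f"Round {i + 1}: train on {len(train)} samples, test on {len(test)}")
```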

K-Fold Cross Validation

Each of the parts into which the data is divided is called a 'fold', so K-fold means K parts. Suppose we divide our whole dataset into five parts: one part is used for testing and the other four parts for training. This is called 5-fold cross-validation.
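This splitting can be done directly with scikit-learn's KFold class. The sketch below shows 5-fold splitting; the 25-sample toy data is an assumption chosen just to make the example runnable:

```python
# A sketch of 5-fold splitting with scikit-learn's KFold.
# X and y are toy data invented for illustration.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(50).reshape(25, 2)   # 25 samples, 2 features
y = np.arange(25)

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(X), start=1):
    # In each fold, 4/5 of the samples train the model and 1/5 tests it.
    print(f"Fold {fold}: {len(train_idx)} train, {len(test_idx)} test")
```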

10-Fold Cross Validation

In practice, 10-fold cross-validation is the most common choice. Suppose we have 100 samples and want to use 10-fold cross-validation. We divide the 100 samples into 10 parts, so each part has 10 samples. In each round we train on 90 samples and test on the remaining 10, and this is repeated ten times so that every part serves as the test set exactly once.
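Here is a sketch of this 100-sample, 10-fold setup using scikit-learn's cross_val_score. The synthetic classification data and the logistic-regression model are assumptions chosen only to make the example self-contained:

```python
# A sketch of 10-fold cross-validation on 100 samples.
# The synthetic data and the model choice are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=100, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000)

# Each of the 10 rounds trains on 90 samples and tests on the remaining 10.
scores = cross_val_score(model, X, y, cv=10)
print("Accuracy per fold:", scores.round(3))
print("Mean accuracy:", scores.mean().round(3))
```

The mean of the ten fold scores gives a more stable estimate of model performance than any single 90/10 split would.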
