Cross-validation is a widely used term in machine learning. A model usually needs held-out testing data for validation. If the testing data differs substantially from the training data, the model's predictions can be badly wrong; and sometimes there is simply not enough data for both training and testing. Cross-validation was developed to address these problems.

In cross-validation, the same dataset is split into a training part and a testing part, and the roles are then alternated: the data used for training is later used for testing, and vice versa. This addresses both of the problems above.
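In its simplest form, this swap is a two-round scheme: split the data in half, train on one half and test on the other, then exchange the roles. A minimal sketch in plain Python (the variable names are illustrative, not from any library):

```python
data = list(range(10))            # 10 example samples
half = len(data) // 2
first, second = data[:half], data[half:]

# Round 1: train on the first half, test on the second.
# Round 2: swap the roles so every sample is tested once.
rounds = [(first, second), (second, first)]
```

Across the two rounds, every sample is used exactly once for testing and once for training.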

K-Fold Cross Validation

Each of the parts the data is divided into for training and testing is called a fold, and K is the number of folds. Say we divide the data into five parts, use one part for testing and the other four for training, and rotate through the parts so each one serves once as the test set. This is called 5-fold cross-validation.
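The splitting described above can be sketched in a few lines of plain Python (no library assumed; `k_fold_indices` is a hypothetical helper name, not a standard function):

```python
def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k consecutive folds.

    Each fold is used once as the test set while the
    remaining k-1 folds together form the training set.
    """
    # Distribute any remainder so fold sizes differ by at most 1.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    splits = []
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        splits.append((train, test))
        start += size
    return splits

# 5-fold split of 20 samples: each round trains on 16 and tests on 4.
splits = k_fold_indices(20, 5)
```

Note that every sample lands in exactly one test fold, so each sample is tested exactly once over the K rounds.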

10-Fold Cross Validation

In practice, 10-fold cross-validation is the most common choice. Say we have 100 samples and want to 10-fold cross-validate them. We divide the 100 samples into 10 parts, so each part holds 10 samples. In each round we use 90 samples for training and 10 for testing, and we repeat this 10 times so that every part serves once as the test set.
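The 100-sample arithmetic above can be checked with a short sketch in plain Python (no library assumed):

```python
n_samples, k = 100, 10
fold_size = n_samples // k        # 100 / 10 = 10 samples per fold

rounds = []
for i in range(k):
    # Fold i is the test set; everything else is the training set.
    test = set(range(i * fold_size, (i + 1) * fold_size))
    train = [j for j in range(n_samples) if j not in test]
    rounds.append((len(train), len(test)))   # (90, 10) each round
```

Each of the 10 rounds trains on 90 samples and tests on the remaining 10, exactly as described.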
