Question on creating batches while training Neural Network models


I have come across the idea of splitting your training set into batches while training a Neural Network model.

  1. Is it a good idea to create batches of your data while training?
  2. How does it improve our model?
  3. Is it standard practice to use batch sizes such as 32, 64, 128, 256, …? If yes, then why?

Yes, it is a good idea. Training in mini-batches gives you more frequent weight updates, and the noise in those updates makes the steps toward the minimum of the loss function oscillate, which can help the optimizer escape local minima. It is also faster and less memory intensive than processing the entire training set in one pass. The common batch sizes (32, 64, 128, 256, …) are powers of two, which is largely a convention motivated by memory alignment on GPU hardware.
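As a minimal sketch of how batching is typically done (the function name and NumPy-based setup here are illustrative, not from any particular framework): shuffle the sample indices once per epoch, then slice them into consecutive chunks of `batch_size`.

```python
import numpy as np

def iterate_minibatches(X, y, batch_size=32, shuffle=True, seed=0):
    """Yield successive (X, y) mini-batches; the last batch may be smaller."""
    idx = np.arange(len(X))
    if shuffle:
        # Shuffling each epoch decorrelates consecutive batches,
        # which is part of why mini-batch updates are noisy.
        rng = np.random.default_rng(seed)
        rng.shuffle(idx)
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

# 100 samples with batch_size=32 -> batches of size 32, 32, 32, 4
X = np.random.randn(100, 5)
y = np.random.randint(0, 2, size=100)
sizes = [len(xb) for xb, yb in iterate_minibatches(X, y, batch_size=32)]
```

In practice, libraries such as PyTorch (`DataLoader`) or TensorFlow (`tf.data.Dataset.batch`) handle this for you, but the idea is the same.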