Test Time Augmentations? What? How? When?
Test Time Augmentations are a set of image augmentation methods applied at inference time, before obtaining the final predictions.
Similar to Data Augmentations, Test Time Augmentations apply random modifications to the test images. For a single image, multiple augmented versions are generated, predictions are made on each, and the average of those predictions becomes the final prediction.
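The loop described above can be sketched as follows. Note that `predict` here is a hypothetical stand-in for a real model (it just maps simple image statistics through a softmax), and the flip augmentations are one possible choice; any label-preserving augmentation works.

```python
import numpy as np

def predict(image):
    # Placeholder "model": turns a few image statistics into class probabilities.
    logits = np.array([image.mean(), image.std(), image.max(), image.min(), image.sum() % 1])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def tta_predict(image):
    """Average predictions over simple flip augmentations of one image."""
    augmented = [
        image,                        # original
        np.fliplr(image),             # horizontal flip
        np.flipud(image),             # vertical flip
        np.fliplr(np.flipud(image)),  # both flips
    ]
    preds = np.stack([predict(aug) for aug in augmented])
    return preds.mean(axis=0)  # final prediction: mean over augmented copies

image = np.random.rand(32, 32)
probs = tta_predict(image)
final_class = int(np.argmax(probs))
```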
Let's say there's an image to obtain predictions on, with five possible classes A, B, C, D and E. The original prediction for it was class B, which was wrong.
After applying Test Time Augmentation, the predictions for each augmented version of the image will be ->
[0.1, 0.2, 0.4, 0.1, 0.2]
[0.5, 0.1, 0.1, 0.05, 0.25]
[0.6, 0.1, 0.1, 0.1, 0.1]
[0.4, 0.2, 0.1, 0.1, 0.2]
[0.3, 0.5, 0.1, 0.05, 0.05]
[0.7, 0.1, 0.1, 0.03, 0.07]
From here, there are two options. One is to take the predicted class of each augmented image (via argmax) and pick the final class by majority vote.
The other is to average the probability vectors across all augmented versions and then take np.argmax() of the mean. This averaging approach is usually the preferred and most common way of performing Test Time Augmentation.
Averaging all the above predictions gives the highest mean probability to class A -> output class A
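Both options can be checked numerically against the six prediction vectors above (this is a small NumPy sketch, with the class names A..E assumed as labels):

```python
import numpy as np

# The six augmented-prediction vectors from above, one row per augmentation.
preds = np.array([
    [0.1, 0.2, 0.4, 0.1, 0.2],
    [0.5, 0.1, 0.1, 0.05, 0.25],
    [0.6, 0.1, 0.1, 0.1, 0.1],
    [0.4, 0.2, 0.1, 0.1, 0.2],
    [0.3, 0.5, 0.1, 0.05, 0.05],
    [0.7, 0.1, 0.1, 0.03, 0.07],
])
classes = ["A", "B", "C", "D", "E"]

# Option 1: majority vote over per-augmentation argmax classes.
votes = np.argmax(preds, axis=1)                    # -> [2, 0, 0, 0, 1, 0]
vote_class = classes[np.bincount(votes).argmax()]   # "A" wins with 4 of 6 votes

# Option 2 (preferred): average the probabilities, then argmax.
mean_probs = preds.mean(axis=0)
avg_class = classes[int(np.argmax(mean_probs))]     # "A"
```

Both routes recover class A here, but averaging keeps the full probability information, which is why it is usually preferred.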
You can see that with the help of Test Time Augmentations, we are able to obtain the correct prediction for our test image, even though it was initially predicted incorrectly.
Here is the same idea in Keras, running an augmenting generator over the validation set several times and averaging the predictions:

```python
import numpy as np
from tqdm import tqdm

tta_steps = 10
predictions = []

# Run the augmenting generator over the validation set tta_steps times;
# shuffle=False keeps the predictions aligned with y_val.
for i in tqdm(range(tta_steps)):
    preds = model.predict_generator(
        train_datagen.flow(x_val, batch_size=bs, shuffle=False),
        steps=int(np.ceil(len(x_val) / bs)),
    )
    predictions.append(preds)

# Average the probability vectors across TTA runs, then compute accuracy.
pred = np.mean(predictions, axis=0)
accuracy = np.mean(np.equal(np.argmax(y_val, axis=-1), np.argmax(pred, axis=-1)))
```