This content is from Stack Overflow. Question asked by C. Cooney.
I have a CNN model that performs well on its training set, but performance lags when I apply it to my “production” data. I cannot use the “production” data for training because it is not labelled with the requisite outcome values; I can only gauge the performance gap by visually comparing samples.
Similarly, I note that my training dataset differs somewhat from my production data in colour tones, etc. Accordingly, I have converted both my training and scoring datasets to black and white (returning to three channels for Keras Applications compatibility).
To help my model generalize better, I am introducing more “noise” into the training process as follows:
datagen = ImageDataGenerator(
    brightness_range=[0.6, 1.0],
    zoom_range=[0.98, 1.0204],
    rotation_range=3,
    width_shift_range=0.03,
    height_shift_range=0.03,
    shear_range=0.03,
)
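For context, a minimal sketch of how a generator like this plugs into training. The model and the random arrays standing in for the dataset are assumptions for illustration, not the asker's actual CNN:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical stand-ins for the real dataset (tiny, random, 3-channel).
x_train = np.random.rand(8, 32, 32, 3).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, 8), 2)

# The augmentation settings from the question.
datagen = ImageDataGenerator(
    brightness_range=[0.6, 1.0],
    zoom_range=[0.98, 1.0204],
    rotation_range=3,
    width_shift_range=0.03,
    height_shift_range=0.03,
    shear_range=0.03,
)

# A toy model standing in for the real CNN.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# flow() yields augmented batches; fit() draws from it each epoch.
history = model.fit(datagen.flow(x_train, y_train, batch_size=4),
                    epochs=1, verbose=0)
```

Because the augmentation is applied on the fly to every batch, the model never sees the clean images, which is one plausible reason fitting slows down or stalls with aggressive settings.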
What I am noting, however, is that the model-fitting process either takes considerably longer (it is still running without any improvement in accuracy) or the model no longer fits the data well at all.
I am thinking it may be better to introduce these randomizations iteratively: first let the model fit the initial dataset without noise/translations, then introduce the translations step by step. Looking at the documentation, however, I see few examples of how to save and resume training.
Given that I am using an iterator, it should be quite easy to update my data source with a new “noise feature” on each loop, but first I am curious how to save my weights and resume training at each step of the loop.
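One sketch of a staged save/resume loop, assuming a toy model and random stand-in data (within a single Python session, calling `fit()` again already continues from the current weights; `save_weights()`/`load_weights()` make that survive across sessions). The checkpoint filename is a hypothetical choice:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Tiny stand-in data and model (assumptions; substitute the real CNN).
x_train = np.random.rand(8, 32, 32, 3).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, 8), 2)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

weights_path = "cnn_stage.weights.h5"  # hypothetical checkpoint path

# Stage 0: fit on the clean data, then checkpoint the weights.
model.fit(x_train, y_train, epochs=1, verbose=0)
model.save_weights(weights_path)

# Stage 1: restore the checkpoint and continue with augmented data.
model.load_weights(weights_path)
datagen = ImageDataGenerator(rotation_range=3, width_shift_range=0.03)
model.fit(datagen.flow(x_train, y_train, batch_size=4),
          epochs=1, verbose=0)
```

Note that `save_weights` only stores the layer weights; to also preserve the optimizer state across sessions, `model.save(...)` / `tf.keras.models.load_model(...)` would be the fuller option.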
Further, is it also possible to update the callbacks and the number of epochs on each iteration? Thanks for your help.
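Since `epochs` and `callbacks` are just arguments to each `fit()` call, they can be rebuilt freshly on every pass of the loop. A sketch under the same toy-model assumption, with a hypothetical per-stage schedule of augmentation settings and epoch counts:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Tiny stand-in data and model (assumptions; substitute the real CNN).
x_train = np.random.rand(8, 32, 32, 3).astype("float32")
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 2, 8), 2)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Hypothetical schedule: (augmentation kwargs, epochs for that stage).
schedule = [
    ({}, 1),                                           # stage 0: no noise
    ({"rotation_range": 3}, 1),                        # stage 1: mild rotation
    ({"rotation_range": 3, "shear_range": 0.03}, 2),   # stage 2: more noise
]

for stage, (aug_kwargs, n_epochs) in enumerate(schedule):
    datagen = ImageDataGenerator(**aug_kwargs)
    # Callbacks are recreated per stage, so their settings can vary too.
    callbacks = [tf.keras.callbacks.EarlyStopping(monitor="loss", patience=1)]
    model.fit(datagen.flow(x_train, y_train, batch_size=4),
              epochs=n_epochs, callbacks=callbacks, verbose=0)
```

The weights persist in `model` between iterations, so each stage continues from where the previous one stopped.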