Keras `steps=None` error even when using the Sequence class


I am trying to do some custom training with Keras on the TensorFlow backend. I feed data through fit_generator(), and my generator is a class derived from keras.utils.Sequence.

gen = PitsSequence( PITS_PATH, nP=nP, nN=nN, n_samples=n_samples, initial_epoch=initial_epoch, image_nrows=image_nrows, image_ncols=image_ncols, image_nchnl=image_nchnl )
gen_validation = PitsSequence( PITS_VAL_PATH, nP=nP, nN=nN, n_samples=n_samples, image_nrows=image_nrows, image_ncols=image_ncols, image_nchnl=image_nchnl )

history = t_model.fit_generator( generator = gen,
                            epochs=2200, verbose=1, 
                            initial_epoch=initial_epoch,
                            validation_data = gen_validation ,
                            callbacks=[tb,saver_cb,reduce_lr],
                            use_multiprocessing=True, workers=0,
                         )

However, when I run it, I get the following error.

Epoch 1/2200
m_int_logr= ./models.keras/tmp/
12/13 [==========================>...] - ETA: 1s - loss: 1.7347 - allpair_count_goodfit: 0.0000e+00 - positive_set_deviation: 0.0039Traceback (most recent call last):
  File "noveou_train_netvlad_v3.py", line 260, in <module>
    use_multiprocessing=True, workers=0,
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1415, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_generator.py", line 230, in fit_generator
    workers=0)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1469, in evaluate_generator
    verbose=verbose)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_generator.py", line 298, in evaluate_generator
    raise ValueError('`steps=None` is only valid for a generator'
ValueError: `steps=None` is only valid for a generator based on the `keras.utils.Sequence` class. Please specify `steps` or use the `keras.utils.Sequence` class.
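For context, the check that raises this error can be sketched roughly as follows. This is a simplified illustration, not the actual Keras 2.2.2 source: `infer_steps`, `DummySequence`, and `plain_generator` are invented names. The point is that Keras can only infer the number of batches from an object that exposes `__len__` (as `keras.utils.Sequence` does); if what reaches `evaluate_generator` behaves like a plain generator, `steps` cannot be inferred and the `ValueError` is raised.

```python
# Simplified sketch of the steps-inference logic behind the error
# (illustrative only, not the real Keras implementation).
def infer_steps(generator, steps=None):
    if steps is None:
        # Sequence-like objects define __len__, so the batch count
        # can be derived; plain generators cannot be measured.
        if hasattr(generator, '__len__'):
            return len(generator)
        raise ValueError('`steps=None` is only valid for a generator '
                         'based on the `keras.utils.Sequence` class. '
                         'Please specify `steps` or use the '
                         '`keras.utils.Sequence` class.')
    return steps


class DummySequence(object):
    """Stands in for a keras.utils.Sequence: it has a length."""
    def __len__(self):
        return 13


def plain_generator():
    """Stands in for a raw Python generator: no length available."""
    while True:
        yield None


steps_ok = infer_steps(DummySequence())   # inferred from __len__ -> 13
steps_explicit = infer_steps(plain_generator(), steps=5)  # explicit -> 5
```

So the error message indicates that, by the time validation runs, the validation data is no longer being treated as a `Sequence`.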

How can I solve this? Keras version: 2.2.2. TensorFlow version: 1.11.0.

Here is the implementation of the PitsSequence class; it relies on two external functions:

self.pr = PittsburgRenderer( PTS_BASE )
        self.D = self.pr.step_n_times(n_samples=self.n_samples_pitts, nP=nP, nN=nN, resize=self.resize, return_gray=self.return_gray, ENABLE_IMSHOW=False )

and

self.D = do_typical_data_aug( self.D )
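The implementation of `do_typical_data_aug` is external and not shown in the question. As a purely hypothetical illustration of what such a pass might do (the function name suffix, the flip transform, and the array layout are all assumptions, not the actual code), a "typical" augmentation could append mirrored copies of each sample set:

```python
import numpy as np

def do_typical_data_aug_sketch(D):
    # Hypothetical stand-in for the external do_typical_data_aug().
    # Assumes D is a list of arrays shaped (n, rows, cols, channels);
    # appends horizontally flipped copies, doubling the dataset.
    flipped = [d[:, :, ::-1, :] for d in D]
    return D + flipped

# Example: 3 sample sets of 6 images each, 240x320 grayscale.
D = [np.zeros((6, 240, 320, 1)) for _ in range(3)]
D_aug = do_typical_data_aug_sketch(D)  # now 6 entries
```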

Here is the class:


class PitsSequence(keras.utils.Sequence):
    """  This class depends on CustomNets.dataload_ for loading data. """
    def __init__(self, PTS_BASE, nP, nN, n_samples=500, initial_epoch=0, image_nrows=240, image_ncols=320, image_nchnl=1 ):

        # assert( type(n_samples) == type(()) )
        self.n_samples_pitts = int(n_samples)
        self.epoch = initial_epoch
        self.batch_size = 4
        self.refresh_data_after_n_epochs = 20
        self.nP = nP
        self.nN = nN
        # self.n_samples = n_samples
        print tcolor.OKGREEN, '-------------PitsSequence Config--------------', tcolor.ENDC
        print 'n_samples  : ', self.n_samples_pitts
        print 'batch_size : ', self.batch_size
        print 'refresh_data_after_n_epochs : ', self.refresh_data_after_n_epochs
        print 'image_nrows: ', image_nrows, '\timage_ncols: ', image_ncols, '\timage_nchnl: ', image_nchnl
        print '# positive samples (nP) = ', self.nP
        print '# negative samples (nN) = ', self.nN
        print tcolor.OKGREEN, '----------------------------------------------', tcolor.ENDC


        self.resize = (image_ncols, image_nrows)
        if image_nchnl == 3:
            self.return_gray = False
        else :
            self.return_gray = True


        # PTS_BASE = '/Bulk_Data/data_Akihiko_Torii/Pitssburg/'
        self.pr = PittsburgRenderer( PTS_BASE )
        self.D = self.pr.step_n_times(n_samples=self.n_samples_pitts, nP=nP, nN=nN, resize=self.resize, return_gray=self.return_gray, ENABLE_IMSHOW=False )
        print 'len(D)=', len(self.D), '\tD[0].shape=', self.D[0].shape
        self.y = np.zeros( len(self.D) )
        self.steps = int(np.ceil(len(self.D) / float(self.batch_size)))



    def __len__(self):
        return int(np.ceil(len(self.D) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.D[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]

        return np.array( batch_x ), np.array( batch_y )
        # return np.array( batch_x )*1./255. - 0.5, np.array( batch_y )
       # TODO: Could also return a per-sample weight, judged e.g. by a GMS matcher: more matches among the positive set => good positive samples.


    def on_epoch_end(self):
        N = self.refresh_data_after_n_epochs

        if self.epoch % N == 0 and self.epoch > 0 :
            print '[on_epoch_end] done %d epochs, so load new data\t' %(N), int_logr.dir()
            # Sample Data
            # self.D = dataload_( n_tokyoTimeMachine=self.n_samples_tokyo, n_Pitssburg=self.n_samples_pitts, nP=nP, nN=nN )

            self.D = self.pr.step_n_times(n_samples=self.n_samples_pitts, nP=self.nP, nN=self.nN, resize=self.resize, return_gray=self.return_gray, ENABLE_IMSHOW=False )
            print 'len(D)=', len(self.D), '\tD[0].shape=', self.D[0].shape


            # if self.epoch > 400:
            if self.epoch > 400 and self.n_samples_pitts<0:
                # Data augmentation after 400 epochs. Only applied to the Tokyo data used for training, i.e. don't augment Pittsburgh.
                self.D = do_typical_data_aug( self.D )

            print 'dataload_ returned len(self.D)=', len(self.D), 'self.D[0].shape=', self.D[0].shape
            self.y = np.zeros( len(self.D) )
            # modify data
        self.epoch += 1
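The `__len__`/`__getitem__` pair above implements standard ceil-division batching. Decoupled from Keras, the arithmetic can be sketched with plain NumPy (the sample count 50 here is arbitrary, chosen only to produce a short final batch):

```python
import numpy as np

batch_size = 4
D = np.zeros((50, 240, 320, 1))  # 50 samples of 240x320 grayscale images

# Number of batches, rounding up so the final partial batch is counted.
steps = int(np.ceil(len(D) / float(batch_size)))  # ceil(50 / 4) = 13

# Slicing past the end of the array is safe in NumPy: the final batch
# simply comes back shorter (here, 2 samples instead of 4).
last_batch = D[(steps - 1) * batch_size: steps * batch_size]
```

This is why `__len__` is all Keras needs to infer `steps` from a `Sequence`.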


python python-2.7 tensorflow keras deep-learning
1 Answer

I think your problem lies in the combination of `use_multiprocessing=True` and `workers=0`. If you look at the documentation, you can read about how these settings interact. Hope that helps.
