Getting the same results on every run - Keras, Google Colab

Question · 0 votes · 1 answer

I am running the following code in Google Colab with a GPU:

import random
random.seed(1)
import numpy as np
from numpy.random import seed
seed(1)
import tensorflow as tf              # needed below for ConfigProto/Session (TF 1.x API)
from tensorflow import set_random_seed
set_random_seed(2)
import pandas as pd
from keras.models import Sequential  # make_model() below builds a Sequential model
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.layers import Flatten, Dense, Lambda, SimpleRNN
from keras.optimizers import *
from keras.utils import np_utils
from keras.initializers import *
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score, auc, precision_recall_curve
from sklearn.metrics import confusion_matrix
from keras.callbacks import EarlyStopping
from keras import backend as K
# Restrict TensorFlow to a single thread so op scheduling is deterministic
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)


## Load the training and validation dataset files; the files are the same for every run

es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)

print("***********************************************************************************************")

def make_model():
    model = Sequential()        
    model.add(Conv2D(10,(5,5), kernel_initializer=glorot_uniform(seed=1), input_shape = (22,10,1), use_bias = True, activation = "relu", strides = 1, padding = "valid"))
    model.add(MaxPooling2D(pool_size=(2,2)))    
    model.add(Flatten())
    model.add(Dense(20, kernel_initializer=glorot_uniform(seed=1), activation = "relu"))
    model.add(Lambda(lambda x: tf.expand_dims(x, axis=1)))
    model.add(SimpleRNN(20, kernel_initializer=glorot_uniform(seed=1), activation="relu",return_sequences=False))
    model.add(Dense(1, kernel_initializer=glorot_uniform(seed=1), activation="sigmoid"))    
    opti = SGD(lr = 0.01)
    model.compile(loss = "binary_crossentropy", optimizer = opti, metrics = ["accuracy"])

    return model

model = make_model()
model.fit(x_train, y_train, validation_data = (x_validation,y_validation), epochs = 50, batch_size = 20, verbose = 2, callbacks=[es])

Despite setting all the seed values, my model's predictions differ across subsequent runs. The model is trained and tested in the same Colab cell.

tensorflow google-colaboratory keras-layer tf.keras reproducible-research
1 Answer
0 votes

The floating-point numbers you are working with are multiplied and added on different threads, so the operations can happen in a different order from run to run. Floating-point addition and multiplication are not associative, so a different order of operations gives a slightly different result. See What Every Computer Scientist Should Know About Floating-Point Arithmetic.
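To illustrate the point, the grouping of floating-point additions already matters for three constants. The snippet below also shows one way to ask TensorFlow for deterministic behaviour; this is a minimal sketch assuming a TF 2.9+ runtime (tf.keras.utils.set_random_seed and tf.config.experimental.enable_op_determinism), not the TF 1.x API used in the question, where the closest options are the seed and single-thread session settings already shown, plus running on CPU.

import os
import tensorflow as tf

# Same three numbers, different grouping, different result:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False

# Reproducibility setup for TF 2.9+ (assumed runtime, not the TF 1.x code above):
os.environ["PYTHONHASHSEED"] = "0"               # ideally set before the interpreter starts
tf.keras.utils.set_random_seed(1)                # seeds Python's random, NumPy and TensorFlow
tf.config.experimental.enable_op_determinism()   # selects deterministic GPU/CPU kernels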
