How do I pickle a Keras model?

Question (9 votes, 4 answers)

The official documentation states that "it is not recommended to use pickle or cPickle to save a Keras model."

However, my need to pickle a Keras model stems from hyperparameter optimization with sklearn's RandomizedSearchCV (or any other hyperparameter optimizer). Saving the results to a file is essential, since the script can then be executed remotely in a detached session, etc.

Essentially, I want to do:

trial_search = RandomizedSearchCV( estimator=keras_model, ... )
pickle.dump( trial_search, open( "trial_search.pickle", "wb" ) )
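
For context, a fuller sketch of that workflow might look like the following (illustrative only; build_model and the parameter grid are hypothetical, and it assumes the keras.wrappers.scikit_learn.KerasClassifier wrapper is available):

import pickle
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV

def build_model(units=32):
    # Hypothetical model factory consumed by the sklearn wrapper.
    model = Sequential([Dense(units, activation='relu', input_dim=20),
                        Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model

keras_model = KerasClassifier(build_fn=build_model, epochs=5, verbose=0)
trial_search = RandomizedSearchCV(estimator=keras_model,
                                  param_distributions={'units': [16, 32, 64]},
                                  n_iter=3)
# trial_search.fit(X, y)  # fit on your data, then persist the whole search object
pickle.dump(trial_search, open("trial_search.pickle", "wb"))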
python machine-learning keras pickle
4 Answers
6 votes

As of now, Keras models are pickle-able. But we still recommend using model.save() to save the model to disk.
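
For reference, both routes look roughly like this (a minimal sketch; the tiny model and the file names are placeholders):

import pickle
from keras.models import Sequential, load_model
from keras.layers import Dense

# A tiny placeholder model just to illustrate the two options.
model = Sequential([Dense(1, input_dim=4)])
model.compile(optimizer='adam', loss='mse')

# Recommended: use Keras' own serializer (architecture + weights + optimizer state).
model.save('my_model.h5')
restored = load_model('my_model.h5')

# Plain pickling also works on recent Keras versions.
with open('my_model.pickle', 'wb') as f:
    pickle.dump(model, f)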


3 votes

This works like a charm: http://zachmoshe.com/2017/04/03/pickling-keras-models.html

import types
import tempfile
import keras.models

def make_keras_picklable():
    def __getstate__(self):
        # Serialize the model to a temporary HDF5 file and keep its raw bytes.
        model_str = ""
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            keras.models.save_model(self, fd.name, overwrite=True)
            model_str = fd.read()
        d = {'model_str': model_str}
        return d

    def __setstate__(self, state):
        # Write the bytes back to a temporary HDF5 file and reload the model from it.
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            fd.write(state['model_str'])
            fd.flush()
            model = keras.models.load_model(fd.name)
        self.__dict__ = model.__dict__


    cls = keras.models.Model
    cls.__getstate__ = __getstate__
    cls.__setstate__ = __setstate__

make_keras_picklable()
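
With the patch applied, a model built the usual way round-trips through pickle (a minimal sketch continuing from the snippet above; the tiny model is just a placeholder):

import pickle
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(1, input_dim=4)])
model.compile(optimizer='adam', loss='mse')

# Sequential subclasses keras.models.Model, so it now pickles via the HDF5 detour.
restored = pickle.loads(pickle.dumps(model))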

PS. I had some trouble because my model.to_json() raised TypeError('Not JSON Serializable:', obj) due to a circular reference, and that error was swallowed by the code above, so the pickle call ran forever.


3 votes

Use get_weights and set_weights to save and load the model, respectively.

Have a look at this link: Unable to save DataFrame to HDF5 ("object header message is too large")

# For heavy model architectures the .h5 file can hit HDF5's header size
# limit, so pickle the weight arrays instead.
import pickle

weights = model.get_weights()
pklfile = "D:/modelweights.pkl"
with open(pklfile, 'wb') as fpkl:    # 'wb' works on both Python 2 and 3
    pickle.dump(weights, fpkl, protocol=pickle.HIGHEST_PROTOCOL)
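
To restore the model later, rebuild the same architecture and push the pickled weights back in with set_weights (a minimal sketch; rebuild_model() is a hypothetical helper that recreates the architecture used at training time):

import pickle

model = rebuild_model()    # hypothetical: recreate the original architecture
with open("D:/modelweights.pkl", 'rb') as fpkl:
    weights = pickle.load(fpkl)
model.set_weights(weights)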

1 vote

You can pickle a Keras neural network using the deploy-ml module, which can be installed via pip:

pip install deploy-ml

A full training and deployment of a Keras neural network with the deploy-ml wrapper looks like this:

import pandas as pd
from deployml.keras import NeuralNetworkBase


# load data 
train = pd.read_csv('example_data.csv')

# define the model
NN = NeuralNetworkBase(hidden_layers=(7, 3),
                       first_layer=len(train.keys())-1,
                       n_classes=len(train.keys())-1)

# define data for the model 
NN.data = train

# define the column in the data you're trying to predict
NN.outcome_pointer = 'paid'

# train the model, scale means that it's using a standard 
# scaler to scale the data
NN.train(scale=True, batch_size=100)

NN.show_learning_curve()

# display the recall and precision 
NN.evaluate_outcome()

# Pickle your model
NN.deploy_model(description='Keras NN',
            author="maxwell flitton", organisation='example',
            file_name='neural.sav')

The pickled file contains the model, the metrics from testing, a list of the variable names and the order they have to be entered in, the versions of Keras and Python used, and, if a scaler was used, it is stored in the file as well. The documentation is here. Loading and using the file is done as follows:

import pickle

# use pickle to load the model 
loaded_model = pickle.load(open("neural.sav", 'rb'))

# use the scaler to scale your data you want to input 
input_data = loaded_model['scaler'].transform([[1, 28, 0, 1, 30]])

# get the prediction 
loaded_model['model'].predict(input_data)[0][0]

I appreciate that the training may be a bit restrictive. Deploy-ml supports importing your own model for Sk-learn, but it is still working on that support for Keras. However, I've found that you can create a deploy-ml NeuralNetworkBase object, define your own Keras neural network outside of deploy-ml, and assign it to the deploy-ml model attribute, and this works fine:

NN = NeuralNetworkBase(hidden_layers=(7, 3),
                       first_layer=len(train.keys())-1,
                       n_classes=len(train.keys())-1)

NN.model = neural_network_you_defined_yourself
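
Here, neural_network_you_defined_yourself can be any compiled Keras model (a hypothetical sketch; the layer sizes are arbitrary and train is the DataFrame loaded earlier):

from keras.models import Sequential
from keras.layers import Dense

# A hand-rolled Keras network, defined outside deploy-ml, that can be
# assigned to NN.model as shown above.
neural_network_you_defined_yourself = Sequential([
    Dense(7, activation='relu', input_dim=len(train.keys()) - 1),
    Dense(3, activation='relu'),
    Dense(1, activation='sigmoid'),
])
neural_network_you_defined_yourself.compile(optimizer='adam',
                                            loss='binary_crossentropy',
                                            metrics=['accuracy'])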