Any ideas on "OSError: [Errno 22] Invalid argument" from pickle.dump?


Below is my code. In it I try to split the data from a ".p" file and normalize it with different settings. The splitting seems to work, but I can't save the results back to a ".p" file with pickle.dump. Any suggestions about this error?

import numpy as np
import pandas as pd
import pickle 
import gzip


# in this example tanh normalization is used
# fold 0 is used for testing and fold 1 for validation (hyperparameter selection)
norm = 'tanh'
test_fold = 0
val_fold = 1

def normalize(X, means1=None, std1=None, means2=None, std2=None, feat_filt=None, norm='tanh_norm'):
    if std1 is None:
        std1 = np.nanstd(X, axis=0)
    if feat_filt is None:
        feat_filt = std1!=0
    X = X[:,feat_filt]
    X = np.ascontiguousarray(X)
    if means1 is None:
        means1 = np.mean(X, axis=0)
    X = (X-means1)/std1[feat_filt]
    if norm == 'norm':
        return(X, means1, std1, feat_filt)
    elif norm == 'tanh':
        return(np.tanh(X), means1, std1, feat_filt)
    elif norm == 'tanh_norm':
        X = np.tanh(X)
        if means2 is None:
            means2 = np.mean(X, axis=0)
        if std2 is None:
            std2 = np.std(X, axis=0)
        X = (X-means2)/std2
        X[:,std2==0]=0
        return(X, means1, std1, means2, std2, feat_filt)

#contains the data in both feature ordering ways (drug A - drug B - cell line and drug B - drug A - cell line)
#in the first half of the data the features are ordered (drug A - drug B - cell line)
#in the second half of the data the features are ordered (drug B - drug A - cell line)
file = gzip.open('X.p.gz', 'rb')
X = pickle.load(file)
file.close()



#contains synergy values and fold split (numbers 0-4)
labels = pd.read_csv('labels.csv', index_col=0) 
#labels are duplicated for the two different ways of ordering in the data
labels = pd.concat([labels, labels])



#indices of training data for hyperparameter selection: fold 2, 3, 4
idx_tr = np.where(np.logical_and(labels['fold']!=test_fold, labels['fold']!=val_fold))
#indices of validation data for hyperparameter selection: fold 1
idx_val = np.where(labels['fold']==val_fold)

#indices of training data for model testing: fold 1, 2, 3, 4
idx_train = np.where(labels['fold']!=test_fold)
#indices of test data for model testing: fold 0
idx_test = np.where(labels['fold']==test_fold)



X_tr = X[idx_tr]
X_val = X[idx_val]
X_train = X[idx_train]
X_test = X[idx_test]

y_tr = labels.iloc[idx_tr]['synergy'].values
y_val = labels.iloc[idx_val]['synergy'].values
y_train = labels.iloc[idx_train]['synergy'].values
y_test = labels.iloc[idx_test]['synergy'].values


if norm == "tanh_norm":
    X_tr, mean, std, mean2, std2, feat_filt = normalize(X_tr, norm=norm)
    X_val, mean, std, mean2, std2, feat_filt = normalize(X_val, mean, std, mean2, std2, 
                                                      feat_filt=feat_filt, norm=norm)
else:
X_tr, mean, std, feat_filt = normalize(X_tr, norm=norm)
X_val, mean, std, feat_filt = normalize(X_val, mean, std, feat_filt=feat_filt, norm=norm)


if norm == "tanh_norm":
X_train, mean, std, mean2, std2, feat_filt = normalize(X_train, norm=norm)
X_test, mean, std, mean2, std2, feat_filt = normalize(X_test, mean, std, mean2, std2, 
                                                      feat_filt=feat_filt, norm=norm)
else:
X_train, mean, std, feat_filt = normalize(X_train, norm=norm)
X_test, mean, std, feat_filt = normalize(X_test, mean, std, feat_filt=feat_filt, norm=norm)

pickle.dump((X_tr, X_val, X_train, X_test, y_tr, y_val, y_train, y_test), open('data_test_fold%d_%s.p'%(test_fold, norm), 'wb'))

I think the last two lines are the most likely culprit, but the error could also be triggered by a mistake somewhere else.
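One way to narrow it down (a diagnostic sketch added here, not part of the original post) is to serialize the tuple in memory first and look at its size. If it is in the multi-gigabyte range, the failure is much more likely to come from how much data a single dump/write call has to handle than from the data itself:

# Hypothetical size check: pickle to bytes first, then inspect the length.
data = (X_tr, X_val, X_train, X_test, y_tr, y_val, y_train, y_test)
blob = pickle.dumps(data, protocol=4)   # protocol 4 supports objects larger than 4 GB
print('pickled size: %.2f GB' % (len(blob) / 1e9))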

python pickle dump
3 Answers

8 votes

This is most likely caused by a bug in the pickle implementation that prevents it from producing files larger than 4 GB. See:

Python 3 - Can pickle handle byte objects larger than 4GB?
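If the pickled data really is that large, a possible workaround (a sketch based on this answer's diagnosis, not code from the answer itself; dump_in_chunks is a helper name introduced here) is to pickle with protocol 4, which lifts the 4 GB limit, and to write the resulting bytes in smaller chunks, since on some platforms (macOS in particular) a single write of more than about 2 GB can itself fail with Errno 22:

import pickle

def dump_in_chunks(obj, path, chunk_size=2**30):
    # Serialize with protocol 4 (large-object support), then write the
    # bytes in 1 GB pieces to avoid one oversized write() call.
    blob = pickle.dumps(obj, protocol=4)
    with open(path, 'wb') as f:
        for i in range(0, len(blob), chunk_size):
            f.write(blob[i:i + chunk_size])

# dump_in_chunks((X_tr, X_val, X_train, X_test, y_tr, y_val, y_train, y_test),
#                'data_test_fold%d_%s.p' % (test_fold, norm))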


0 votes

I hit this error with pickle.load() on Windows 10 with Anaconda Python 3.8.5. It turned out the file I was trying to read lived in a OneDrive folder and was not yet available locally on my machine. I waited a few seconds for OneDrive to finish downloading it, reran the code, and the problem was gone.

This surprised me. I had always assumed that operations on OneDrive files would wait until the file was fetched from the cloud rather than fail, but apparently the opposite is true.


0 votes

Python was trying to access a pickle file on my OneDrive while I was not signed in to OneDrive. The problem went away once I signed in.
