Error when running a neural network on an Intel Arc GPU


I am getting this error:

 File "D:\gputest\ann.py", line 50, in <module> y_pred=model.forward(X_train) ^^^^^^^^^^^^^^^^^^^^^^ File "D:\gputest\ann.py", line 26, in forward x=F.relu(self.f_connected1(x)) ^^^^^^^^^^^^^^^^^^^^ File "C:\Users\neelesh\.conda\envs\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\neelesh\.conda\envs\venv\Lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ **RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and xpu:0! (when checking argument for argument self in method wrapper_XPU_out_addmm_out)**

**This happens when I try to run the example from https://github.com/krishnaik06/Pytorch-Tutorial. The code I am using is below:**

```python
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import intel_extension_for_pytorch as ipex
from sklearn.model_selection import train_test_split

df=pd.read_csv('diabetes.csv')
X=df.drop('Outcome',axis=1).values### independent features
y=df['Outcome'].values###dependent features
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)

X_train=torch.FloatTensor(X_train)
X_test=torch.FloatTensor(X_test)
y_train=torch.LongTensor(y_train)
y_test=torch.LongTensor(y_test)

class ANN_Model(nn.Module):
    def __init__(self,input_features=8,hidden1=20,hidden2=20,out_features=2):
        super().__init__()
        self.f_connected1=nn.Linear(input_features,hidden1)
        self.f_connected2=nn.Linear(hidden1,hidden2)
        self.out=nn.Linear(hidden2,out_features)
    def forward(self,x):
        x=F.relu(self.f_connected1(x))
        x=F.relu(self.f_connected2(x))
        x=self.out(x)
        return x

####instantiate my ANN_model
torch.manual_seed(20)
model=ANN_Model()
##transferring model and data to GPU
X_train.to('xpu')
y_train.to('xpu')
model=model.to('xpu')

### Backward propagation -- define the loss function and the optimizer
loss_function=nn.CrossEntropyLoss()
optimizer=torch.optim.Adam(model.parameters(),lr=0.01)


model,optimizer=ipex.optimize(model,optimizer=optimizer)

epochs=500
final_losses=[]
for i in range(epochs):
    i=i+1
    y_pred=model.forward(X_train)
    loss=loss_function(y_pred,y_train)
    final_losses.append(loss)
    if i%10==1:
        print("Epoch number: {} and the loss : {}".format(i,loss.item()))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

```

I am using an Intel Arc A770 GPU, and I have verified that the installation is correct according to Intel's installation guide.

I get an error about at least two devices, even though I have already moved the model and the input data to `'xpu'`.

deep-learning pytorch neural-network intel
1 Answer
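In PyTorch, `Tensor.to('xpu')` is not an in-place operation: it returns a new tensor on the target device and leaves the original on the CPU. The code above discards the results of `X_train.to('xpu')` and `y_train.to('xpu')`, so the training loop still feeds CPU tensors to a model whose weights live on `xpu:0`, which is exactly the mismatch the traceback reports from `F.linear`. A minimal sketch of the fix is to assign the results back (moving the test tensors as well is an assumption here, for any later evaluation step):

```python
# Tensor.to() returns a copy on the target device; the original tensor
# stays on the CPU, so the result must be assigned back.
X_train = X_train.to('xpu')
y_train = y_train.to('xpu')
X_test = X_test.to('xpu')    # assumed: used by a later evaluation step
y_test = y_test.to('xpu')

# nn.Module.to() does move parameters in place, but reassigning keeps the
# same idiom for both modules and tensors.
model = model.to('xpu')
```

With the inputs and the model weights on the same device, the `addmm` call inside `nn.Linear` no longer sees both `cpu` and `xpu:0` tensors, and the error goes away.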