ANN training in PyTorch gives me a constant loss


I am learning PyTorch. When I run this code in a Jupyter cell, the loss stays exactly the same at every epoch. It should be going up or down. Why is this happening?

import torch
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd

class Model(nn.Module):
    def __init__(self, in_features=4, h1=8, h2=9, out_features=3):
        # Initialize the inherited nn.Module class
        super().__init__()
        # Input layer (4 features) --> hidden layer 1 --> hidden layer 2 --> output (3 Iris classes)
        # Fully connected layers: fc1 connects in_features to h1, fc2 connects h1 to h2,
        # and out connects h2 to out_features. All sizes are configurable via the __init__ parameters.
        self.fc1 = nn.Linear(in_features, h1)
        self.fc2 = nn.Linear(h1, h2)
        self.out = nn.Linear(h2, out_features)

    # Forward propagation starts here
    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.out(x)
        return x
torch.manual_seed(32)
model = Model()

df = pd.read_csv('iris.csv')
df.tail()
X = df.drop('target', axis=1).values
y = df['target'].values

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=33)

X_train = torch.FloatTensor(X_train)
X_test = torch.FloatTensor(X_test)
y_train = torch.LongTensor(y_train)
y_test = torch.LongTensor(y_test)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

epoch = 100
# Track the loss at each epoch in a list
losses = []
for i in range(epoch):
    # Forward pass: run X_train through the fully connected layers and activations
    y_pred = model.forward(X_train)
    # Measure loss between predicted y and the actual y_train.
    # CrossEntropyLoss expects class indices, so no one-hot encoding is needed.
    loss = criterion(y_pred, y_train)
    # Append the loss so we can track it across epochs
    losses.append(loss.item())
    # Print progress every 10 epochs
    if i % 10 == 0:
        print(f'epoch{i} and loss is :{loss}')

    # Backpropagation
    optimizer.zero_grad()  # reset the gradients, since they accumulate on every backward pass
    loss.backward()  # compute gradients for the parameters
    optimizer.step  # updating the weights and biases

Output:

epoch0 and loss is :1.1507114171981812
epoch10 and loss is :1.1507114171981812
epoch20 and loss is :1.1507114171981812
epoch30 and loss is :1.1507114171981812
epoch40 and loss is :1.1507114171981812
epoch50 and loss is :1.1507114171981812
epoch60 and loss is :1.1507114171981812
epoch70 and loss is :1.1507114171981812
epoch80 and loss is :1.1507114171981812
epoch90 and loss is :1.1507114171981812
deep-learning pytorch neural-network
1 Answer

step is a method of optimizer, so you need to call it for the optimizer to actually update the parameters (using the gradients of the loss that were computed when you executed loss.backward()):

optimizer.step()  # note the () after optimizer.step
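Without the parentheses, optimizer.step merely evaluates to the bound method object and does nothing, so the weights never move from their initial values and the printed loss stays constant. As a minimal sketch, reusing the model, criterion, optimizer, and data already defined in the question, the corrected loop would look like this:

# Corrected training loop: only the last line differs from the question's code
for i in range(epoch):
    y_pred = model(X_train)            # forward pass
    loss = criterion(y_pred, y_train)  # compare predictions against the true labels
    losses.append(loss.item())
    if i % 10 == 0:
        print(f'epoch{i} and loss is :{loss}')
    optimizer.zero_grad()              # clear gradients accumulated by the previous iteration
    loss.backward()                    # compute fresh gradients
    optimizer.step()                   # apply the update (this line now actually runs)

With this change the loss should start decreasing within the first few epochs.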