Training/test accuracy of a graph neural network model and a runtime error: PyTorch

Asked by kfgdxczn on 2021-08-20

I am getting some very strange accuracy values with my graph neural network. My model is:

import csv
import torch
import torch.nn.functional as F
from torch_geometric.nn import ChebConv

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = ChebConv(132, 32, 20)  # input layer: 132 -> 32 features, K=20
        self.conv2 = ChebConv(32, 12, 20)   # output layer: 32 -> 12 features, K=20

    def forward(self, data):  # define the forward pass
        x, edge_index = data.x, data.edge_index
        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = self.conv2(x, edge_index)
        return x

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net().to(device)
print(model)
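
A minimal sketch for sanity-checking the output shape on a toy graph with the same dimensions as one of the samples described below (the toy Data object and its random values are made up for illustration, not the real data):

from torch_geometric.data import Data

x = torch.randn(10, 132)                    # 10 nodes, 132 features each
edge_index = torch.randint(0, 10, (2, 18))  # 18 random edges between the 10 nodes
toy = Data(x=x, edge_index=edge_index)

out = model(toy.to(device))
print(out.shape)  # expected: torch.Size([10, 12])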

The training and test functions are:

optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=5e-4)

def train():
    model.train()
    cum_loss = 0
    for data in dataset:  # iterate over the training dataset
        data = data.to(device)
        out = model(data)               # perform a single forward pass
        loss = F.mse_loss(out, data.y)  # compute the regression loss against data.y
        cum_loss += loss
        loss.backward()        # derive gradients
        optimizer.step()       # update parameters based on gradients
        optimizer.zero_grad()  # clear gradients
    # 'epoch' and 'csvwriter' are the module-level names bound in the loop below
    print(f'Epoch: {epoch:03d}, loss: {cum_loss/len(dataset):.4f}')
    csvwriter.writerow([epoch, (cum_loss/len(dataset)).cpu().detach().numpy()])
filename = "output1.1.csv"
filename_label = "output_label1.csv"
fields = ['a', 'b', ...]

# test function

def test(loader):
    model.eval()
    with open(filename, 'w') as csvfile, open(filename_label, 'w') as csvoutput:
        csvwriter = csv.writer(csvfile)
        csvoutwriter = csv.writer(csvoutput)
        csvwriter.writerow(fields)
        csvoutwriter.writerow(fields)
        correct = 0
        for data in loader:
            data = data.to(device)
            out = model(data)
            correct += int((out.argmax(-1) == data.y.view(-1, 1)).sum())
            for i in range(10):  # 10 = number of nodes per graph
                csvwriter.writerow(out.cpu().detach().numpy()[i, :])
                csvoutwriter.writerow(data.y.cpu().detach().numpy()[i, :])
    return correct / len(loader)  # ratio of "correct" predictions
model.train()
filename_train = "trainloss1.1.csv"
with open(filename_train, 'w') as csvfile:
    csvwriter = csv.writer(csvfile)
    csvwriter.writerow(['Epoch', 'Loss'])

    for epoch in range(300): 
        train()
        test_acc = test(dataset) 
        print(f'Test Acc: {test_acc:.4f}')

The accuracy I get is very poor:

Epoch: 000, loss: 4.2021
Test Acc: 9.9178

Earlier I tried computing correct in the test function as

correct += int((out.argmax(-1)==data.y).sum())

which resulted in

RuntimeError: The size of tensor a (10) must match the size of tensor b (12) at non-singleton dimension 1

After hitting this runtime error, I tried

correct += int((out.argmax(-1)==data.y.view(-1,1)).sum())

but, as I said, the accuracy is very poor. How can I fix this? Am I doing something wrong? One sample of the data in my loader has size

Data(edge_attr=[18], edge_index=[2, 18], x=[10, 132], y=[10, 12])
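
Assuming those shapes, here is a small sketch with dummy tensors (random values, only to illustrate the broadcasting) of what the two comparisons above actually compute:

import torch

out = torch.randn(10, 12)  # model output for one graph: [10 nodes, 12 values]
y = torch.randn(10, 12)    # data.y for one graph: [10, 12]

pred = out.argmax(-1)                  # shape [10]
# (pred == y)                          # RuntimeError: size 10 vs 12 at dimension 1
print((pred == y.view(-1, 1)).shape)   # torch.Size([120, 10]) after broadcasting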
