Data too complex for the model to learn?


Currently I'm developing an AI for an RTS game (Warcraft III, to be exact). I'm using TFlearn to teach a model how to play the game on its own. I'm collecting data in this format:

[image array in grayscale] = [x-axis position of action, y-axis position of action, tokenized type of action]

So the actual data looks like this:

[[[36]
  [39]
  [38]
  ...
  [12]
  [48]
  [65]]

 [[30]
  [48]
  [ 0]
  ...
  [34]
  [49]
  [ 8]]

 [[28]
  [29]
  [23]
  ...
  [93]
  [38]
  [53]]

 ...

 [[ 0]
  [ 0]
  [ 0]
  ...
  [ 0]
  [ 0]
  [ 4]]

 [[ 0]
  [ 0]
  [ 0]
  ...
  [ 0]
  [ 0]
  [ 0]]

 [[ 0]
  [ 0]
  [ 0]
  ...
  [60]
  [19]
  [43]]] = [1, 1, 35]

This means that, given the situation shown in the grayscale image array, the mouse should be placed at position x=1, y=1 and the appropriate action performed (a mouse action or a key press, depending on the tokenized value).
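Here the "tokenized type of action" is just an integer id assigned to each distinct action name. A minimal sketch of such a mapping (the action names and ids below are illustrative, not the exact ~60 used here):

ACTIONS = ['move', 'left', 'right', 'a', 'h', 'bind1', 'add1']  # illustrative subset of the ~60 actions
ACTION_TO_ID = {name: i for i, name in enumerate(ACTIONS)}

def tokenize(action_name):
    # map an action name recorded by the input hooks to its integer id
    return ACTION_TO_ID[action_name]

# e.g. tokenize('left') -> 1, so a full label could look like [243, 118, 1]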

However, I'm having problems fitting the model properly. I have 30,000 frames, which seems to be too few for the model to learn correctly, because when I make predictions after fitting, it always gives the same output:

[1,1,0]

I'm almost certain this is due to the small number of frames to learn from. I'm trying to train a model with about 60 possible actions, and the on-screen changes are not very noticeable, so this dataset is genuinely hard. Still, I'd like to ask whether there is any other way to improve the model's learning in this situation. I have tried:

  1. Lowering the learning rate (to 1e-5)
  2. Different activations/optimizers/numbers of layers in the model configuration
  3. Changing the number of actions to learn (e.g. I removed some actions from the learning pool)
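Incidentally, when a classifier collapses to a single constant output, it is often worth checking whether one label dominates the recorded data, since the model can then minimize its loss by always predicting that label. A minimal check, assuming labels are stored as in the collector below:

import numpy as np
from collections import Counter

data = np.load('training_data.npy', allow_pickle=True)

# count how often each action occurs in the recorded labels
counts = Counter(str(output[1]) for _, output in data)
print(counts.most_common(10))  # if one action dominates, the model may simply learn to predict it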

Here is how I acquire the data:

import os
import cv2
import mss
import pyWinhook as pyHook
import pythoncom
import numpy as np


def get_screen():
    with mss.mss() as sct:
        screen = np.array(sct.grab((0, 0, 1366, 768)))
    screen = cv2.cvtColor(screen, cv2.COLOR_BGRA2GRAY)  # mss returns 4-channel BGRA
    screen = cv2.resize(screen, (136, 76))
    return screen

def get_data():
    # file names for training data arrays
    file_name = 'training_data.npy'
    copy_file_name = 'training_data_copy.npy'

    # load the previous data file if it exists, otherwise start fresh
    if os.path.isfile(file_name):
        print('File exists, loading previous data!')
        print(os.path.realpath(file_name))
        training_data = list(np.load(file_name, allow_pickle=True))
        np.save(copy_file_name, np.array(training_data, dtype=object))
    else:
        print('File does not exist, starting fresh!')
        training_data = []
    # append one (screen, output) pair; save to disk every 2500 samples
    def save_data(screen, output):
        training_data.append([screen, output])
        if len(training_data) % 2500 == 0:
            print(len(training_data))
            np.save(file_name, np.array(training_data, dtype=object))
        print("Frames taken: " + str(len(training_data)))
        print(training_data[-1])

    # getting inputs and screen on mouse event
    def OnMouseEvent(event):
        action = event.MessageName
        screen = get_screen()
        output = [event.Position, 0]
        if action == 'mouse move':
            output[1] = 'move'
        elif action == 'mouse left down':
            output[1] = 'left'
        elif action == 'mouse right down':
            output[1] = 'right'
        save_data(screen, output)
        return True

    # getting inputs and screen on keyboard event
    def OnKeyboardEvent(event):
        # pressing Delete saves the collected data and stops the collector
        if event.Key == 'Delete':
            np.save(file_name, np.array(training_data, dtype=object))
            print("Save and exit")
            exit()
        screen = get_screen()
        output = [(1, 1), event.Key]  # keyboard actions get a dummy (1, 1) position
        # Ctrl+digit binds a control group, Shift+digit adds to one
        ctrl_pressed = pyHook.GetKeyState(pyHook.HookConstants.VKeyToID('VK_CONTROL'))
        shift_pressed = pyHook.GetKeyState(pyHook.HookConstants.VKeyToID('VK_SHIFT'))
        try:
            # int() succeeds only for the digit keys '0'-'9'
            if ctrl_pressed and int(pyHook.HookConstants.IDToName(event.KeyID)) in range(10):
                output[1] = 'bind' + event.Key
        except ValueError:
            pass
        try:
            if shift_pressed and int(pyHook.HookConstants.IDToName(event.KeyID)) in range(10):
                output[1] = 'add' + event.Key
        except ValueError:
            pass
        save_data(screen, output)
        return True

    # create a hook manager
    hm = pyHook.HookManager()
    # watch for mouse button presses and key releases
    hm.MouseLeftDown = OnMouseEvent
    hm.MouseRightDown = OnMouseEvent
    #hm.MouseMove = OnMouseEvent
    hm.KeyUp = OnKeyboardEvent
    # set the hook
    hm.HookMouse()
    hm.HookKeyboard()
    # pump Windows messages forever; the hooks above invoke the callbacks
    try:
        pythoncom.PumpMessages()
    except KeyboardInterrupt:
        pass
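For completeness, the collector would be started with something like this (a trivial entry point; the file layout is up to you):

if __name__ == '__main__':
    get_data()  # press Delete in-game to save the data and exit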

And here is my model configuration:

import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
from tflearn.layers.normalization import local_response_normalization
import tensorflow as tf

def trainingmodel(width, height, lr):
    network = input_data(shape=[None, width, height, 1], name='input')
    network = conv_2d(network, 96, 11, strides=4, activation='relu')
    network = max_pool_2d(network, 3, strides=2)
    network = local_response_normalization(network)
    network = conv_2d(network, 256, 5, activation='relu')
    network = max_pool_2d(network, 3, strides=2)
    network = local_response_normalization(network)
    network = conv_2d(network, 384, 3, activation='relu')
    network = conv_2d(network, 256, 3, activation='relu')
    network = max_pool_2d(network, 3, strides=2)
    network = local_response_normalization(network)
    network = fully_connected(network, 4096, activation='tanh')
    network = dropout(network, 0.25)
    # note: a 3-unit softmax head with categorical cross-entropy expects one-hot
    # targets over 3 classes, not [x, y, action] triples as collected above
    network = fully_connected(network, 3, activation='softmax')
    # note: this SGD optimizer is created but never used; regression() below uses Adam
    sgd = tflearn.optimizers.SGD(learning_rate=0.01, lr_decay=0.96, decay_step=100)
    network = regression(network, optimizer='adam',
                         loss='categorical_crossentropy',
                         learning_rate=lr, name='targets')
    model = tflearn.DNN(network, checkpoint_path='model_training_model',
                        max_checkpoints=1, tensorboard_verbose=2, tensorboard_dir='log')

    return model
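For reference, fitting this model with the recorded data would look roughly like the sketch below. This is a minimal sketch with several assumptions: frames come from get_screen (76 rows x 136 columns), the labels have already been tokenized into numeric [x, y, action] triples as shown at the top of the question, and n_epoch is arbitrary. Note that the softmax head above expects one-hot targets, so feeding raw [x, y, action] triples is itself a likely cause of the degenerate [1, 1, 0] predictions:

import numpy as np

LR = 1e-5  # the lowered learning rate mentioned above

data = np.load('training_data.npy', allow_pickle=True)
X = np.stack([frame for frame, _ in data]).reshape(-1, 76, 136, 1)
# assumption: labels already tokenized into numeric [x, y, action] triples
Y = np.array([label for _, label in data], dtype=np.float32)

model = trainingmodel(76, 136, LR)  # width/height must match the frame shape above
model.fit({'input': X}, {'targets': Y}, n_epoch=10,
          validation_set=0.1, show_metric=True, run_id='wc3-agent')
model.save('wc3_model.tfl')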
1 Answer

I'm not sure that what you're trying to do is achievable with images alone. I've seen this work for Super Mario Bros., but WC3 is a far more complex scenario. You would need an image of every action of every character for the model to draw the right conclusions.

Your model will most likely fail on situations it has never been shown (in a similar form). You could try using template matching to extract the characters' movement, and train the model on the (x, y) positions instead of on the raw images with a CNN, roughly as sketched below.
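A minimal sketch of that idea using OpenCV's template matching; the frame and template file names here are illustrative, and in practice you would need one cropped template per unit or building you want to track:

import cv2

# a captured frame and a cropped template of the unit to find (illustrative names)
screen = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('footman.png', cv2.IMREAD_GRAYSCALE)

# slide the template over the frame and score every position
result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # the match threshold is a tunable assumption
    x, y = max_loc  # top-left corner of the best match
    print('unit found at', (x, y))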
