I am trying to follow the DeepMind paper on Q-learning for the game Breakout, and so far the performance is not improving at all, i.e. it is not learning anything. Instead of experience replay, I am just running the game, saving some data, training, and then running the game again. I have added comments to explain my implementation; any help is much appreciated. Also, I may be missing some key points, so please take a look.
I send 4 frames as input, together with a one-hot matrix of the key pressed multiplied by the reward. I am also trying BreakoutDeterministic-v0, as mentioned in the paper.

import gym
import tflearn
import numpy as np
import cv2
from collections import deque
from tflearn.layers.estimator import regression
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d


game = "BreakoutDeterministic-v4"
env = gym.make(game)
env.reset()


LR = 1e-3
num_games = 10     # arbitrary number, not final
num_frames = 500
possible_actions = env.action_space.n
accepted_score = 2
MODEL_NAME = 'data/Model_{}'
gamma = 0.9
epsilon = 0.7
generations = 30    # arbitrary number, not final
height = 84
width = 84

# instead of using experience replay, i'm simply calling this function in generations to generate training data
def play4data(gen):
    training_data = []
    for i in range(num_games):

        score = 0
        data = []
        prev_observation = []
        env.reset()
        done = False
        d = deque()

        while not done:

            # env.render()

            # if it's the 0th generation, the model hasn't been trained yet, so we can't call the predict function
            # or if we want to take a random action based on some fixed epsilon value
            # or if it's a later gen but we don't yet have 4 frames to send to the model
            if gen == 0 or len(prev_observation)==0 or np.random.rand() <= epsilon or len(d) < 4:
                theta = np.random.randn(possible_actions)
            else:
                theta = model.predict(np.array(d).reshape(-1, 4, height, width))[0]

            # action is a single value, namely max from an output like [0.00147357 0.00367402 0.00365852 0.00317618]
            action = np.argmax(theta)
            # action = env.action_space.sample()

            # take an action and record the results
            observation, reward, done, info = env.step(action)


            # since observation is 210 x 160 pixel image, resizing to 84 x 84
            observation = cv2.resize(observation, (height, width))

            # converting image to grayscale
            observation = cv2.cvtColor(observation, cv2.COLOR_RGB2GRAY)

            # d is a queue of 4 frames that i pass as an input to the model
            d.append(observation)
            if len(d) > 4:
                d.popleft()

            # for gen 0 , since model hasn't been trained yet, Q_sa is set to zeros or random
            # or i dont yet have 4 frames to call predict
            if gen == 0 or len(d) < 4:
                Q_sa = np.zeros(possible_actions)
            else:
                Q_sa = model.predict(np.array(d).reshape(-1, 4, height, width))[0]

            # this one is just total score after each game
            score += reward

            if not done:
                Q = reward + gamma*np.amax(Q_sa)
            else:
                Q = reward

            # instead of mask, i just used list comparison to multiply with Q values
            # theta is one-hot after this, like  [0.         0.         0.         0.00293484]
            theta = (theta == np.amax(theta)) * 1 * Q


            # only appending those actions, for which some reward was generated
            # otherwise data-set becomes mostly zeros and model is 99 % accurate by just predicting zeros
            if len(prev_observation) > 0 and len(d) == 4 and np.sum(theta) > 0:
                data.append([d, theta])

            prev_observation = observation

            if done:
                break

        print('gen {1} game {0}: '.format(i, gen) + str(score))

        # only taking those games for which the total score at the end of the game was above the acceptable score
        if score >= accepted_score:
            for d in data:
                training_data.append(d)

    env.reset()
    return training_data


# exact model described in DeepMind paper, just added a layer to end for 18 to 4
def simple_model(width, height, num_frames, lr, output=9, model_name='intelAI.model'):
    network = input_data(shape=[None, num_frames, width, height], name='input')
    conv1 = conv_2d(network, 8, 32,strides=4, activation='relu', name='conv1')
    conv2 = conv_2d(conv1, 4, 64, strides=2, activation='relu', name='conv2')
    conv3 = conv_2d(conv2, 3, 64, strides=1, activation='relu', name='conv3')
    fc4 = fully_connected(conv3, 512, activation='relu')
    fc5 = fully_connected(fc4, 18, activation='relu')
    fc6 = fully_connected(fc5, output, activation='relu')

    network = regression(fc6, optimizer='adam',
                         loss='mean_square',
                         learning_rate=lr, name='targets')

    model = tflearn.DNN(network,
                        max_checkpoints=0, tensorboard_verbose=0, tensorboard_dir='log')
    return model


# defining/ declaring the model
model = simple_model(width, height, 4, LR, possible_actions)

# this function is responsible for training the model
def train2play(training_data):

    X = np.array([i[0] for i in training_data]).reshape(-1, 4, height, width)
    Y = [i[1] for i in training_data]


    # X is the queue of 4 frames
    model.fit({'input': X}, {'targets': Y}, n_epoch=5, snapshot_step=500, show_metric=True, run_id='openai_learning')

# repeating the whole process in terms of generations
# training again and again after playing for set number of games
for gen in range(generations):

    training_data =  play4data(gen)
    np.random.shuffle(training_data)
    train2play(training_data)

    model.save(MODEL_NAME.format(game))

Best answer

I haven't checked every line of code in detail, so I may have missed something, but here are a few things worth looking into:
How many frames have you actually trained on (i.e., how many training calls)? I don't know how long DeepMind's DQN needs for this particular game, but many Atari games really do need millions of steps before you see a significant improvement in performance. It is very hard to tell from only a small amount of training whether it is working as intended.
Unless I missed it, it doesn't look like you decay epsilon over time. A starting value of 0.7 is fine (or I think it is even more common to start with a higher value), but it really should be lowered over time, ending at a much lower value like 0.1. If you keep it that high, it will start to limit how much you can learn.
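For illustration, here is a minimal sketch of what an epsilon-decay schedule could look like in the asker's loop. The names and values (epsilon_start, epsilon_min, decay_steps) are assumptions for the example, not taken from the original code:

# minimal linear epsilon-decay sketch (illustrative values, not from the original code)
epsilon_start = 1.0       # explore almost always at the beginning
epsilon_min = 0.1         # never drop below this amount of exploration
decay_steps = 1_000_000   # anneal linearly over this many environment steps

def epsilon_at(step):
    # linearly anneal from epsilon_start down to epsilon_min
    fraction = min(step / decay_steps, 1.0)
    return epsilon_start + fraction * (epsilon_min - epsilon_start)

# inside the while-loop, the fixed epsilon would then be replaced by the scheduled one,
# e.g.:  if np.random.rand() <= epsilon_at(total_steps): take a random action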
You mention that you deliberately do not use experience replay, but experience replay is described in the DQN paper as an important ingredient for stable learning. One hypothesis for why it matters is that it removes/reduces correlations between samples of experience, which is crucial for training a neural network (if all the samples you feed your network look alike, because they were all recently generated by the same policy, it will not get sufficiently varied training data).
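As a rough sketch of the idea (a generic uniform replay buffer, not the paper's exact implementation), transitions are stored as the game is played and training then samples random minibatches from the buffer, which breaks up the correlation between consecutive frames; the capacity and batch size below are illustrative assumptions:

import random
from collections import deque

# generic experience-replay sketch; capacity and batch size are illustrative values
replay_buffer = deque(maxlen=100_000)

def store_transition(state, action, reward, next_state, done):
    # called once per environment step, instead of building a data set per generation
    replay_buffer.append((state, action, reward, next_state, done))

def sample_minibatch(batch_size=32):
    # uniformly sampled transitions come from many different episodes and policies,
    # so consecutive training samples are far less correlated
    if len(replay_buffer) < batch_size:
        return None
    batch = random.sample(replay_buffer, batch_size)
    states, actions, rewards, next_states, dones = zip(*batch)
    return states, actions, rewards, next_states, dones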
I don't see you using a target network (a separate copy of the network used to compute the learning targets, which is only updated once in a while by copying the parameters of the learning network into it). Like experience replay, this is described in the DQN paper as an important component for stabilizing the learning process. I don't think you can reasonably expect a stable learning process without it.
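And a structural sketch of the target-network pattern. It is deliberately not tied to tflearn: the TinyQNet class below is only a stand-in for the real network, and all numbers are illustrative assumptions; the point is just that the bootstrap target comes from a frozen copy that is synced occasionally:

import numpy as np

gamma = 0.99
TARGET_UPDATE_EVERY = 10_000   # illustrative: sync the two copies every N training steps

class TinyQNet:
    # stand-in for the real network: a single linear layer over flattened frames
    def __init__(self, n_inputs, n_actions):
        self.W = np.random.randn(n_inputs, n_actions) * 0.01
    def predict(self, x):
        return x @ self.W
    def copy_from(self, other):
        # the target network is never trained directly, only refreshed from the online one
        self.W = other.W.copy()

online_q = TinyQNet(84 * 84 * 4, 4)
target_q = TinyQNet(84 * 84 * 4, 4)

def q_target(reward, next_state, done):
    # the bootstrapped target uses the frozen target network, not the online network
    if done:
        return reward
    return reward + gamma * np.max(target_q.predict(next_state))

# training-loop outline:
# for step in range(total_steps):
#     ... gradient update on online_q towards q_target(...) ...
#     if step % TARGET_UPDATE_EVERY == 0:
#         target_q.copy_from(online_q)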

For "python - Trouble implementing DeepMind's model for Breakout", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/49409790/
