We make small changes to the sample source code of ゼロから作る Deep Learning (Deep Learning from Scratch) and run a 39-layer network.
Hereafter, ゼロから作る Deep Learning is referred to as "the book".
The environment is a Mac mini (2.8 GHz CPU / 16 GB RAM / SSD / Intel Iris GPU, 1536 MB).
All processing runs on the CPU.
Configure System Preferences beforehand so that the machine does not sleep while the display is off.
Decide what the program will do
It supports color images.
It classifies images into the following two categories:
- dog, cat
Design the program
vgg16
Referring to vgg16 above, we build the neural network with 39 layers.
The MathWorks website describes it as a 41-layer network. The book does not count the input and output as layers, so we match the book and call it 39 layers.
To reduce memory usage, we change the implementation of the part that computes the accuracy of the training results.
Prepare
The preparation is the same as in "Running a 7-layer network 1".
The directory layout is as follows.
ch07, ch08, and common contain the book's sample source code; we refer to them or use them as-is.
We add vgg16, using ch08 as a reference.
- develop
  - deep_learning
    - ch07 (sample source code)
    - ch08 (sample source code)
    - common (sample source code)
    - simple (already added)
    - deep (already added)
    - vgg16 (added here)
Implement the program
Get the source code from the GitHub repository for ゼロから作る Deep Learning.
Rewrite the sample source code ch08/train_deepnet.py
This step is the same as in "Running a 21-layer network 1".
Create a vgg16 directory and copy train.py into it from the deep directory.
Replace the network, and replace the Trainer as well.
The full source code is shown below.
vgg16/train.py
# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # allow importing files from the parent directory
import numpy as np
import matplotlib.pyplot as plt
# from dataset.mnist import load_mnist
from vgg16 import Vgg16
from common.trainer_small_memory import Trainer
import time
import glob
# ------------------------------------------------------------------------------
## dog, cat, 224x224
# ------------------------------------------------------------------------------
argv = sys.argv
if 1 < len(argv):
    print( 'argv[1] :', argv[1], flush=True )
# ------------------------------------------------------------------------------
## for Network
# IMAGE_SIZE = 56
# IMAGE_SIZE = 112
# IMAGE_SIZE = 168
IMAGE_SIZE = 224
## for Trainer
CATEGORIES_LIST = [ 'dog', 'cat' ]
# EPOCHS = 1
# EPOCHS = 2
# EPOCHS = 10
EPOCHS = 20
# EPOCHS = 100
# EPOCHS = 200
# EPOCHS = 500
# EPOCHS = 1000
TRAINING_IMAGE_PATH = '../../../image/train/'
TESTING_IMAGE_PATH = '../../../image/test/'
## dog(12500), cat(12500), tomato(12500), ...
## how many pictures from each category
## dog: N from 12500 && cat N from 12500 && tomato N from 12500
# TRAINING_EACH_DATA_NUM = 10
TRAINING_EACH_DATA_NUM = 100
## for each category
## currently unused
LOAD_IMAGE_NUM_AT_ONE_TIME = TRAINING_EACH_DATA_NUM
## how many pictures from each category
TESTING_EACH_DATA_NUM = 10
# TRAINING_MINI_BATCH_SIZE = 1
TRAINING_MINI_BATCH_SIZE = 2
# TRAINING_MINI_BATCH_SIZE = 4
# TRAINING_MINI_BATCH_SIZE = 10
# TRAINING_MINI_BATCH_SIZE = 20
# TRAINING_MINI_BATCH_SIZE = 50
# TRAINING_MINI_BATCH_SIZE = 100
# TRAINING_MINI_BATCH_SIZE = 1000
## how many pictures to use to get accuracy
ACCURACY_TRAIN_DATA_NUM = 20
ACCURACY_TEST_DATA_NUM = 20
# OPTIMIZER = 'SGD'
# OPTIMIZER = 'Momentum'
# OPTIMIZER = 'Nesterov'
OPTIMIZER = 'AdaGrad'
# OPTIMIZER = 'RMSprop'
# OPTIMIZER = 'Adam'
# LEARNING_RATE = 0.001
# LEARNING_RATE = 0.0058626586691910305
# LEARNING_RATE = 10 ** np.random.uniform(-6, -2)
# LEARNING_RATE = 10 ** np.random.uniform(-5, -1)
LEARNING_RATE = 10 ** np.random.uniform(-4, -1)
# ------------------------------------------------------------------------------
print( 'learning rate :', LEARNING_RATE, flush=True )
print( 'categories list:', CATEGORIES_LIST, flush=True )
print( 'image size :', IMAGE_SIZE, flush=True )
print( 'epochs :', EPOCHS, flush=True )
print( 'training image path :', TRAINING_IMAGE_PATH, flush=True )
print( 'testing image path :', TESTING_IMAGE_PATH, flush=True )
print( 'training each data numbers :', TRAINING_EACH_DATA_NUM, flush=True )
print( 'testing each data numbers :', TESTING_EACH_DATA_NUM, flush=True )
print( 'mini batch size :', TRAINING_MINI_BATCH_SIZE, flush=True )
print( 'accuracy train data numbers :', ACCURACY_TRAIN_DATA_NUM, flush=True )
print( 'accuracy test data numbers :', ACCURACY_TEST_DATA_NUM, flush=True )
# ------------------------------------------------------------------------------
network = Vgg16( input_dim=(3, IMAGE_SIZE, IMAGE_SIZE), output_size=len(CATEGORIES_LIST) )
# ------------------------------------------------------------------------------
PARAMETER_LOAD_MODE = 0
if 1 < len(argv):
    if argv[1] == 'LOAD_PARAMETER':
        PARAMETER_LOAD_MODE = 1
    else:
        print( 'ERROR : argv[1] :', argv[1], flush=True )
        print( 'Only LOAD_PARAMETER is valid.', flush=True )
        quit()
# ------------------------------------------------------------------------------
## if pickle file exists, load it.
pickle_list = glob.glob('network_parameters.pkl.*')
print( 'pickle_list :', pickle_list, flush=True )
new_pickle_name = 'network_parameters.pkl.' + '1'
pickle_file_latest_number = 1
if PARAMETER_LOAD_MODE == 1:
    if pickle_list:
        ## sort numerically so that e.g. .pkl.10 comes after .pkl.9
        pickle_list.sort(key=lambda name: int(name.split('.')[-1]))
        print( 'sorted pickle_list :', pickle_list, flush=True )
        pickle_file_latest_name = pickle_list[-1]
        print( 'pickle_file_latest_name :', pickle_file_latest_name, flush=True )
        tmp_array = pickle_file_latest_name.split('.')
        pickle_file_latest_number = int( tmp_array[-1] )
        pickle_file_latest_number = pickle_file_latest_number + 1
        new_pickle_name = 'network_parameters.pkl.' + str(pickle_file_latest_number)
        network.load_params(file_name=pickle_file_latest_name)
print( 'new_pickle_name :', new_pickle_name, flush=True )
# ------------------------------------------------------------------------------
trainer = Trainer(network,
                  training_image_path = TRAINING_IMAGE_PATH,
                  testing_image_path = TESTING_IMAGE_PATH,
                  load_image_num_at_one_time = LOAD_IMAGE_NUM_AT_ONE_TIME,
                  training_each_data_num = TRAINING_EACH_DATA_NUM,
                  testing_each_data_num = TESTING_EACH_DATA_NUM,
                  categories_list = CATEGORIES_LIST, image_size = IMAGE_SIZE,
                  epochs = EPOCHS,
                  training_mini_batch_size = TRAINING_MINI_BATCH_SIZE,
                  accuracy_train_data_num = ACCURACY_TRAIN_DATA_NUM,
                  accuracy_test_data_num = ACCURACY_TEST_DATA_NUM,
                  optimizer = OPTIMIZER, optimizer_param = {'lr': LEARNING_RATE},
                  )
start_time = time.time()
trainer.train()
print('train_time :', time.time() - start_time )
# ------------------------------------------------------------------------------
## save parameter
network.save_params( new_pickle_name )
print( 'Saved Network Parameters! :', new_pickle_name )
## draw graph after
accuracy_list_file_name = 'accuracy_list.' + str(pickle_file_latest_number)
accuracy_list_file = open(accuracy_list_file_name, 'w')
print('y (train accuracy) :', trainer.train_acc_list, file=accuracy_list_file)
print('y (test accuracy) :', trainer.test_acc_list, file=accuracy_list_file)
accuracy_list_file.close()
# ------------------------------------------------------------------------------
## draw graph
x = np.arange(EPOCHS)
print('x (epochs) :', x)
print('y (train accuracy) :', trainer.train_acc_list)
print('y (test accuracy) :', trainer.test_acc_list)
plt.plot(x, trainer.train_acc_list, marker='o', label='train', markevery=1)
plt.plot(x, trainer.test_acc_list, marker='s', label='test', markevery=1)
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.ylim(0, 100.0)
plt.legend(loc='lower right')
# plt.show()
plt.savefig('learning_graph.png')
# plt.pause(30)
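The script takes one optional command-line argument. A usage sketch (run from the vgg16 directory; these invocations simply follow the argv handling above):

# python train.py                  -> train with freshly initialized parameters
# python train.py LOAD_PARAMETER   -> load the newest network_parameters.pkl.N,
#                                     train, and save the result as .N+1
# Any other argument prints an error and the script exits.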
When obtaining the accuracy of the training results, processing repeats one mini-batch at a time until the following sample counts are reached.
ACCURACY_TRAIN_DATA_NUM = 20
ACCURACY_TEST_DATA_NUM = 20
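As a worked example with the values above (a sketch, not part of train.py): with a mini-batch size of 2, one accuracy measurement consists of 20 / 2 = 10 passes.

# Worked example: how many mini-batch passes one accuracy measurement takes.
ACCURACY_TRAIN_DATA_NUM = 20
TRAINING_MINI_BATCH_SIZE = 2
# the Trainer computes the pass count the same way (see trainer_small_memory.py below)
iter_per_accuracy_train = int(ACCURACY_TRAIN_DATA_NUM / TRAINING_MINI_BATCH_SIZE)
print(iter_per_accuracy_train)  # -> 10 passes of 2 images each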
Rewrite the sample source code common/trainer.py
Copy common/trainer.py to trainer_small_memory.py, then modify the copy.
In the implementations used for the 7-layer and 21-layer networks, the mini-batch size was applied only during training, and a fixed number of samples was used when computing accuracy.
We have confirmed that a smaller training mini-batch size does reduce peak memory usage, but to keep the 39-layer network under 16 GB at all times, we now also iterate mini-batch by mini-batch when computing accuracy.
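The idea can be summarized with the following sketch (simplified and hypothetical, not the actual Trainer code): accuracy is accumulated over several mini-batch-sized calls instead of one large call.

import numpy as np

def accuracy_in_small_batches(network, x, t, total_num, mini_batch_size):
    # Score total_num samples in chunks of mini_batch_size to cap peak memory.
    # network.accuracy(x, t, n) stands for the book-style method that scores n
    # samples and, in this article's version, returns the number of correct answers.
    scores = []
    for _ in range(int(total_num / mini_batch_size)):
        # each call only holds the activations of mini_batch_size images
        acc = network.accuracy(x, t, mini_batch_size) * int(100 / mini_batch_size)
        scores.append(acc)
    return np.mean(scores)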
common/trainer_small_memory.py
# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # allow importing files from the parent directory
import numpy as np
from common.optimizer import *
import cv2 as open_cv
import time
import matplotlib.pyplot as plt
from memory_profiler import profile  # for the commented-out @profile decorators below
class Trainer:
    """Class that trains the neural network.
    """
    # @profile
    def __init__(self, network,
                 training_image_path = '',
                 testing_image_path = '',
                 load_image_num_at_one_time = 0,
                 training_each_data_num = 0,
                 testing_each_data_num = 0,
                 categories_list = [], image_size = 56,
                 epochs = 20,
                 training_mini_batch_size = 100,
                 accuracy_train_data_num = 10,
                 accuracy_test_data_num = 10,
                 optimizer='SGD', optimizer_param={'lr':0.01},
                 evaluate_sample_num_per_epoch=None, verbose=True):
        self.network = network
        self.training_image_path = training_image_path
        self.testing_image_path = testing_image_path
        self.load_image_num_at_one_time = load_image_num_at_one_time
        self.training_each_data_num = training_each_data_num
        self.testing_each_data_num = testing_each_data_num
        self.categories_list = categories_list
        self.image_size = image_size
        self.verbose = verbose
        self.epochs = epochs
        self.training_mini_batch_size = training_mini_batch_size
        self.accuracy_train_data_num = accuracy_train_data_num
        self.accuracy_test_data_num = accuracy_test_data_num
        self.evaluate_sample_num_per_epoch = evaluate_sample_num_per_epoch
        # optimizer
        optimizer_class_dict = {'sgd':SGD, 'momentum':Momentum, 'nesterov':Nesterov,
                                'adagrad':AdaGrad, 'rmsprop':RMSprop, 'adam':Adam}
        self.optimizer = optimizer_class_dict[optimizer.lower()](**optimizer_param)
        ## the number of images for training
        # self.train_size = x_train.shape[0]
        ## ex. iter_per_epoch(9) = self.train_size(900) / mini_batch_size(100)
        # self.iter_per_epoch = max(self.train_size / mini_batch_size, 1)
        self.iter_per_epoch = int( max( training_each_data_num * len(categories_list) / training_mini_batch_size, 1) )
        print( 'one epoch : mini batch size x', self.iter_per_epoch )
        self.iter_per_accuracy_train = int( accuracy_train_data_num / training_mini_batch_size )
        self.iter_per_accuracy_test = int( accuracy_test_data_num / training_mini_batch_size )
        ## ex. self.max_iter(9000) = epochs(1000) * self.iter_per_epoch(9)
        # self.max_iter = int(epochs * self.iter_per_epoch)
        self.current_iter = 0
        self.current_epoch = 0
        self.train_loss_list = []
        self.train_acc_list = []
        self.train_acc_list_tmp = []
        self.test_acc_list = []
        self.test_acc_list_tmp = []
    def get_image_and_label( self, path, categories_list, x_array, t_array, load_image_numbers ):
        for dir in os.listdir( path ):
            if dir == ".DS_Store":
                continue
            file_numbers = 0
            dir_with_path = path + dir
            # print( 'dir : ' + dir_with_path )
            ## dir : 'tachikoma' or 'dog' or 'cat' or ...
            # label = 0
            if dir in categories_list:
                label = categories_list.index( dir )
            else:
                continue
            for file in os.listdir(dir_with_path):
                # ------------------------------------------------------------------------------
                if file == ".DS_Store":
                    continue
                # ------------------------------------------------------------------------------
                if file.startswith('.'):
                    print('file name : ', file)
                    print('read skip')
                    continue
                # ------------------------------------------------------------------------------
                file_with_path = dir_with_path + "/" + file
                image = open_cv.imread(file_with_path)
                if image is None:
                    print( 'image read failed.', flush=True )
                    print( 'file name : ', file_with_path, flush=True )
                else:
                    ## debug
                    # print('file name : ', file_with_path)
                    image = open_cv.cvtColor(image, open_cv.COLOR_BGR2RGB)
                    image = open_cv.resize(image, (self.image_size, self.image_size))
                    ## debug
                    # print( 'category : ', dir, flush=True )
                    # print( 'label : ', label, flush=True )
                    ## set
                    x_array.append(image)
                    t_array.append(label)
                    file_numbers += 1
                    if file_numbers == load_image_numbers:
                        break
                ## debug : break >> load one image only
                # break
            print( '%s : the number of files : ' % dir , file_numbers, flush=True )
        ## debug
        ## 1920 x 2160
        fig = plt.figure(figsize=(19.2, 21.6))
        fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
        for image_i in range( len(x_array) ):
            if len(x_array) < 10:
                break
            ax = fig.add_subplot(int(len(x_array)/10), 20, image_i + 1)
            ax.imshow(x_array[image_i], interpolation='nearest')
        # plt.show(block=False)
        # plt.pause(.3)
        load_image_savefig_name = 'load_image_savefig' + '.png'
        plt.savefig(load_image_savefig_name)
        ## clear the current axis
        # plt.cla()
        ## clear the current figure
        # plt.clf()
        ## clear a figure window
        plt.close()
    # @profile
    def train_step( self, x_batch, t_batch, x_test, t_test ):
        # ------------------------------------------------------------------------------
        ## gradient update loss
        grads = self.network.gradient( x_batch, t_batch )
        self.optimizer.update( self.network.params, grads )
        loss = self.network.loss(x_batch, t_batch)
        self.train_loss_list.append(loss)
        if self.verbose: print("train loss:" + str(loss), flush=True)
        # ------------------------------------------------------------------------------
        self.current_iter += 1
        print('current iter :', self.current_iter, flush=True)
    # @profile
    def train( self ):
        # ------------------------------------------------------------------------------
        ## get images for testing
        x_test, t_test = [], []
        self.get_image_and_label( self.testing_image_path, self.categories_list, x_test, t_test, self.testing_each_data_num )
        x_test = np.array(x_test)
        t_test = np.array(t_test)
        ## numbers, height, width, channel -> numbers, channel, height, width
        x_test = x_test.transpose(0, 3, 1, 2)
        # ------------------------------------------------------------------------------
        ## self.training_each_data_num : 12500
        ## self.load_image_num_at_one_time : 12500 or 5000 or 1000
        ## get images for training (self.load_image_num_at_one_time * self.categories_list)
        x_train, t_train = [], []
        self.get_image_and_label( self.training_image_path, self.categories_list, x_train, t_train, self.load_image_num_at_one_time )
        x_train = np.array(x_train)
        t_train = np.array(t_train)
        print( 'load image total :', x_train.shape, flush=True )
        ## numbers, height, width, channel -> numbers, channel, height, width
        x_train = x_train.transpose(0, 3, 1, 2)
        # ------------------------------------------------------------------------------
        ## training
        ## get accuracy (per epoch)
        for epoch_i in range( self.epochs ):
            print( 'epoch :', epoch_i + 1, flush=True )
            for mini_batch_i in range( self.iter_per_epoch ):
                start_time = time.time()
                # print( 'mini batch :', mini_batch_i + 1, flush=True )
                batch_mask = np.random.choice(x_train.shape[0], self.training_mini_batch_size)
                # print( 'batch mask :', batch_mask, flush=True )
                x_batch = x_train[batch_mask]
                t_batch = t_train[batch_mask]
                self.train_step( x_batch, t_batch, x_test, t_test )
                print( 'train_step_time :', time.time() - start_time, flush=True )
                # one epoch complete >> get accuracy
                if self.current_iter % self.iter_per_epoch == 0:
                    self.current_epoch += 1
                    print('start getting train accuracy')
                    for mini_batch_i in range( self.iter_per_accuracy_train ):
                        train_acc = self.network.accuracy(x_train, t_train, self.training_mini_batch_size) * int(100 / self.training_mini_batch_size)
                        self.train_acc_list_tmp.append(train_acc)
                    print('start getting test accuracy')
                    for mini_batch_i in range( self.iter_per_accuracy_test ):
                        test_acc = self.network.accuracy(x_test, t_test, self.training_mini_batch_size) * int(100 / self.training_mini_batch_size)
                        self.test_acc_list_tmp.append(test_acc)
                    self.train_acc_list.append( np.mean(self.train_acc_list_tmp) )
                    self.test_acc_list.append( np.mean(self.test_acc_list_tmp) )
                    print('train accuracy :', np.mean(self.train_acc_list_tmp), flush=True)
                    print('test accuracy :', np.mean(self.test_acc_list_tmp), flush=True)
                    self.train_acc_list_tmp = []
                    self.test_acc_list_tmp = []
        # ------------------------------------------------------------------------------
        ## get accuracy
        # test_acc = self.network.accuracy(self.x_test, self.t_test)
        # last_test_num = int( self.testing_each_data_num * len(self.categories_list) )
        # test_acc = self.network.accuracy(x_test, t_test, last_test_num) * int(100 / last_test_num)
        # if self.verbose:
        #     print("=============== Final Test Accuracy ===============")
        #     print("test acc:" + str(test_acc))
Rewrite the sample source code ch08/deep_convnet.py
Copy deep/deep_convnet.py into the vgg16 directory and modify it.
We reuse the parts we have already modified.
We make it 39 layers.
It is probably the same as VGG16. Hopefully it is.
To reduce memory usage, we change the accuracy function so that it iterates with the configured mini-batch size. With a mini-batch size of 2, the program keeps running with memory usage roughly between 6.5 GB and 7.5 GB.
The full source code of vgg16.py is shown below.
vgg16/vgg16.py
# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # allow importing files from the parent directory
import pickle
import numpy as np
from collections import OrderedDict
from common.layers import *
# ------------------------------------------------------------------------------
class Vgg16:
    ## VGG16
    """
    conv 64 - relu - conv 64 - relu - pool -
    conv 128 - relu - conv 128 - relu - pool -
    conv 256 - relu - conv 256 - relu - conv 256 - relu - pool -
    conv 512 - relu - conv 512 - relu - conv 512 - relu - pool -
    conv 512 - relu - conv 512 - relu - conv 512 - relu - pool -
    affine 4096 - relu - dropout 50% - affine 4096 - relu - dropout 50% - affine 1000 - softmax(+cross entropy error)
    """
    # ------------------------------------------------------------------------------
    ## con_p : convolution parameter
    def __init__(self, input_dim=(3, 224, 224),
                 con_p_1 = {'filter_num': 64, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_2 = {'filter_num': 64, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_3 = {'filter_num':128, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_4 = {'filter_num':128, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_5 = {'filter_num':256, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_6 = {'filter_num':256, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_7 = {'filter_num':256, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_8 = {'filter_num':512, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_9 = {'filter_num':512, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_10 = {'filter_num':512, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_11 = {'filter_num':512, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_12 = {'filter_num':512, 'filter_size':3, 'pad':1, 'stride':1},
                 con_p_13 = {'filter_num':512, 'filter_size':3, 'pad':1, 'stride':1},
                 hidden_size=4096, output_size=1000):
        ## pooling layers : 5
        ## 5 layers : 224 112 56 28 14 7
        ## 3 layers : 56 28 14 7
        size = input_dim[1] >> 5
        # pre_node_nums = np.array([1*3*3, 16*3*3, 16*3*3, 32*3*3, 32*3*3, 64*3*3, 64*4*4, hidden_size])
        #                          Conv1  Conv2   Conv3   Conv4   Conv5   Conv6   Affine1 Affine2
        # pre_node_nums = np.array([3*3*3, 16*3*3, 16*3*3, 32*3*3, 32*3*3, 64*3*3, 64*7*7, hidden_size])
        ## fan-in of each weight W1..W16 (13 convolutions, then 3 affines)
        pre_node_nums = np.array([3*3*3,
                                  64*3*3, 64*3*3,
                                  128*3*3, 128*3*3,
                                  256*3*3, 256*3*3, 256*3*3,
                                  512*3*3, 512*3*3, 512*3*3, 512*3*3, 512*3*3,
                                  512*size*size, hidden_size, hidden_size])
        weight_init_scales = np.sqrt(2.0 / pre_node_nums)  # recommended initial values when using ReLU (He initialization)
        print(weight_init_scales)
        self.params = {}
        pre_channel_num = input_dim[0]
        # ------------------------------------------------------------------------------
        print('channels : ', input_dim[0])
        print('output size : ', output_size)
        print('HEIGHT 1 : ', ( (input_dim[1] + 2*con_p_1['pad']-con_p_1['filter_size'] ) / con_p_1['stride'] ) + 1 )
        print('WIDTH 1 : ', ( (input_dim[2] + 2*con_p_1['pad']-con_p_1['filter_size'] ) / con_p_1['stride'] ) + 1 )
        # ------------------------------------------------------------------------------
        for idx, con_p in enumerate([con_p_1, con_p_2, con_p_3, con_p_4, con_p_5, con_p_6, con_p_7, con_p_8, con_p_9, con_p_10, con_p_11, con_p_12, con_p_13]):
            # weight : init by random
            self.params['W' + str(idx+1)] = weight_init_scales[idx] * np.random.randn(con_p['filter_num'], pre_channel_num, con_p['filter_size'], con_p['filter_size'])
            # bias : init by zero
            self.params['b' + str(idx+1)] = np.zeros(con_p['filter_num'])
            pre_channel_num = con_p['filter_num']
        self.params['W14'] = weight_init_scales[13] * np.random.randn(512*size*size, hidden_size)
        self.params['b14'] = np.zeros(hidden_size)
        self.params['W15'] = weight_init_scales[14] * np.random.randn(hidden_size, hidden_size)
        self.params['b15'] = np.zeros(hidden_size)
        self.params['W16'] = weight_init_scales[15] * np.random.randn(hidden_size, output_size)
        self.params['b16'] = np.zeros(output_size)
        # ------------------------------------------------------------------------------
        ## VGG16 layers
        self.layers = []
        ## 1 [64]
        self.layers.append(Convolution(self.params['W1'], self.params['b1'], con_p_1['stride'], con_p_1['pad']))
        ## 2
        self.layers.append(Relu())
        ## 3 [64]
        self.layers.append(Convolution(self.params['W2'], self.params['b2'], con_p_2['stride'], con_p_2['pad']))
        ## 4
        self.layers.append(Relu())
        ## 5
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        ## 6 [128]
        self.layers.append(Convolution(self.params['W3'], self.params['b3'], con_p_3['stride'], con_p_3['pad']))
        ## 7
        self.layers.append(Relu())
        ## 8 [128]
        self.layers.append(Convolution(self.params['W4'], self.params['b4'], con_p_4['stride'], con_p_4['pad']))
        ## 9
        self.layers.append(Relu())
        ## 10
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        ## 11 [256]
        self.layers.append(Convolution(self.params['W5'], self.params['b5'], con_p_5['stride'], con_p_5['pad']))
        ## 12
        self.layers.append(Relu())
        ## 13 [256]
        self.layers.append(Convolution(self.params['W6'], self.params['b6'], con_p_6['stride'], con_p_6['pad']))
        ## 14
        self.layers.append(Relu())
        ## 15 [256]
        self.layers.append(Convolution(self.params['W7'], self.params['b7'], con_p_7['stride'], con_p_7['pad']))
        ## 16
        self.layers.append(Relu())
        ## 17
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        ## 18 [512]
        self.layers.append(Convolution(self.params['W8'], self.params['b8'], con_p_8['stride'], con_p_8['pad']))
        ## 19
        self.layers.append(Relu())
        ## 20 [512]
        self.layers.append(Convolution(self.params['W9'], self.params['b9'], con_p_9['stride'], con_p_9['pad']))
        ## 21
        self.layers.append(Relu())
        ## 22 [512]
        self.layers.append(Convolution(self.params['W10'], self.params['b10'], con_p_10['stride'], con_p_10['pad']))
        ## 23
        self.layers.append(Relu())
        ## 24
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        ## 25 [512]
        self.layers.append(Convolution(self.params['W11'], self.params['b11'], con_p_11['stride'], con_p_11['pad']))
        ## 26
        self.layers.append(Relu())
        ## 27 [512]
        self.layers.append(Convolution(self.params['W12'], self.params['b12'], con_p_12['stride'], con_p_12['pad']))
        ## 28
        self.layers.append(Relu())
        ## 29 [512]
        self.layers.append(Convolution(self.params['W13'], self.params['b13'], con_p_13['stride'], con_p_13['pad']))
        ## 30
        self.layers.append(Relu())
        ## 31
        self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2))
        ## 32
        self.layers.append(Affine(self.params['W14'], self.params['b14']))
        ## 33
        self.layers.append(Relu())
        ## 34
        self.layers.append(Dropout(0.5))
        ## 35
        self.layers.append(Affine(self.params['W15'], self.params['b15']))
        ## 36
        self.layers.append(Relu())
        ## 37
        self.layers.append(Dropout(0.5))
        ## 38
        self.layers.append(Affine(self.params['W16'], self.params['b16']))
        ## 39
        self.last_layer = SoftmaxWithLoss()
    # ------------------------------------------------------------------------------
    def predict(self, x, train_flg=False):
        for layer in self.layers:
            if isinstance(layer, Dropout):
                x = layer.forward(x, train_flg)
            else:
                x = layer.forward(x)
        return x
    # ------------------------------------------------------------------------------
    def loss(self, x, t):
        y = self.predict(x, train_flg=True)
        return self.last_layer.forward(y, t)
    # ------------------------------------------------------------------------------
    ## x : image
    ## t : label
    ## returns the number of correct answers for one randomly sampled mini-batch
    ## of `batch_size` images (the caller converts this count to a percentage)
    def accuracy(self, x, t, batch_size=100):
        if t.ndim != 1 : t = np.argmax(t, axis=1)
        acc = 0.0
        # for i in range(int(x.shape[0] / batch_size)):
        for i in range(1):
            # tx = x[i*batch_size:(i+1)*batch_size]
            # tt = t[i*batch_size:(i+1)*batch_size]
            batch_mask = np.random.choice(x.shape[0], batch_size)
            print('accuracy batch mask :', batch_mask, end='\t| ')
            tx = x[batch_mask]
            tt = t[batch_mask]
            print('answer label :', tt, end=' | ')
            y = self.predict(tx)
            y = np.argmax(y, axis=1)
            print('y :', y, end=' | ')
            acc += np.sum(y == tt)
            print('accuracy :', np.sum(y == tt), '/ ', batch_size, flush=True)
        # return acc / x.shape[0]
        return acc
    # ------------------------------------------------------------------------------
    def gradient(self, x, t):
        # forward
        self.loss(x, t)
        # backward
        dout = 1
        dout = self.last_layer.backward(dout)
        tmp_layers = self.layers.copy()
        tmp_layers.reverse()
        for layer in tmp_layers:
            dout = layer.backward(dout)
        ## Convolution, Affine
        ## 1, 3, 6, 8, 11, 13, 15, 18, 20, 22, 25, 27, 29, 32, 35, 38
        ## >> 0, 2, 5, 7, 10, 12, 14, 17, 19, 21, 24, 26, 28, 31, 34, 37
        grads = {}
        for i, layer_idx in enumerate((0, 2, 5, 7, 10, 12, 14, 17, 19, 21, 24, 26, 28, 31, 34, 37)):
            grads['W' + str(i+1)] = self.layers[layer_idx].dW
            grads['b' + str(i+1)] = self.layers[layer_idx].db
        return grads
    # ------------------------------------------------------------------------------
    def save_params(self, file_name="params.pkl"):
        params = {}
        for key, val in self.params.items():
            params[key] = val
        with open(file_name, 'wb') as f:
            pickle.dump(params, f)
    # ------------------------------------------------------------------------------
    def load_params(self, file_name="params.pkl"):
        with open(file_name, 'rb') as f:
            params = pickle.load(f)
        for key, val in params.items():
            self.params[key] = val
        for i, layer_idx in enumerate((0, 2, 5, 7, 10, 12, 14, 17, 19, 21, 24, 26, 28, 31, 34, 37)):
            self.layers[layer_idx].W = self.params['W' + str(i+1)]
            self.layers[layer_idx].b = self.params['b' + str(i+1)]
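As a quick sanity check (a minimal sketch, not part of the article's scripts; it assumes the book's common package is importable and uses a small 64x64 input to keep memory modest), the network can be constructed and run forward once:

# Smoke test: construct the network and run one forward pass.
import numpy as np
from vgg16 import Vgg16

# 64 x 64 halves cleanly through the 5 pooling layers: 64 -> 32 -> 16 -> 8 -> 4 -> 2
network = Vgg16(input_dim=(3, 64, 64), output_size=2)
x = np.random.rand(2, 3, 64, 64)  # a mini-batch of 2 random images
scores = network.predict(x)
print(scores.shape)               # expected: (2, 2) = (batch, categories)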
Summary
- We implemented a 39-layer network
- We implemented it so that memory usage stays low
Working through the book's implementations and ending up at a VGG16(?) where everything is visible made me think this is a good book.
Next, we will run it with various hyperparameter values and collect the results.