머신러닝스터디/2016/2016_07_09 (Machine Learning Study, 2016-07-09)

Notes

  • Embedding needs word indices as its input.
    • Initially we fed word frequencies from the Tokenizer as the input, but the model did not learn well.
    • Embedding one-hot encodes each word (the presence of that word index) as a one-dimensional array and maps it to a dense vector.
      • Question: isn't word frequency taken into account? (see the sketch after the snippet below)

# First attempt: collapse each review into a single word-frequency vector.
tokenizer = Tokenizer(nb_words=1000)
X_train = tokenizer.sequences_to_matrix(X_train, mode="freq")
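
To the question above: with the index-sequence input, word frequency is not a separate feature; the Embedding layer only sees a sequence of integer word indices, and a repeated word simply shows up several times in the sequence. Below is a minimal sketch of the input the Embedding layer expects (the print lines are ours, for inspection only), assuming the IMDB data is loaded as in the code section further down.

from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences

(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=1000)

# Each review becomes a fixed-length row of integer word indices in [0, nb_words);
# this 2-D integer array is what the Embedding layer takes as input.
X_train = pad_sequences(X_train, maxlen=1000)
print(X_train.shape)    # (number of reviews, 1000)
print(X_train[0][:10])  # first ten entries; zeros are padding for short reviews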

  • optimizer
    • We used adamax, but accuracy stayed in the 50% range (a sketch for swapping optimizers follows below).
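
One thing to try is passing an optimizer object instead of the "adamax" string, so the learning rate can be tuned explicitly. This is only a sketch of the Keras call, not something we tested; RMSprop and the lr value are arbitrary choices, and model refers to the network defined in the code section below.

from keras.optimizers import RMSprop

# Explicit optimizer object with a tunable learning rate,
# in place of optimizer="adamax" / optimizer="adagrad".
model.compile(loss="binary_crossentropy",
              optimizer=RMSprop(lr=0.001),
              metrics=["accuracy"])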

Code

import keras
import numpy as np
from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM

# Keep only the 1,000 most frequent words; each review is a
# variable-length list of integer word indices.
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=1000)

# Pad (or truncate) every review to exactly 1,000 indices so the
# Embedding/LSTM input has a fixed length.
X_train = pad_sequences(X_train, maxlen=1000)
X_test = pad_sequences(X_test, maxlen=1000)

model = Sequential()
# Map each of the 1,000 possible word indices to a 64-dimensional vector.
model.add(Embedding(1000, 64, input_length=1000))
# A single LSTM layer summarizes each review into a 32-dimensional vector.
model.add(LSTM(output_dim=32, activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dense(16, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(8, activation="relu"))
model.add(Dropout(0.5))
# Single sigmoid unit: probability that the review is positive.
model.add(Dense(1, activation="sigmoid"))

model.compile(loss="binary_crossentropy", optimizer="adagrad", metrics=["accuracy"])

model.fit(X_train, y_train, batch_size=500, nb_epoch=100)
model.evaluate(X_test, y_test, batch_size=1000)
pred = model.predict(X_test, batch_size=20000)

# Compare a few predicted probabilities against the true labels.
print(pred[0], y_test[0])
print(pred[1], y_test[1])
print(pred[2], y_test[2])
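
For a quick sanity check beyond the three prints, the sigmoid outputs can be thresholded at 0.5 to get hard 0/1 labels and an accuracy computed by hand. A minimal sketch (pred_labels and manual_acc are our names, not from the session; evaluate above already reports the same accuracy):

# Threshold the predicted probabilities and compare with the true labels.
pred_labels = (pred > 0.5).astype("int32").ravel()
manual_acc = np.mean(pred_labels == np.asarray(y_test))
print("manual test accuracy:", manual_acc)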

For next time

  • Watch the Coursera videos for week 7
