| venue (string; 9 distinct values) | original_openreview_id (string; 8–17 chars) | revision_openreview_id (string; 8–11 chars) | content (string; 2–620k chars) | time (date; 2016-11-04 05:38:56 to 2025-05-23 04:52:50) |
|---|---|---|---|---|
| ICLR.cc/2018/Conference | H132zmZCW | BkKm77Z0b | [] | 2017-10-27 21:20:33 |
| ICLR.cc/2018/Conference | BkKm77Z0b | HJQDXX-AW | [] | 2017-10-27 21:21:31 |
| ICLR.cc/2018/Conference | HJQDXX-AW | BkhF47-Rb | [] | 2017-10-27 21:26:27 |
| ICLR.cc/2018/Conference | BkhF47-Rb | SypmMG-0Z | [] | 2018-01-25 15:40:23 |
| ICLR.cc/2018/Conference | S1Cfl7bAZ | HkixKv67z |
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. Shi et al. (2016) and Belinkov et al. (2017) demonstrate that neural ma- ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'beneficial. Some recent work has addressed this by learning general-purpose sentence representations (Kiros ', 'modified_lines': 'et al., 2015; Wieting et al., 2015; Hill et al., 2016; Conneau et al., 2017; McCann et al., 2017; Jernite et al., 2017; Nie et al., 2017; Pagliardini et al., 2017). However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal. ', 'original_lines': 'et al., 2015; Hill et al., 2016; Conneau et al., 2017; McCann et al., 2017; Jernite et al., 2017; Nie et al., 2017). However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 MULTI-TASK TRAINING SETUP', 'after_section': None, 'context_after': 'Model details can be found in section 7 in the Appendix. ', 'paragraph_idx': 24, 'before_section': '4.1 MULTI-TASK TRAINING SETUP', 'context_before': 'perform αi ∗ N parameter updates on task i before selecting a new task at random proportional to the training ratios, where N is a predetermined constant. ', 'modified_lines': 'We take a simpler approach and pick a new sequence-to-sequence task to train on after every param- eter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below. ', 'original_lines': 'We take a simpler approach and pick a new task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to- sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 23}, {'section': '5.1 EVALUATION STRATEGY', 'after_section': '5.1 EVALUATION STRATEGY', 'context_after': 'The choice of transfer tasks and evaluation framework2 are borrowed largely from Conneau et al. (2017). We provide a condensed summary of the tasks in section 10 in the Appendix but refer readers to their paper for a more detailed description. Model Transfer approaches ', 'paragraph_idx': 33, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks (Faruqui & Dyer, 2014; Tsvetkov et al., 2015). MR CR SUBJ MPQA SST TREC MRPC SICK-R SICK-E STSB ∆ ', 'original_lines': 'updating the parameters our sentence representation model. We also consider such a transfer learn- ing evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks (Faruqui & Dyer, 2014; Tsvetkov et al., 2015). 
MR CR SUBJ MPQA SST TREC MRPC SICK-R SICK-E STSB ', 'after_paragraph_idx': 34, 'before_paragraph_idx': None}, {'section': '5.1 EVALUATION STRATEGY', 'after_section': '5.1 EVALUATION STRATEGY', 'context_after': 'Infersent (SST) Infersent (SNLI) Infersent (AllNLI) Our Models +STN +Fr +De +STN +Fr +De +NLI +STN +Fr +De +NLI +L +STN +Fr +De +NLI +L +STP +STN +Fr +De +NLI +2L +STP 70.8 71.8 64.7 76.5 79.4 - - - - (*) 79.9 81.1 80.3 81.2 81.7 82.2 82.4 Approaches trained from scratch on these tasks ', 'paragraph_idx': 36, 'before_section': '5.1 EVALUATION STRATEGY', 'context_before': 'DiscSent + BiGRU DiscSent + unigram DiscSent + embed ', 'modified_lines': 'Byte mLSTM +STN +STN +Fr +De +NLI +L +STP +Par 77.8 86.9 78.9 81.6 ', 'original_lines': ' +STN +Fr +De +NLI +L +STP +Par 77.5 81.6 ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 36}, {'section': '5.1 EVALUATION STRATEGY', 'after_section': None, 'context_after': '79.4 83.1 - - - 78.4 76.7 70.1 80.1 83.1 - - - - 83.7 84.6 86.3 85.1 86.4 87.3 87.8 88.6 81.8 86.3 - - - ', 'paragraph_idx': 36, 'before_section': None, 'context_before': 'TF-KLD Illinois LH Dependency tree LSTM ', 'modified_lines': 'Neural Semantic Encoder BLSTM-2DCNN - 82.3 82.1 91.4 85.8 87.6 - - ', 'original_lines': '80.8 87.6 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 EVALUATION STRATEGY', 'after_section': None, 'context_after': 'SENNA NMT En-Fr Skipgram GoogleNews FastText Multilingual Our Model ', 'paragraph_idx': 36, 'before_section': None, 'context_before': 'Dim V143 ', 'modified_lines': 'SIMLEX WS353 YP130 MTurk771 RG65 MEN QVEC GloVe6B GloVe840B Charagram Attract-Repel ', 'original_lines': 'SIMLEX WS353 RW YP130 MTurk771 RG65 MEN QVEC Glove6B ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '0.62 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.59 0.66 ', 'modified_lines': '', 'original_lines': ' 0.54∗ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'after_section': None, 'context_after': 'Model ', 'paragraph_idx': 41, 'before_section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'context_before': 'Table 3: Evaluation of word embeddings. All results were computed using Faruqui & Dyer (2014) with the exception of the Skipgram and NMT embeddings which were obtained from Jastrzebski et al. (2017)4. We also ', 'modified_lines': 'report QVEC benchmarks (Tsvetkov et al., 2015). Result ', 'original_lines': 'report QVEC benchmarks (Tsvetkov et al., 2015). ∗ our embeddings have 1040 pairs out of 2034 for which atleast one of the words is OOV, so a comparison with other embeddings isn’t fair on RW. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 41}, {'section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'after_section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'context_after': 'Shen et al. (2017) and The last 4 rows are our experiments using Infersent (Conneau et al., 2017) and our models. layer (+2L) also lead to improved transfer performance. We observe gains of 1.1-2.3% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent and smaller improvement of 0.5% on SST. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. 
We see similar gains (2.3%) on paraphrase identification (MPRC), closing the gap on supervised approaches trained from In Table 4, we show that simply training an MLP on top of our fixed sentence representations out- performs several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model (Parikh et al., 2016). When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by Shen et al. (2017) in this low-resource setting. et al., 2016) on the benchmarks presented by Faruqui & Dyer (2014) and Tsvetkov et al. (2015). In Table 5, we probe our sentence representations to determine if certain sentence characteristics ', 'paragraph_idx': 42, 'before_section': None, 'context_before': '87.01 Table 4: Supervised & low-resource classification accuracies on the Quora duplicate question ', 'modified_lines': 'dataset. Accuracies are reported corresponding to the number of training examples used. The first 6 rows are taken from Wang et al. (2017), the next 4 are from Tomar et al. (2017), the next 5 from 5.2 EXPERIMENTAL RESULTS & DISCUSSION Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representation. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section 9 of the Appendix. It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity our sentence encoder with more hidden units (+L) as well as an additional 7 Under review as a conference paper at ICLR 2018 Model Length Content Order Passive Tense TSS Majority Baseline Infersent (AllNLI) Skipthought Our Models Skipthought +Fr +De Skipthought +Fr +De +NLI Skipthought +Fr +De +NLI +Par Sentence characteristics Syntatic properties 22.0 75.8 - 86.8 88.1 93.7 50.0 75.8 - 75.7 75.7 75.5 50.0 75.1 - 81.1 81.5 83.1 81.3 92.1 94.0 97.0 96.7 98.0 77.1 93.3 96.5 97.0 96.8 97.6 56.0 70.4 74.7 77.1 77.1 80.7 Table 5: Evaluation of sentence representations by probing for certain sentence characteristics and syntactic properties. Sentence length, word content & word order from Adi et al. (2016) and sentence active/passive, tense and top level syntactic sequence (TSS) from Shi et al. (2016). Numbers reported are the accuracy with which the models were able to predict certain characteristics. scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK- R) and entailment (SICK-E) consistent with observations made by Bowman et al. (2016). Unlike Conneau et al. (2017), who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table 3 we observe that the learned word em- beddings are competitive with popular methods such as GloVe, word2vec, and fasttext (Bojanowski ', 'original_lines': 'dataset. Accuracies are reported corresponding to the number of training examples used. The first 6 rows are taken from Wang et al. (2017), the next 4 are from Tomar et al. (2017) and the next 5 from scratch. 
The addition of constituency parsing improvements performance on sentence relatedness (SICK-R) and entailment (SICK-E) consistent with observations made by Bowman et al. (2016). 7 Under review as a conference paper at ICLR 2018 Model Length Content Order Passive Tense TSS Majority Baseline Infersent (AllNLI) Skipthought Our Models Skipthought +Fr +De Skipthought +Fr +De +NLI Skipthought +Fr +De +NLI +Par Sentence characteristics Syntatic properties 22.0 75.8 - 86.8 88.1 93.7 50.0 75.8 - 75.7 75.7 75.5 50.0 75.1 - 81.1 81.5 83.1 81.3 92.1 94.0 97.0 96.7 98.0 77.1 93.3 96.5 97.0 96.8 97.6 56.0 70.4 74.7 77.1 77.1 80.7 Table 5: Evaluation of sentence representations by probing for certain sentence characteristics and syntactic properties. Sentence length, word content & word order from Adi et al. (2016) and sentence active/passive, tense and top level syntactic sequence (TSS) from Shi et al. (2016). Numbers reported are the accuracy with which the models were able to predict certain characteristics. Unlike Conneau et al. (2017), who use pretrained glove word embeddings, we learn our word em- beddings from scratch. Somewhat surprisingly, in Table 3 we observe that the learned word em- beddings are competitive with popular methods such as glove, word2vec, and fasttext (Bojanowski ', 'after_paragraph_idx': 42, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'allow the application of state-of-the-art generative models of images such as that of Nguyen et al. (2016) to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'In future work, we would like understand and interpret the inductive biases that our model learns and observe how it changes with the addition of different tasks beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could ', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735–1780, 1997. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'from unlabelled data. arXiv preprint arXiv:1602.03483, 2016. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of machine learning ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 
', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '8 VOCABULARY EXPANSION & REPRESENTATION POOLING In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance. ', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 EVALUATION STRATEGY', 'after_section': None, 'context_after': 'In tables 3 and 5 we do not concatenate the representations of multiple models. ', 'paragraph_idx': 36, 'before_section': None, 'context_before': 'German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par. ', 'modified_lines': '+STN +Fr +De : The sentence representation hx is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors. 12 Under review as a conference paper at ICLR 2018 +STN +Fr +De +NLI : The sentence representation hx is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI. +STN +Fr +De +NLI +L : The sentence representation hx is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI. +STN +Fr +De +NLI +L +STP : The sentence representation hx is the concatenation of the fi- nal hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP. +STN +Fr +De +NLI +2L +STP : The sentence representation hx is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP. +STN +Fr +De +NLI +L +STP +Par : The sentence representation hx is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par. ', 'original_lines': '+STN +Fr +De : A concatenation of the representations trained on these tasks with a unidirectional and bidirectional GRU with 1500 hidden units each. +STN +Fr +De +NLI : A concatenation of the representations trained on these tasks with a bidirec- tional GRU and one trained without NLI, each with 1500 hidden units. +STN +Fr +De +NLI +L : A concatenation of the representations trained on these tasks with a bidirectional GRU and one trained without NLI, each with 2048 hidden units. +STN +Fr +De +NLI +L +STP : A concatenation of the representations trained on these tasks with a bidirectional GRU and one trained without STP, each with 2048 hidden units. +STN +Fr +De +NLI +2L +STP : A concatenation of the representations trained on these tasks with a 2-layer bidirectional GRU and a 1-layer model trained without STP, each with 2048 hidden units. 
+STN +Fr +De +NLI +L +STP +Par : A concatenation of the representations trained on these tasks with a bidirectional GRU and a model trained without Par, each with 2048 hidden units. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'identify if two sentences are paraphrases of each other. The evaluation metric is classification accu- racy and F1. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We also evaluate on pairwise text classification tasks such as paraphrase identification on the Mi- crosoft Research Paraphrase Corpus (MRPC) corpus. This is a binary classification problem to ', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2018-01-05 21:06:59 |
| ICLR.cc/2018/Conference | HkixKv67z | HJvf5OTmM |
[{'section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'after_section': None, 'context_after': 'Model ', 'paragraph_idx': 41, 'before_section': None, 'context_before': '0.54 Table 3: Evaluation of word embeddings. All results were computed using Faruqui & Dyer (2014) with the ', 'modified_lines': 'exception of the Skipgram, NMT, Charagram and Attract-Repel embeddings. Skipgram and NMT results were obtained from Jastrzebski et al. (2017)4. Charagram and Attract-Repel results were taken from Wieting et al. (2016) and Mrkˇsi´c et al. (2017) respectively. We also report QVEC benchmarks (Tsvetkov et al., 2015) ', 'original_lines': 'exception of the Skipgram and NMT embeddings which were obtained from Jastrzebski et al. (2017)4. We also report QVEC benchmarks (Tsvetkov et al., 2015). Result ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity our sentence encoder with more hidden units (+L) as well as an additional ', 'modified_lines': '', 'original_lines': 'layer (+2L) also lead to improved transfer performance. We observe gains of 1.1-2.3% on the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'after_section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'context_after': 'sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent and smaller improvement of 0.5% on SST. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'tense and top level syntactic sequence (TSS) from Shi et al. (2016). Numbers reported are the accuracy with which the models were able to predict certain characteristics. ', 'modified_lines': 'layer (+2L) also lead to improved transfer performance. We observe gains of 1.1-2.3% on the ', 'original_lines': '', 'after_paragraph_idx': 38, 'before_paragraph_idx': None}]
| 2018-01-05 22:19:58 |
| ICLR.cc/2018/Conference | HJvf5OTmM | B1EbxMWAW | [] | 2018-01-25 15:40:28 |
| ICLR.cc/2018/Conference | B1EbxMWAW | rkDAFb0DG |
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'et al. (2016) also present evidence that sequence-to-sequence parsers (Vinyals et al., 2015) more strongly encode source language syntax. Similarly, Adi et al. (2016) probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded. To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, nat- ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. Shi et al. (2016) and Belinkov et al. (2017) demonstrate that neural ma- chine translation (NMT) systems appear to capture morphology and some syntactic properties. Shi ', 'modified_lines': ' ∗Work done while author was an intern at Microsoft Research Montreal 1 Published as a conference paper at ICLR 2018 ', 'original_lines': ' 1 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': 'Abstract', 'after_section': None, 'context_after': 'their use of an attention mechanism prevents learning a fixed-length vector representation for a sen- tence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like McCann et al. (2017), ', 'modified_lines': '', 'original_lines': ' 2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks (Faruqui & ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We follow a similar evaluation protocol to those presented in Kiros et al. (2015); Hill et al. (2016); Conneau et al. 
(2017) which is to use our learned representations as features for a low complex- ity classifier (typically linear) on a novel supervised task/domain unseen during training without ', 'modified_lines': '', 'original_lines': ' 5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'after_section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'context_after': '+STN +Fr +De +NLI +L +STP +Par 70.8 71.8 ', 'paragraph_idx': 44, 'before_section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'context_before': '+STN +Fr +De +NLI +STN +Fr +De +NLI +L +STN +Fr +De +NLI +L +STP ', 'modified_lines': '+STN +Fr +De +NLI +2L +STP ', 'original_lines': '+STN +Fr +De +NLI +2L +STP ', 'after_paragraph_idx': 44, 'before_paragraph_idx': 44}, {'section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'after_section': '5.2 EXPERIMENTAL RESULTS & DISCUSSION', 'context_after': 'Embedding ', 'paragraph_idx': 44, 'before_section': None, 'context_before': 'performing transfer model on a given task. Underlines are used for each task to indicate both our best performing model as well as the best performing transfer model that isn’t ours. ', 'modified_lines': 'cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table 7). We also present qualitative analysis of our learned representations by visualizations using dimen- sionality reduction techniques (Figure 1) and nearest neighbor exploration (Appendix Table 8). Fig- ure 1 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by Zhang et al. (2015). Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of Kiros et al. (2015) on TREC and SUBJ. Appendix Table 8 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together. 7 Published as a conference paper at ICLR 2018 ', 'original_lines': '2https://www.github.com/facebookresearch/SentEval 4https://github.com/kudkudak/word-embeddings-benchmarks/wiki 6 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 44, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '6 CONCLUSION & FUTURE WORK We present a multi-task framework for learning general-purpose fixed-length sentence representa- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Shen et al. (2017) and The last 4 rows are our experiments using Infersent (Conneau et al., 2017) and our models. ', 'modified_lines': '', 'original_lines': '5.2 EXPERIMENTAL RESULTS & DISCUSSION Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representation. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section 9 of the Appendix. It is evident from Table 2 that adding more tasks improves the transfer performance of our model. 
Increasing the capacity our sentence encoder with more hidden units (+L) as well as an additional 7 Under review as a conference paper at ICLR 2018 Model Length Content Order Passive Tense TSS Majority Baseline Infersent (AllNLI) Skipthought Our Models Skipthought +Fr +De Skipthought +Fr +De +NLI Skipthought +Fr +De +NLI +Par Sentence characteristics Syntatic properties 22.0 75.8 - 86.8 88.1 93.7 50.0 75.8 - 75.7 75.7 75.5 50.0 75.1 - 81.1 81.5 83.1 81.3 92.1 94.0 97.0 96.7 98.0 77.1 93.3 96.5 97.0 96.8 97.6 56.0 70.4 74.7 77.1 77.1 80.7 Table 5: Evaluation of sentence representations by probing for certain sentence characteristics and syntactic properties. Sentence length, word content & word order from Adi et al. (2016) and sentence active/passive, tense and top level syntactic sequence (TSS) from Shi et al. (2016). Numbers reported are the accuracy with which the models were able to predict certain characteristics. layer (+2L) also lead to improved transfer performance. We observe gains of 1.1-2.3% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent and smaller improvement of 0.5% on SST. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MPRC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK- R) and entailment (SICK-E) consistent with observations made by Bowman et al. (2016). In Table 4, we show that simply training an MLP on top of our fixed sentence representations out- performs several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model (Parikh et al., 2016). When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by Shen et al. (2017) in this low-resource setting. Unlike Conneau et al. (2017), who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table 3 we observe that the learned word em- beddings are competitive with popular methods such as GloVe, word2vec, and fasttext (Bojanowski et al., 2016) on the benchmarks presented by Faruqui & Dyer (2014) and Tsvetkov et al. (2015). In Table 5, we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by Adi et al. (2016) and Shi et al. (2016). We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing. In Appendix Table 6, we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table 7). 
We also present qualitative analysis of our learned representations by visualizations using dimen- sionality reduction techniques (Figure 1) and nearest neighbor exploration (Appendix Table 8). Fig- ure 1 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by Zhang et al. (2015). Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of Kiros et al. (2015) on TREC and SUBJ. Appendix Table 8 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together. 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 CONCLUSION & FUTURE WORK', 'after_section': None, 'context_after': 'REFERENCES Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. Fine-grained analysis ', 'paragraph_idx': 56, 'before_section': None, 'context_before': '(2016) to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model. ', 'modified_lines': 'ACKNOWLEDGEMENTS The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team (Paszke et al., 2017). We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Qu´ebec - Nature et technologies for funding. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2650–2658, 2015. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'language translation. In ACL (1), pp. 1723–1732, 2015. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 1, pp. 397. NIH Public Access, 2017. ', 'modified_lines': '', 'original_lines': '10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text clas- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. 
In International Conference on Machine Learning, pp. 2048–2057, 2015. ', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '+STN +Fr +De : The sentence representation hx is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par. ', 'modified_lines': '', 'original_lines': '12 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '10.5 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'between their representations. We use the similarity textual similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS dataset contains sentences from a diverse set of data sources. The evaluation criteria is Pearson correlation. ', 'modified_lines': '', 'original_lines': ' 13 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '+STN +Fr +De +NLI +L +STP Supervised Approaches ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'GloVe + WR (U) Charagram-phrase Infersent ', 'modified_lines': '', 'original_lines': '+STN +Fr +De +NLI +STN +Fr +De +NLI +L ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2018-02-23 21:38:23 |
| ICLR.cc/2018/Conference | BkrZAugRW | SycP-KxRZ | [] | 2017-10-27 09:50:25 |
| ICLR.cc/2018/Conference | SycP-KxRZ | r1bWMFlA- | [] | 2017-10-27 09:52:57 |
| ICLR.cc/2018/Conference | r1bWMFlA- | rkr7kM-Cb |
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT Decades of research on the neural code underlying spatial navigation have re- ', 'modified_lines': 'vealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the mechanisms and functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on veloc- ity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. To- gether, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits. ', 'original_lines': 'vealed a diverse set of neural response properties. The Entorhinal Cortex (EC) contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 ERROR CORRECTION AROUND THE BOUNDARY', 'after_section': None, 'context_after': '7 -20-10010Dimension 1-20-100Dimension 2Training iteration 100Training iteration 500Training iteration1000EarlyIntermediateLateabab Under review as a conference paper at ICLR 2018 4 DISCUSSION ', 'paragraph_idx': 28, 'before_section': None, 'context_before': '3.3 ERROR CORRECTION AROUND THE BOUNDARY ', 'modified_lines': 'One natural question is whether the trained RNNs are able to perform localization when the path length exceeds the typical length during training (500 steps), in particular given that noise in the network would gradually accumulate, leading to a decrease in localization performance. We test this by simulating paths of several orders of magnitude longer. Somewhat surprisingly, we find the RNNs still perform well (Figure 6b). In fact, the squared error (averaged over every 10000 steps) Figure 6: Error-correction happens at the boundary and the error is stable over time. 
At the boundary, the direction is re-sampled to avoid input velocities that lead to a path extending beyond the boundary of the environment. These changing input statistics at the boundary, termed a boundary interaction, are the only cue the RNN receives about the boundary. We find that the RNN uses the boundary interactions to correct the accumulated error between the true integrated input and its prediction based on the linear readout of equation (2). Panel a), the mean squared error increases when there are no boundary interactions, but then decreases after a boundary interaction, with more boundary interactions leading to greater error reduction. b) The network was trained using mini-batches of 500 timesteps but has stable error over a duration at least four orders of magnitude larger. The error of the RNN output (mean and standard deviation shown in black, computed based on 10000 timesteps) is compared to the error that would be achieved by an RNN outputting the best constant values (red). appears to be stable. The spatial response profiles of individual units also remain stable. This implies that the RNNs have acquired intrinsic error-correction mechanisms during training. As shown earlier, during training some of the RNN units develop boundary-related firing (Figure 2c), presumably by exploiting the change of input statistics around the boundary. We hypothesize that boundary interactions may enable error-correction through signals based on these boundary-related activities. Indeed, we find that boundary interactions can dramatically reduce the accumulated er- ror (Figure 6a). Figure 6a shows that, without boundary interactions, on average the squared error grows roughly linearly as expected, however, interactions with the boundaries substantially reduce the error, and more frequent boundary interactions can reduce the error further. Error-correction on grid cells via boundary interactions has been proposed (Hardcastle et al., 2015), however, we emphasize that the model proposed here develops the grid-like responses, boundary responses and the error-correction mechanisms all within the same neural network, thus potentially providing a unifying account of a diverse set of phenomena. ', 'original_lines': 'In our model, boundary interactions are crucial for correcting the error that accumulates over time during the spatial localization task (see Figure 6). The boundary also influences the shape of the border cells as shown in Figure 2c. During training, the possible change in input statistics around the boundary is the only cue the animal receives about the boundary, with no extra tactile cues being fed into the network. This points to an interesting Figure 6: Error correction happens at the boundary and leads to stable error over time. At the boundary the heading direction is resampled to avoid input velocities that lead to a path extending beyond the boundary of the environment. These changing input statistics at the boundary, termed a boundary interaction, are the only cue the RNN receives about the boundary. The RNN uses this to correct the accumulated error between the true integrated input and its prediction based on the linear readout of equation 2. As show in panel a) the mean squared error increases when there are no boundary interactions, in agreement with Hardcastle et al. (2015), but then decreases after a boundary interaction, with more boundary interactions leading to greater error correction. 
b) The network was trained using minibatches of 500 timesteps but has stable error over a duration at least four orders of magnitude larger (the longest trajectories we tested). The error of the RNN output (mean and standard deviation shown in black) is compared to the error that would be achieved by an RNN outputting the best constant values (red). hypothesis, that the emergence of border cells relies on the movement statistics of the animal, a quite different mechanism than what has been suggested previously (Solstad et al., 2008; Savelli et al., 2008; Lever et al., 2009). Furthermore, it appears that interactions with the boundary are important for the development of grid-like representations. We did not observe grids when there were no interactions with the bound- ary, and an already formed representation would become unstable if training continued in a larger environment where there were no boundary interactions. These observations suggest a potential role of boundary in the formation of grid responses- a hypothesis that has not been seriously explored. An interesting experimental direction would be to raise an animal in a large environment so the animal never encounters a boundary, and then see if the grids still develop in the brain. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Kiah Hardcastle, Surya Ganguli, and Lisa M Giocomo. Environmental boundaries as an error cor- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Torkel Hafting, Marianne Fyhn, Sturla Molden, May-Britt Moser, and Edvard I Moser. Microstruc- ture of a spatial map in the entorhinal cortex. Nature, 436(7052):801–806, 2005. ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Belle- mare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'dynamics observed during cognitive tasks. eLife, 6:e20899, 2017. ', 'modified_lines': '', 'original_lines': '10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2017-10-27 19:55:08 |
| ICLR.cc/2018/Conference | rkr7kM-Cb | HJndrzZCW |
[{'section': '2.1 MODEL DESCRIPTION', 'after_section': '2.1 MODEL DESCRIPTION', 'context_after': 'connected units (or neurons) which receive two external inputs, representing the animal’s speed and heading direction. The two outputs linearly weight the neurons in the RNN. The goal of training is to make the responses of the two output neurons accurately represent the animal’s physical location. ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'for the brain to solve the navigation task efficiently. More generally, it suggests that RNNs can be a powerful tool for understanding the neural mechanisms of certain high-level cognitive functions. ', 'modified_lines': 'Figure 1: a) Example neural data showing different kinds of neural correlates underlying spatial nav- igation in EC. All figures are replotted from previous publications. From left to right: a “grid cell” recorded when an animal navigates in a square environment, replotted from Krupic et al. (2012), with the heat map representing the firing rate of this neuron as a function of the animal’s location (red corresponds to high firing rate); a “band-like” cell, from Krupic et al. (2012); a border cell, from Solstad et al. (2008); an irregular spatially tuned cell, from Diehl et al. (2017); a “speed cell” from Kropff et al. (2015), which exhibits roughly linear dependence on the rodent’s running speed; a “heading direction cell” from Sargolini et al. (2006), which shows systematic change of firing rate depending on animal’s heading direction. b) The network consists of N = 100 recurrently ', 'original_lines': 'a) Example neural data showing different kinds of neural correlates underlying spa- Figure 1: tial navigation in EC. All figures are replotted from previous publications. From left to right: a “grid cell” recorded when an animal navigates in a square environment, replotted from Krupic et al. (2012), with the heat map representing the firing rate of this neuron as a function of the animal’s location (red corresponds to high firing rate); a “band-like” cell, from (Krupic et al., 2012); a border cell, from (Solstad et al., 2008); an irregular spatially tuned cell, from (Diehl et al., 2017); a “speed cell” from (Kropff et al., 2015), which exhibits roughly linear dependence on the rodent’s running speed; a “heading direction cell” from (Sargolini et al., 2006), which shows systematic change of fir- ing rate depending on animal’s heading direction. b) The network consists of N = 100 recurrently ', 'after_paragraph_idx': 9, 'before_paragraph_idx': None}]
| 2017-10-27 20:22:12 |
| ICLR.cc/2018/Conference | HJndrzZCW | Skm0rf-0b | [] | 2017-10-27 20:23:39 |
| ICLR.cc/2018/Conference | Skm0rf-0b | ByvyJv6mM |
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '(2015); Kietzmann et al. (2017); Yamins & DiCarlo (2016)). 1 Under review as a conference paper at ICLR 2018 Figure 1: a) Example neural data showing different kinds of neural correlates underlying spatial nav- igation in EC. All figures are replotted from previous publications. From left to right: a “grid cell” ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'tures, starting from Hubel and Wiesel’s famous proposal on the origin of orientation selectivity in primary visual cortex (Hubel & Wiesel, 1962). Inspired by the recent development in deep learn- ing (Krizhevsky et al., 2012; LeCun et al., 2015; Hochreiter & Schmidhuber, 1997; Mnih et al., ', 'modified_lines': '2015), there has been a burst of interest in applying deep feed-forward models, in particular convo- lutional neural networks (CNN) (LeCun et al., 1998), to study the sensory systems, which hierar- chically extract useful features from the sensory inputs (see e.g., Yamins et al. (2014); Kriegeskorte For more cognitive tasks, neural systems often need to maintain certain internal representations of relevant variables in the absence of external stimuli- a process that requires more than feature extraction. We will focus on spatial navigation, which typically requires the brain to maintain a representation of self-location and update it according to the animal’s movements and landmarks of the environment. Physiological studies done in rodents and other mammals (including humans, non-human primates and bats) have revealed a variety of neural correlates of space in Hippocampus and Entorhinal Cortex (EC), including place cells (O’Keefe, 1976), grid cells (Fyhn et al., 2004; Hafting et al., 2005; Fyhn et al., 2008; Yartsev et al., 2011; Killian et al., 2012; Jacobs et al., 2013), along with border cells (Solstad et al., 2008), band-like cells (Krupic et al., 2012) and others (see Figure 1a). In particular, each grid cell only fires when the animal occupies a distinct set of physical locations, and strikingly these locations lie on a lattice. The study of the neural underpinning of spatial cognition has provided an important window into how high-level cognitive functions are supported in the brain (Moser et al., 2008; Aronov et al., 2017). How might the spatial navigation task be solved using a network of neurons? Recurrent neural net- works (RNNs) (Hochreiter & Schmidhuber, 1997; Graves et al., 2013; Oord et al., 2016; Theis & Bethge, 2015; Gregor et al., 2015; Sussillo et al., 2015) seem particularly useful for these tasks. Indeed, recurrent-based continuous attractor networks have been one popular type of models pro- posed for the formation of grid cells (McNaughton et al., 2006; Burak & Fiete, 2009; Couey et al., 2013) and place cells (Samsonovich & McNaughton, 1997). Such models have provided valuable insights into one set of possible mechanisms that could support the formation of the grids. How- ever, these models typically rely on fine-tuned connectivity patterns, in particular the models need a subtle yet systematic asymmetry in the connectivity pattern to move the attractor state according to the animal’s own movement. The existence of such a specific 2D connectivity in rodent EC remains unclear. Additionally, previous models have mainly focused on grid cells, while other types of re- sponses that co-exist in the Entorhinal Cortex have been largely ignored. 
It would be useful to have a unified model that can simultaneously explain different types of neural responses in EC. Motivated by these considerations, here we present an alternative modeling approach for under- standing the representation of space in the neural system. Specifically, we trained a RNN to perform some spatial navigation tasks. By leveraging the recent development in RNN training and knowl- edge of the navigation system in the brain, we show that training a RNN with biologically relevant constraints naturally gives rise to a variety of spatial response profiles as observed in EC, including grid-like responses. To our knowledge, this is the first study to show that grid-like responses could emerge from training a RNN to perform navigation. Our result implies that the neural representation in EC may be seen as a natural way for the brain to solve the navigation task efficiently(Wei et al., 2015). More generally, it suggests that RNNs can be a powerful tool for understanding the neural mechanisms of certain high-level cognitive functions. ', 'original_lines': '2015), there has been a burst of interest in applying deep feed-forward models, in particular con- volutional neural networks (CNN) (LeCun et al., 1998), to study the sensory system, which hierar- chically extracts useful features from the sensory input (see e.g., Yamins et al. (2014); Kriegeskorte For more cognitive tasks, neural systems often need to maintain an internal representation of relevant variables without external stimuli- a process that requires more than feature extraction. We will fo- cus on spatial navigation, which typically requires the brain to maintain a representation of location and update it according to the animal’s movements and landmarks of the environment. Physiolog- ical studies done in rodents and other mammals (including humans, non-human primates and bats) have revealed a variety of neural correlates of space in Hippocampus and Entorhinal Cortex (EC), including place cells (O’Keefe, 1976), grid cells (Fyhn et al., 2004; Hafting et al., 2005; Fyhn et al., 2008; Yartsev et al., 2011; Killian et al., 2012; Jacobs et al., 2013), along with border cells (Solstad et al., 2008), band-like cells (Krupic et al., 2012) and others (see Figure 1a). The study of the neural underpinning of spatial cognition has provided an important window into how high-level cognitive functions are supported in the brain (Moser et al., 2008; Aronov et al., 2017). How might the spatial navigation task be solved using a network of neurons? Recurrent neural networks (RNNs) (Hochre- iter & Schmidhuber, 1997; Graves et al., 2013; Oord et al., 2016; Theis & Bethge, 2015; Gregor et al., 2015) seem particularly useful for these tasks. Indeed, recurrent-based continuous attractor networks have been one of the main types of models proposed for the formation of grid cells (Mc- Naughton et al., 2006; Burak & Fiete, 2009; Couey et al., 2013) and place cells (Samsonovich & McNaughton, 1997). However, these models require hand-crafted and fined tuned connectivity pat- terns, and the evidence of such specific 2D connectivity patterns has been largely absent. Here we present a new model for understanding the representation of space in the neural system. Specifically, we trained a RNN to perform spatial navigation tasks. 
By leveraging the recent devel- opment in RNN training and knowledge of the navigation system in the brain, we show that training a RNN with biologically relevant constraints naturally gives rise to a variety of spatial response profiles as observed in Entorhinal Cortex (EC), including grid-like responses. To our knowledge, this is the first study to show how grid-like responses could emerge from training a RNN to perform navigation. Our result implies that the neural representation in EC may be seen as a natural way for the brain to solve the navigation task efficiently. More generally, it suggests that RNNs can be a powerful tool for understanding the neural mechanisms of certain high-level cognitive functions. ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '2.1 MODEL DESCRIPTION', 'after_section': None, 'context_after': 'input from other units through the recurrent weight matrix W rec and also receives external input, I(t), that enters the network through the weight matrix W in. Each unit has two sources of bias, bi which is learned and ξi(t) which represents noise intrinsic to the network and is taken to be Gaussian with zero mean and constant variance. The network was simulated using the Euler method for T = 500 timesteps of duration τ /10. yj(t) = ', 'paragraph_idx': 8, 'before_section': '2.1 MODEL DESCRIPTION', 'context_before': 'for i = 1, . . . , N . The activity of each unit, ui(t), is related to the activation of that unit, xi(t), through a nonlinearity which in this study we take to be ui(t) = tanh(xi(t)). Each unit receives ', 'modified_lines': 'To perform a 2D navigation task with the RNN, we linearly combine the firing rates of units in the network to estimate the current location of the animal. The responses of the two linear readout neurons, y1(t) and y2(t), are given by the following equation: ', 'original_lines': ' 2 InputOutputspeeddirectionx-positiony-positionbaOutput: targetRNNcOutput: RNNPerformancespeedfiring ratedirectionagrid cellband cellborder cellirregular cell Under review as a conference paper at ICLR 2018 To perform a 2D navigation task with the RNN we linearly combine the firing rates of units in the network. The two linear readout neurons, y1(t) and y2(t), are given by the following equation: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 7}, {'section': '2.1 MODEL DESCRIPTION', 'after_section': None, 'context_after': '2.3 TRAINING ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'INPUT TO THE NETWORK ', 'modified_lines': 'The network inputs and outputs were inspired by simple spatial navigation tasks in 2-d open envi- ronment. The task basically resembles dead-reckoning(or sometimes referred as path integration), which are ecologically-relevant for many animal species (Darwin, 1873; Mittelstaedt & Mittelstaedt, 1980; Etienne & Jeffery, 2004; McNaughton et al., 2006). To be more specific, the inputs to the net- work were the animal’s speed and direction at each time step. Experimentally, it has been shown that the velocity signals exist in EC(Sargolini et al., 2006; Kropff et al., 2015; Hinman et al., 2016), and there is also evidence that such signals are necessary for grid formation (Winter et al., 2015a;b). Throughout the paper, we adopt the common assumption that the head direction of the animal coin- cides with the actual moving direction. The outputs were the x- and y-coordinates of the integrated position. 
The direction of the animal is modeled by modified Brownian motion to increase the probability of straight-runs, in order to be consistent with the typical rodent’s behavior in an open environment. The usage of such simple movement statistics has the advantage of having full control of the simulated trajectories. However, for future work it would be very interesting to test the model using different animals’ real movement trajectories to see how the results might change. Special care is taken when the animal is close to the boundary. The boundary of the environment will affect the statistics of the movement, as the animal cannot cross the boundary. This fact was reflected in the model by re-sampling the angular input variable until the input angle did not lead the animal outside the boundary. In the simulations shown below, the animal always starts from the center of the arena, but we verified that the results are insensitive to the starting locations. ', 'original_lines': 'The network inputs and outputs were inspired by simple spatial navigation tasks in open arena. The inputs to the network were chosen to be speed and direction because cells tuned for speed and direction are observed experimentally and these are necessary for grid formation (Winter et al., 2015a;b). Note that throughout the paper, we adopt the common assumption that the head direction of the animal coincides with the actual moving direction. The outputs were the x- and y-coordinates of the integrated position. The direction of the animal is modeled by modified Brownian motion to increase the probability of straight-runs, in order to be consistent with the typical rodent’s behavior in an open environment. Special care is taken when the animal is close to the boundary. The boundary of the environment will affect the statistics of the movement, as the animal cannot cross the boundary. This fact was reflected in the model by re-sampling the angular input variable until the input angle did not lead the animal outside the boundary. In the simulations shown below, the animal always starts from the center of the arena, but we verified that the results are insensitive to the starting locations. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.3 TRAINING', 'after_section': None, 'context_after': 'Figure 2: Different types of spatial selective responses of units in the trained RNN. Example sim- ulation results for three different environments (square, triangular, hexagon) are presented. Blue ', 'paragraph_idx': 17, 'before_section': '2.3 TRAINING', 'context_before': '(5) ', 'modified_lines': 'We find that the results are qualitatively insensitive to the initialization schemes used for the re- current weight matrix W rec. For the results presented in this paper, simulations in the hexagonal environment were obtained by initializing the elements of W rec to be zero mean Gaussian random variables with variance 1.52/N , and simulations in the square and triangular environments were initialized with an orthogonal W rec (Saxe et al., 2014). We initialized the bias b and output weights W out to be zero. The elements of W in were zero mean Gaussian variables with variance 1/Nin. ', 'original_lines': 'The results are qualitatively insensitive to the initialization scheme used for the recurrent weight matrix W rec. 
Simulations in the hexagonal environment were obtained by initializing the elements of W rec to be zero mean Gaussian random variables with variance 1.52/N , and simulations in the square and triangular environments were initialized with an orthogonal W rec (Saxe et al., 2014). We initialized the bias b and output weights W out to be zero. The elements of W in were zero mean Gaussian variables with variance 1/Nin. 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 17}, {'section': '3 RESULTS', 'after_section': None, 'context_after': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS ', 'paragraph_idx': 18, 'before_section': None, 'context_before': '3 RESULTS ', 'modified_lines': 'We run simulation experiments in arenas with different boundary shapes, including square, triangu- lar and hexagonal. Figure 1c shows a typical example of the model performance after training; the network (red trace) accurately tracks the animal’s actual path (black). ', 'original_lines': 'We run simulation experiments in arenas with different boundary shapes, including square, trian- gular and hexagonal. Figure 1c shows a typical example of the model performance after training, which shows the network (red trace) can accurately track the animal’s actual path (black). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'after_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_after': 'Grid-like responses Most interestingly, we find some of the units in the RNN exhibit clear grid-like field exhibiting roughly circular symmetric or ellipse shape. Furthermore, the firing fields are highly response lattice depends on the shape of the boundary. In particular, training the network to perform Experimentally, it is shown that (medial) EC contains so-called grid cells which exhibit multiple firing fields that lie on a regular grid (Fyhn et al., 2004; Hafting et al., 2005). The grid-like firing patterns in our simulation are reminiscent of the grid cells in rodents and other mammals. However, we also notice that the the grid-like model responses typically exhibit few periods, not as many as experimental data (see Figure 1a). It is possible that using a larger network might reveal finer Border responses Many neurons in the RNN exhibit selectivity to the boundary (Figure 2c). Typi- cally, they only encode a portion of the boundary, e.g. one piece of wall in a square shaped environ- ', 'paragraph_idx': 19, 'before_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_before': '3.1.1 SPATIAL TUNING ', 'modified_lines': 'To test whether the trained RNN developed location-selective representations, we plot individual neurons’ mean activity level as a function of the animal’s location during spatial exploration. Note 4 grid-likeband-likeborderirregularabcd Under review as a conference paper at ICLR 2018 that these average response profiles should not be confused with the linear filters typically shown in feedforward networks. Surprisingly, we find neurons in the trained RNN show a range of interesting spatial response profiles. Examination of these response profiles suggests they can be classified into distinct functional types. Importantly, as we will show, these distinct spatial response profiles can be mapped naturally to known physiology in EC. The spatial responses of all units in trained networks are shown in the Appendix. responses (Figure 2a). 
These firing patterns typically exhibit multiple firing fields, with each firing structured, i.e., when combined, are arranged on a regular lattice. Furthermore, the structure of the self-localization in a square environment tends to give rectangular grids. In hexagonal and triangular environments, the grids are closer to triangular. grid-patterns in our model. Nonetheless, it is surprising that the gird-like spatial representations can develop in our model, given there is no periodicity in the input. Another potential concern is that, experimentally it is reported that the grids are often on the corners of a triangular lattice (Hafting et al., 2005) even in square environments (see Figure 1a), though the grids are somewhat influenced by the shape of the environment. However, the rats in these experiments presumable had spatial experience in other environment with various boundary shapes. Experimentally, it would be inter- esting to see if grid cells would lie on a square lattice instead if the rats are raised in a single square environment- a situation we are simulating here. ', 'original_lines': 'To test whether the trained RNN developed such location-selective representations, we plot individ- ual neurons’ mean activity level as a function of the animal’s location during spatial exploration. Note that these average response profiles should not be confused with the linear filters typically shown in feedforward networks. Surprisingly, we find neurons in the trained RNN show a range of interesting spatial response profiles. Examination of these response profiles suggests they can be classified into distinct functional types. Importantly, as we will show, these distinct spatial response profiles can be mapped naturally to known physiology in EC. The spatial responses of all units in trained networks from triangular and hexagonal arenas are shown in the Appendix. response (Figure 2a). These firing patterns typically exhibit multiple firing fields, with each firing structured, i.e.,, when combined, are arranged on a regular lattice. Furthermore, the structure of the self-localization in a square environment tend to give rectangular grids. In hexagonal environment and triangular environment, the grids are more close to hexagonal. 4 grid-likeband-likeborderirregularabcd Under review as a conference paper at ICLR 2018 grid-patterns in our model. Nonetheless, it is surprising that the gird-like spatial representations can develop in our model, given there is no periodicity in the input. Another potential concern is that, experimentally it is reported that the grids are often hexagonal (Hafting et al., 2005) even in square environment (see Figure 1a), though the grids are somewhat influenced by the shape of the environment. However, the rats in these experiments presumable had spatial experience in other environment with various boundary shapes. Experimentally, it would be interesting to see if grid cells would lie on a square lattice instead if the rats are raised in a single square environment- a situation we are simulating here. ', 'after_paragraph_idx': 20, 'before_paragraph_idx': 19}, {'section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'after_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_after': 'Spatially-stable but non-regular responses Besides the units described above, most of the remain- ing units also exhibit stable spatial responses, but they do not belong to the above categories. 
These response profiles can exhibit either one large irregular firing field; or multiple circular firing fields, but these firing fields do not show a regular pattern. Experimentally these type of cells have also been observed. In fact, it is recently reported that the non-grid spatial cells constitute a large portion Figure 3: Direction tuning and speed tuning for nine example units in an RNN trained in a triangular arena. For each unit, we show the spatial tuning, (head) directional tuning, speed tuning respectively, from left to right. a,b,c) The three model neurons show strong directional tuning, but the spatial tuning is weak and irregular. The three neurons also exhibit linear speed tuning. d,e,f) The three neurons exhibit grid-like firing patterns, and clear speed tuning. The strength of their direction speed tuning. i) This band cell shows weak directional tuning, but strong speed tuning. response profiles are shown in Figure 3. Interestingly, we observe that the model border cells tend to have almost zero speed-tuning (e.g., see Figure 3g,h). and their preferred direction. Example tuning curves are shown in Figure 3, and the direction tuning Experimentally, the heading direction tuning in EC is well-known (e.g., Sargolini et al. (2006)). Both the grid and non-grid cells in EC exhibit head direction tuning (Sargolini et al., 2006). Furthermore, the linear speed dependence of the model neurons is similar to the properties of speed cells reported recently in EC (Kropff et al., 2015). Our result is also consistent with another recent study reporting that the majority of neurons in EC exhibit some amount of speed tuning (Hinman et al., 2016). 3.1.3 DEVELOPMENT OF THE TUNING PROPERTIES ', 'paragraph_idx': 24, 'before_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_before': 'statistics along the boundary. Band-like responses Interestingly, some neurons in the RNN exhibit band-like responses (Figure ', 'modified_lines': 'In most of our simulations, these bands tend to be parallel to one of the boundaries. For 2b). some of the units, one of the bands overlaps the boundary, but for others, that is not the case. Experimentally, neurons with periodic-like firing patterns have been recently reported in rodent EC. In one study, it has been reported that a substantial portion of cells in EC exhibit band-like firing characteristics (Krupic et al., 2012). However, we note that based on the reported data in Krupic et al. (2012), the band pattern is not as clear as in our model. of the neurons in Layer II and III of rodent EC (Diehl et al., 2017). 3.1.2 SPEED TUNING AND HEAD DIRECTION TUNING Speed tuning We next ask how the neurons in the RNN are tuned to the inputs. It turns out that many of the model neurons exhibit linear responses to the running speed of the animal, while some neurons show no selectivity to speed, as suggested by the near-flat response functions. Example 5 Under review as a conference paper at ICLR 2018 tuning differ. g,h) Border cells exhibit weak and a bit complex directional tuning and almost no Head direction tuning Furthermore, a substantial portion of the model neurons show direction tuning. There are a diversity of direction tuning profiles, both in terms of the strength of the tuning curves of a complete population are shown in the Appendix. Interestingly, in general model neurons which show the strongest head direction tuning are do not show a clear spatial firing pattern(see Figure 3a,b,c). 
This suggests that there are a group of neurons which are mostly responsible for encoding the direction. We also notice that neurons with clear grid-like firing can exhibit a variety of direction tuning strengths, from weak to strong (Figure 3d,e,f). In the Appendix, we quantify the relation between these different tuning properties at the whole population level, which show somewhat complex dependence. It remains an open question experimentally, at a population level, how different types of tuning characteristics in EC relate to each other. ', 'original_lines': '2b). In most of our simulations, these bands tend to be parallel to one of the boundaries. For some of the units, one of the bands overlaps the boundary, but for others, that’s not the case. Experimentally, neurons with periodic-like firing pattern have been recently reported in rodent EC. In one study, it has been reported that a substantial portion of cells in EC exhibit band-like firing characteristics (Krupic et al., 2012). However, we note that based on the reported data in Krupic et al. (2012), the band pattern is not as clear as in our model. of the neurons in Layer II and III of rodent EC(Diehl et al., 2017). tuning differ.g,h) Border cells exhibit weak and a bit complex directional tuning and almost no 3.1.2 SPEED TUNING AND HEAD DIRECTION TUNING Speed tuning We next ask how the neurons in the RNN are tuned to the inputs. It turns out that many of the model neurons exhibit linear responses to the running speed of the animal, while some 5 directionspeeddirectionspeeddirectionspeedabcdefghi Under review as a conference paper at ICLR 2018 neurons show no selectivity to speed, as suggested by the near-flat response functions. Example Head direction tuning Furthermore, a substantial portion of the model neurons show direction tun- ing. There are a diversity of direction tuning profiles, both in terms of the strength of the tuning curves of a complete population are shown in the Appendix. Interestingly, in general model neu- rons which show the strongest head direction tuning are only weakly spatially selective (see Figure 3a,b,c). This suggests that there are a group of neurons which are mostly responsible for encoding the direction. We also notice that neurons with more grid-like firing can exhibit a variety of direction tuning strengths, from weak to strong (Figure 3d,e,f). It is possible that the direction tuning of these cells comes from the spatially weakly tuned neurons. ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 23}, {'section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'after_section': None, 'context_after': '3.2 THE IMPORTANCE OF REGULARIZATION We find appropriate regularizations of the RNN to be crucial for the emergence of grid-like repre- sentations. We only observed grid-like representations when the network was encouraged to store ', 'paragraph_idx': 20, 'before_section': None, 'context_before': 'intermediate, and late) into a two-dimensional space according to the similarity of their temporal responses. Here the similarity metric is taken to be firing rate correlation. In this 2D space as shown in Figure 4a, border cell representations appear early and stably persist through the end of training. ', 'modified_lines': 'Furthermore, early during training all responses are similar to the border related responses. In con- trast, grid-like cells typically undergo a substantial change in firing pattern during training before settling into their final grid-like representation (Figure 4b). 
The developmental time line of the grid-like cells and border cells is roughly consistent with de- velopmental studies in rodents. Experimentally, it is known that border cells emerge earlier in de- velopment, and they exist at about 2 weeks after the rat is born (Bjerknes et al., 2014). The grid 6 directionspeeddirectionspeeddirectionspeedabcdefghi Under review as a conference paper at ICLR 2018 cells mature only at about 4 weeks after birth (Langston et al., 2010; Wills et al., 2010; Bjerknes et al., 2014). Furthermore, our simulations suggest the reason why border cells emerge earlier in development may be that computationally it may be easier to wire-up a network that gives rise to border cell responses. Figure 4: Development of border cells and grid-like cells. Early during training all responses are similar to the border related responses, and only as training continues do the grid-like cells emerge. We perform dimensionality reduction using the t-SNE algorithm on the firing rates of the neu- rons. Each dot represents one neuron (N = 100), and the color represents different training stages (early/intermediate/late shown in blue/cyan/yellow). Each line shows the trajectory of a single high- lighted neuron as its firing responses evolve during training. In panel a), we highlight the border representation. It appears there are four clusters of border cells, each responding to one wall of a square environment (spatial responses from four neurons are inset). These cells’ response profiles appear early and stably persist through training, illustrated by the short distance they travel in this space. In b), we show that the neurons which eventually become grid cells change their tuning pro- files substantially during learning. As a natural consequence, they need to travel a long distance in this space between the early and late phase of the training. Figure 5: Complete set of spatial response profiles for 100 neurons in a RNN trained in a square environment. a) Without proper regularization, complex and periodic spatial response patterns do not emerge. b) With proper regularization, a rich set of periodic response patterns emerge, including grid-like responses. 7 -20-10010Dimension 1-20-100Dimension 2Training iteration 100Training iteration 500Training iteration1000EarlyIntermediateLateabab Under review as a conference paper at ICLR 2018 ', 'original_lines': 'In contrast, grid-like cells typically undergo a substantial change in firing pattern during training before settling into their final grid-like representation (Figure 4b). The developmental time line of the grid-like cells and border cells is consistent with developmental studies in rodents. Experimentally, it is known that border cells emerge earlier in development, and they exist at about 2 weeks after the rat is born (Bjerknes et al., 2014). The grid cells mature only at about 4 weeks after birth (Langston et al., 2010; Wills et al., 2010; Bjerknes et al., 2014). Furthermore, our simulations suggest the reason why border cells emerge earlier in development is that computationally it may be easier to wire-up a network that gives rise to border cell responses. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 THE IMPORTANCE OF REGULARIZATION', 'after_section': '3.2 THE IMPORTANCE OF REGULARIZATION', 'context_after': 'Our results are consistent with the general notion on the importance of incorporating proper con- straint for learning useful representations in neural networks (Bengio et al., 2013). 
Furthermore, it suggests that, to learn a model with response properties similar to neural systems it may be necessary to incorporate the relevant constraints, e.g., noise and metabolic cost. 3.3 ERROR CORRECTION AROUND THE BOUNDARY Figure 6: Error-correction happens at the boundary and the error is stable over time. At the boundary, the direction is re-sampled to avoid input velocities that lead to a path extending beyond the boundary ', 'paragraph_idx': 31, 'before_section': '3.2 THE IMPORTANCE OF REGULARIZATION', 'context_before': 'zero speed 90% of the time, and adding Gaussian noise to the network (ξi(t) in equation (1)); the precise method for setting the speed input to zero and the value of the noise variance is not crucial for our simulations to develop grid-like representations. The cost function which aims to capture ', 'modified_lines': 'the penalization on the metabolic cost of the neural activity also acts as an important regulariza- tion. Our simulations show that the grid-like representation did not emerge without this metabolic cost. In Figure 5, we show typical simulation results for a square environment, with and without proper metabolic regularization. In the Appendix, we illustrate the effect of regularization further, in particular the role of injecting noise into the RNN units. ', 'original_lines': 'the penalization on the metabolic cost of the neural activity also acts as an important regularization. Our simulations show that the grid-like representation did not emerge without this metabolic cost. Instead, most of the units in the network exhibit border-like responses. In Figure 5, we show typical simulation results for a square environment, with and without proper regularization. 6 Under review as a conference paper at ICLR 2018 Figure 4: Development of border cells and grid-like cells. We perform dimensionality reduction using the t-SNE algorithm on the firing rates of the neurons. Each dot represents one neuron (N = 100), and the color represents different training stages (early/intermediate/late shown in blue/cyan/yellow). Each line shows the trajectory of a single highlighted neuron as its firing re- sponses evolve during training. In panel a), we highlight the border representation. It appears there are four clusters of border cells, each responding to one wall of a square environment (spatial re- sponses from four neurons are inset). Importantly, these cells’ response profiles appear early and stably persist through training, illustrated by the short distance they travel in this space. In b), we show that the neurons which eventually become grid cells change their tuning profiles substantially during learning, as demonstrated by the long distance they travel in the space. Figure 5: Complete set of spatial response profiles for 100 neurons in RNN trained in a square environment. a) Without proper regularization, complex and periodic spatial response patterns do not emerge. b) With proper regularization, a rich set of periodic response patterns emerge, including grid-like responses. One natural question is whether the trained RNNs are able to perform localization when the path length exceeds the typical length during training (500 steps), in particular given that noise in the network would gradually accumulate, leading to a decrease in localization performance. We test this by simulating paths of several orders of magnitude longer. Somewhat surprisingly, we find the RNNs still perform well (Figure 6b). 
In fact, the squared error (averaged over every 10000 steps) 7 -20-10010Dimension 1-20-100Dimension 2Training iteration 100Training iteration 500Training iteration1000EarlyIntermediateLateabab Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 32, 'before_paragraph_idx': 31}, {'section': '3.3 ERROR CORRECTION AROUND THE BOUNDARY', 'after_section': '3.3 ERROR CORRECTION AROUND THE BOUNDARY', 'context_after': 'As shown earlier, during training some of the RNN units develop boundary-related firing (Figure 2c), presumably by exploiting the change of input statistics around the boundary. We hypothesize that ', 'paragraph_idx': 36, 'before_section': '3.3 ERROR CORRECTION AROUND THE BOUNDARY', 'context_before': 'interactions to correct the accumulated error between the true integrated input and its prediction based on the linear readout of equation (2). Panel a), the mean squared error increases when there are no boundary interactions, but then decreases after a boundary interaction, with more boundary ', 'modified_lines': 'interactions leading to greater error reduction. In the absence of further boundary interaction, the squared error would gradually increase again (blue curve) at roughly a constant rate. b) The network was trained using mini-batches of 500 timesteps but has stable error over a duration at least four orders of magnitude larger. The error of the RNN output (mean and standard deviation shown in black, computed based on 10000 timesteps) is compared to the error that would be achieved by an RNN outputting the best constant values (red). One natural question is whether the trained RNNs are able to perform localization when the path length exceeds the typical length used during training (500 steps), in particular given that noise in the network would gradually accumulate, leading to a decrease in localization performance. We test this by simulating paths that are several orders of magnitude longer. Somewhat surprisingly, we find the RNNs still perform well (Figure 6b). In fact, the squared error (averaged over every 10000 steps) is stable. The spatial response profiles of individual units also remain stable. This implies that the RNNs have acquired intrinsic error-correction mechanisms during training. ', 'original_lines': 'interactions leading to greater error reduction. b) The network was trained using mini-batches of 500 timesteps but has stable error over a duration at least four orders of magnitude larger. The error of the RNN output (mean and standard deviation shown in black, computed based on 10000 timesteps) is compared to the error that would be achieved by an RNN outputting the best constant values (red). appears to be stable. The spatial response profiles of individual units also remain stable. This implies that the RNNs have acquired intrinsic error-correction mechanisms during training. ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 36}, {'section': '4 DISCUSSION', 'after_section': '4 DISCUSSION', 'context_after': 'location (Moser et al., 2008). The general agreement between the different responses properties in our model and the neurophysiology provide strong evidence supporting the hypothesis that the neural population in EC may provide an efficient code for representation self-locations based on the velocity input. Recently, there has been increased interest in using complex neural network models to understand Cun et al., 1998). 
Given the abundant recurrent connections in the brain, it seems a particular is that, it is unclear how place cells acquire spatial tuning in the first place. To the contrary, our model takes the animal’s velocity as the input, and addresses the question of how the spatial tuning can be generated from such input, which are known to exist in EC (Sargolini et al., 2006; Kropff LSTM units (Hochreiter & Schmidhuber, 1997) to perform different navigation tasks. However, no Although our model shows a qualitative match to the neural responses observed in the EC, nonethe- less it has several major limitations, with each offering interesting future research directions. First, REFERENCES Dmitriy Aronov, Rhino Nevers, and David W Tank. Mapping of a non-spatial dimension by the hippocampalentorhinal circuit. Nature, 2017. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, ', 'paragraph_idx': 39, 'before_section': '4 DISCUSSION', 'context_before': 'found that after training RNNs with appropriate regularization, the model neurons exhibit a variety of spatial and velocity tuning profiles that match neurophysiology in EC. What’s more, there is also similarity in terms of when these distinct neuron types emerge during training/development. ', 'modified_lines': 'The EC has long been thought to be involved in path integration and localization of the animal’s the neural code. But the focus has been on using feed-forward architectures, in particular CNNs (Le- fruitful avenue to take advantage of the recent development in RNNs to help with neuroscience questions (Mante et al., 2013; Song et al., 2016; Miconi, 2017; Sussillo et al., 2015). Here, we only show one instance following this approach. However, the insight from this work could be general, and potentially useful for other cognitive functions as well. The finding that metabolic constraints lead to the emergence of grid-like responses may be seen as conceptually related to the efficient coding hypothesis in visual processing (Barlow, 1961), in particular the seminal work on the emergence of the V1-like Gabor filters in a sparse coding model by Olshausen & Field (1996). Indeed, our work is partly inspired by these results. While there are conceptual similarities, however, we should also note there are differences between the sparse coding work and ours. First, the sparsity constraint in sparse coding can be naturally viewed as a particular prior while in the context of the recurrent network, it is difficult to interpret that way. Second, the grid-like responses are not the most sparse solution one could imagine. In fact, they are still quite dense comparing to a more spatially localized representation. Third, the grid-like patterns emerged in our network are not filters based on the raw input, rather the velocity inputs need to be integrated first in order to encode spatial locations. Our work is also inspired by recent work using the efficient coding idea to explain the functional architecture of the grid cells (Wei et al., 2015). It has been shown that efficient coding considerations could explain the particular set of grid scales observed in rodents (Stensola et al., 2012). However, in that work, the firing patterns of the neurons are assumed to have a lattice structure to start with. 
Furthermore, our work is related to the study by Sussillo and others (Sussillo et al., 2015), in which they show that regularization of RNN models are important for generating solutions that are similar to the neural activity observed in motor cortex. In Sussillo et al., a smoothness constraint together with others lead to simple oscillatory neural dynamics that well matches the neural data. We have not incorporated a smoothness constraint into our network. Additionally, we note that there are a few recent studies which use place cells as the input to generate grid cells (Dordek et al., 2016; Stachenfeld et al., 2016), which are fundamentally different from our work. In these feed-forward network models, the grid cells essentially perform dimensionality reduction based on the spatial input from place cells. However, the main issue with these models et al., 2015). In another related study (Kanitscheider & Fiete, 2016), the authors train a RNN with grid-like spatial firing patterns are reported. the learning rule we use seems to be biologically implausible. We are interested in exploring how a 9 Under review as a conference paper at ICLR 2018 more biologically plausible learning rule could give rise to similar results (Lillicrap et al., 2016; Mi- coni, 2017; Guerguiev et al., 2017). Second, the simulation results do not show a variety of spatial scales in grid-like cells. Experimentally, it is known that grid cells have multiple spatial scales, that scale geometrically with a ratio 1.4 (Stensola et al., 2012), and this particular scale ratio is predicted by efficient coding of space (Wei et al., 2015). We are investigating how to modify the model to get a hierarchy of spatial scales, perhaps by incorporating more neurons or modifying the regular- ization. Last but not least, we have focused on the representation produced by the trained RNN. An equally important set of questions concern how the networks actually support the generation of such a representation. As a preliminary effort, we have examined the connectivity patterns of the trained network, and they do not seem to resemble the connectivity patterns required by standard attractor network models. Maybe this should not be seen as too surprising. After all, the trained networks can produce a diverse set of neural responses, while the previous models only led to grid responses. It would be interesting for future work to systematically examine the questions related to the underlying mechanisms. Horace B Barlow. Possible principles underlying the transformation of sensory messages. Sensory communication, pp. 217–234, 1961. ', 'original_lines': 'The EC has been long thought to be involved in path integration and localization of the animal’s the neural code. But the focus has been on using feed-forward architectures, in particular CNN (Le- fruitful avenue to take advantage of the recent development in RNN to help with neuroscience ques- tions (Mante et al., 2013; Song et al., 2016; Miconi, 2017). Here, we only show one instance fol- 8 -40-2002040Timesteps relative to first boundary interaction0.0080.0120.0160.02Sqaured error 0 boundary interaction 1 boundary interaction 2 to 5 interactions 6 to 50 interactions(in the next 50 time steps)boundary interaction noboundary interaction0246810Timesteps10500.40.8Squared error RNN outputconstant outputab Under review as a conference paper at ICLR 2018 lowing this approach. However, the insight from this work could be general, and potentially useful for other cognitive functions as well. 
We note that there are a few recent studies which use place cells as the input to generate grid cells (Dordek et al., 2016; Stachenfeld et al., 2016), which are fundamentally different from our work. In these feed-forward network models, the grid cells essentially perform dimensionality re- duction based on the spatial input from place cells. However, the main issue with these models et al., 2015). In another related study (Kanitscheider & Fiete, 2016), the authors train RNN with grid-like spatial tuning patterns are reported. the learning rule we used seems to be biologically implausible. We are interested in figuring out how a more biologically plausible learning rule could give rise to a similar results (Miconi, 2017). Second, the simulation results do not show a variety of spatial scales in grid-like cells. Experimen- tally, it is known that grid cells have multiple spatial scales, that scale geometrically with a ratio 1.4 (Stensola et al., 2012). We are investigating how to modify the model to get a hierarchy of spatial scales, perhaps by incorporating more neurons or modifying the regularization. Finally, the dynamics of the trained network is not well-understood so far. A better understanding would likely help identify the connectivity structure and dynamical rules that could support robust integration of the inputs. ', 'after_paragraph_idx': 39, 'before_paragraph_idx': 39}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Daniel LK Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nature neuroscience, 19(3):356–365, 2016. ', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-05 20:23:59
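The revision record above repeatedly describes the model behind these diffs: a rate RNN with a tanh nonlinearity, simulated with the Euler method for T = 500 steps of duration τ/10, a two-unit linear readout of position, intrinsic Gaussian noise on each unit, speed-and-direction inputs, and a recurrent weight matrix initialized either orthogonally or as Gaussian with variance 1.5²/N. None of the code below comes from that paper or any released implementation; it is a minimal NumPy sketch under assumed names, shapes, and hyperparameters (e.g. `noise_std` and the toy input statistics), included only to make the described dynamics concrete.

```python
import numpy as np

# Sketch of the dynamics described in the record above (all shapes/values are assumptions):
#   tau * dx/dt = -x + W_rec u + W_in I + b + noise,   u = tanh(x),   y = W_out u
N, N_in, N_out = 100, 2, 2      # recurrent units; speed + direction in; (x, y) position out
tau, dt = 10.0, 1.0             # Euler step of tau/10, as in the described simulation
T = 500                         # timesteps per trajectory
rng = np.random.default_rng(0)

W_rec = rng.normal(0.0, 1.5 / np.sqrt(N), size=(N, N))      # Gaussian init, variance 1.5^2/N
W_in = rng.normal(0.0, 1.0 / np.sqrt(N_in), size=(N, N_in))
W_out = np.zeros((N_out, N))                                 # readout starts at zero
b = np.zeros(N)

def simulate(inputs, noise_std=0.1):
    """Euler-integrate the rate RNN; return unit activities and the linear position readout."""
    x = np.zeros(N)
    us, ys = [], []
    for t in range(T):
        u = np.tanh(x)
        xi = rng.normal(0.0, noise_std, size=N)              # intrinsic noise acting as a regularizer
        x = x + (-x + W_rec @ u + W_in @ inputs[t] + b + xi) * (dt / tau)
        us.append(u)
        ys.append(W_out @ u)                                  # two linear readout neurons
    return np.array(us), np.array(ys)

# Toy input: zero speed most of the time (the record mentions mostly-zero speed during training),
# and a crude random-walk stand-in for the heading-direction process.
speed = rng.random(T) * (rng.random(T) < 0.1)
direction = np.cumsum(rng.normal(0.0, 0.2, size=T))
activities, position_estimate = simulate(np.stack([speed, direction], axis=1))
```

Training itself (backpropagation through time on the squared position error plus an activity penalty serving as the metabolic cost the record emphasizes) is omitted here; with `W_out` initialized to zero the readout stays flat until such training updates it.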
|
ICLR.cc/2018/Conference
|
ByvyJv6mM
|
Hypj4iCQG
|
[{'section': '3.2 THE IMPORTANCE OF REGULARIZATION', 'after_section': None, 'context_after': '3.2 THE IMPORTANCE OF REGULARIZATION We find appropriate regularizations of the RNN to be crucial for the emergence of grid-like repre- sentations. We only observed grid-like representations when the network was encouraged to store ', 'paragraph_idx': 33, 'before_section': None, 'context_before': 'development may be that computationally it may be easier to wire-up a network that gives rise to border cell responses. ', 'modified_lines': 'Figure 4: Development of border cells and grid-like cells. Early during training all responses are similar to the border related responses, and only as training continues do the grid-like cells emerge. We perform dimensionality reduction using the t-SNE algorithm on the firing rates of the neurons. Each dot represents one neuron (N = 100), and the color represents different training stages (early/intermediate/late shown in blue/cyan/yellow). Each line shows the trajectory of a sin- gle highlighted neuron as its firing responses evolve during training. In panel a), we highlight the border representation. It appears there are four clusters of border cells, each responding to one wall of a square environment (spatial responses from four of these border cells are inset). These cells’ response profiles appear early and stably persist through training, illustrated by the short distance they travel in this space. In b), we show that the neurons which eventually become grid cells change their tuning profiles substantially during learning. As a natural consequence, they need to travel a long distance in this space between the early and late phase of the training. Spatial responses from four of these grid-like cells are inset. ', 'original_lines': 'Figure 4: Development of border cells and grid-like cells. Early during training all responses are similar to the border related responses, and only as training continues do the grid-like cells emerge. We perform dimensionality reduction using the t-SNE algorithm on the firing rates of the neu- rons. Each dot represents one neuron (N = 100), and the color represents different training stages (early/intermediate/late shown in blue/cyan/yellow). Each line shows the trajectory of a single high- lighted neuron as its firing responses evolve during training. In panel a), we highlight the border representation. It appears there are four clusters of border cells, each responding to one wall of a square environment (spatial responses from four neurons are inset). These cells’ response profiles appear early and stably persist through training, illustrated by the short distance they travel in this space. In b), we show that the neurons which eventually become grid cells change their tuning pro- files substantially during learning. As a natural consequence, they need to travel a long distance in this space between the early and late phase of the training. Figure 5: Complete set of spatial response profiles for 100 neurons in a RNN trained in a square environment. a) Without proper regularization, complex and periodic spatial response patterns do not emerge. b) With proper regularization, a rich set of periodic response patterns emerge, including grid-like responses. 
7 -20-10010Dimension 1-20-100Dimension 2Training iteration 100Training iteration 500Training iteration1000EarlyIntermediateLateabab Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'this by simulating paths that are several orders of magnitude longer. Somewhat surprisingly, we find the RNNs still perform well (Figure 6b). In fact, the squared error (averaged over every 10000 steps) is stable. The spatial response profiles of individual units also remain stable. This implies that the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'black, computed based on 10000 timesteps) is compared to the error that would be achieved by an RNN outputting the best constant values (red). ', 'modified_lines': '', 'original_lines': 'One natural question is whether the trained RNNs are able to perform localization when the path length exceeds the typical length used during training (500 steps), in particular given that noise in the network would gradually accumulate, leading to a decrease in localization performance. We test ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 DISCUSSION', 'after_section': '4 DISCUSSION', 'context_after': 'Additionally, we note that there are a few recent studies which use place cells as the input to generate grid cells (Dordek et al., 2016; Stachenfeld et al., 2016), which are fundamentally different from ', 'paragraph_idx': 41, 'before_section': '4 DISCUSSION', 'context_before': 'The finding that metabolic constraints lead to the emergence of grid-like responses may be seen as conceptually related to the efficient coding hypothesis in visual processing (Barlow, 1961), in particular the seminal work on the emergence of the V1-like Gabor filters in a sparse coding model ', 'modified_lines': 'by Olshausen & Field (1996). Indeed, our work is partly inspired by these results. While there are conceptual similarities, however, we should also note there are differences between the sparse coding work and ours. First, the sparsity constraint in sparse coding can be naturally viewed as a particular prior while in the context of the recurrent network, it is difficult to interpret that way. Second, the grid-like responses are not the most sparse solution one could imagine. In fact, they are still quite dense comparing to a more spatially localized representation. Third, the grid-like patterns that emerged in our network are not filters based on the raw input, rather the velocity inputs need to be integrated first in order to encode spatial locations. Our work is also inspired by recent work using the efficient coding idea to explain the functional architecture of the grid cells (Wei et al., 2015). It has been shown that efficient coding considerations could explain the particular set of grid scales observed in rodents (Stensola et al., 2012). However, in that work, the firing patterns of the neurons are assumed to have a lattice structure to start with. Furthermore, our work is related to the study by Sussillo and others (Sussillo et al., 2015), in which they show that regularization of RNN models are important for generating solutions that are similar to the neural activity observed in motor cortex. In Sussillo et al., a smoothness constraint together with others lead to simple oscillatory neural dynamics that well matches the neural data. We have not incorporated a smoothness constraint into our network. 
', 'original_lines': 'by Olshausen & Field (1996). Indeed, our work is partly inspired by these results. While there are conceptual similarities, however, we should also note there are differences between the sparse coding work and ours. First, the sparsity constraint in sparse coding can be naturally viewed as a particular prior while in the context of the recurrent network, it is difficult to interpret that way. Second, the grid-like responses are not the most sparse solution one could imagine. In fact, they are still quite dense comparing to a more spatially localized representation. Third, the grid-like patterns emerged in our network are not filters based on the raw input, rather the velocity inputs need to be integrated first in order to encode spatial locations. Our work is also inspired by recent work using the efficient coding idea to explain the functional architecture of the grid cells (Wei et al., 2015). It has been shown that efficient coding considerations could explain the particular set of grid scales observed in rodents (Stensola et al., 2012). However, in that work, the firing patterns of the neurons are assumed to have a lattice structure to start with. Furthermore, our work is related to the study by Sussillo and others (Sussillo et al., 2015), in which they show that regularization of RNN models are important for generating solutions that are similar to the neural activity observed in motor cortex. In Sussillo et al., a smoothness constraint together with others lead to simple oscillatory neural dynamics that well matches the neural data. We have not incorporated a smoothness constraint into our network. ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 41}]
|
2018-01-06 19:33:56
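The record above also describes tracking how border-like and grid-like responses emerge by embedding each unit's firing rates at early, intermediate, and late training stages into 2D with t-SNE, using firing-rate correlation as the similarity metric. The helper below is a hypothetical sketch of that kind of analysis: the function name, the `firing_rates_by_stage` layout, and the perplexity value are invented here, not taken from the paper, and rows are z-scored so that Euclidean distance tracks (1 - correlation) rather than passing a correlation metric to t-SNE directly.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_development(firing_rates_by_stage, perplexity=10.0, seed=0):
    """Jointly embed each unit's firing profile at several training stages with t-SNE.

    firing_rates_by_stage: dict mapping a stage label ("early", "intermediate", "late")
    to an array of shape (n_units, n_bins) of time-averaged firing rates; this layout is
    an assumption, not something specified in the record above. Each row is z-scored so
    Euclidean distance grows monotonically with (1 - correlation), approximating the
    firing-rate-correlation similarity the record describes.
    """
    stages = list(firing_rates_by_stage)
    rates = np.vstack([firing_rates_by_stage[s] for s in stages]).astype(float)
    rates = (rates - rates.mean(axis=1, keepdims=True)) / (rates.std(axis=1, keepdims=True) + 1e-9)
    coords = TSNE(n_components=2, perplexity=perplexity,
                  init="random", random_state=seed).fit_transform(rates)
    n_units = firing_rates_by_stage[stages[0]].shape[0]
    return {s: coords[i * n_units:(i + 1) * n_units] for i, s in enumerate(stages)}

# Usage with random placeholders standing in for checkpointed rate maps (100 units, 20x20 bins):
rng = np.random.default_rng(0)
fake = {s: rng.random((100, 400)) for s in ("early", "intermediate", "late")}
embeddings = embed_development(fake)  # one (100, 2) array per stage; connect per-unit points to trace development
```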
|
ICLR.cc/2018/Conference
|
Hypj4iCQG
|
rk-kYCzEz
|
[]
|
2018-01-10 00:05:44
|
ICLR.cc/2018/Conference
|
rk-kYCzEz
|
rkfJpdeA-
|
[]
|
2018-01-25 15:41:56
|
ICLR.cc/2018/Conference
|
rkfJpdeA-
|
SJq6W9fPf
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'How might the spatial navigation task be solved using a network of neurons? Recurrent neural net- works (RNNs) (Hochreiter & Schmidhuber, 1997; Graves et al., 2013; Oord et al., 2016; Theis & Bethge, 2015; Gregor et al., 2015; Sussillo et al., 2015) seem particularly useful for these tasks. Indeed, recurrent-based continuous attractor networks have been one popular type of models pro- posed for the formation of grid cells (McNaughton et al., 2006; Burak & Fiete, 2009; Couey et al., 2013) and place cells (Samsonovich & McNaughton, 1997). Such models have provided valuable insights into one set of possible mechanisms that could support the formation of the grids. How- ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'spatial cognition has provided an important window into how high-level cognitive functions are supported in the brain (Moser et al., 2008; Aronov et al., 2017). ', 'modified_lines': '∗equal contribution 1 Published as a conference paper at ICLR 2018 ', 'original_lines': ' 1 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}]
|
2018-02-15 04:51:14
|
ICLR.cc/2018/Conference
|
SJq6W9fPf
|
SkL0Wg4Dz
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Alex Graves, Abdel-Rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur- rent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, pp. 6645–6649. IEEE, 2013. ', 'modified_lines': '', 'original_lines': ' 10 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'feedback weights support error backpropagation for deep learning. Nature communications, 7, 2016. ', 'modified_lines': '', 'original_lines': '11 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-16 05:53:18
|
ICLR.cc/2018/Conference
|
SkL0Wg4Dz
|
B1lxwERwG
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'For more cognitive tasks, neural systems often need to maintain certain internal representations of relevant variables in the absence of external stimuli- a process that requires more than feature ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'tures, starting from Hubel and Wiesel’s famous proposal on the origin of orientation selectivity in primary visual cortex (Hubel & Wiesel, 1962). Inspired by the recent development in deep learn- ing (Krizhevsky et al., 2012; LeCun et al., 2015; Hochreiter & Schmidhuber, 1997; Mnih et al., ', 'modified_lines': '2015), there has been a burst of interest in applying deep feedforward models, in particular convolu- tional neural networks (CNN) (LeCun et al., 1998), to study the sensory systems, which hierarchi- cally extract useful features from sensory inputs (see e.g., Yamins et al. (2014); Kriegeskorte (2015); Kietzmann et al. (2017); Yamins & DiCarlo (2016)). ', 'original_lines': '2015), there has been a burst of interest in applying deep feed-forward models, in particular convo- lutional neural networks (CNN) (LeCun et al., 1998), to study the sensory systems, which hierar- chically extract useful features from the sensory inputs (see e.g., Yamins et al. (2014); Kriegeskorte (2015); Kietzmann et al. (2017); Yamins & DiCarlo (2016)). ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '2015). More generally, it suggests that RNNs can be a powerful tool for understanding the neural mechanisms of certain high-level cognitive functions. ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'constraints naturally gives rise to a variety of spatial response profiles as observed in EC, including grid-like responses. To our knowledge, this is the first study to show that grid-like responses could emerge from training a RNN to perform navigation. Our result implies that the neural representation ', 'modified_lines': 'in EC may be seen as a natural way for the brain to solve the navigation task efficiently (Wei et al., ', 'original_lines': 'in EC may be seen as a natural way for the brain to solve the navigation task efficiently(Wei et al., ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '2.1 MODEL DESCRIPTION', 'after_section': '2.1 MODEL DESCRIPTION', 'context_after': 'heading direction. The two outputs linearly weight the neurons in the RNN. The goal of training is to make the responses of the two output neurons accurately represent the animal’s physical location. c) Typical trajectory after training. As shown, the output of the RNN can accurately, though not ', 'paragraph_idx': 9, 'before_section': '2.1 MODEL DESCRIPTION', 'context_before': 'igation in EC. All figures are replotted from previous publications. From left to right: a “grid cell” recorded when an animal navigates in a square environment, replotted from Krupic et al. (2012), with the heat map representing the firing rate of this neuron as a function of the animal’s location ', 'modified_lines': '(red corresponds to high firing rate); a “band-like” cell from Krupic et al. (2012); a border cell from Solstad et al. (2008); an irregular spatially tuned cell from Diehl et al. (2017); a “speed cell” from Kropff et al. (2015), which exhibits roughly linear dependence on the rodent’s running speed; a “heading direction cell” from Sargolini et al. 
(2006), which shows systematic change of firing rate depending on animal’s heading direction. b) The network consists of N = 100 recurrently con- nected units (or neurons) which receive two external inputs, representing the animal’s speed and ', 'original_lines': '(red corresponds to high firing rate); a “band-like” cell, from Krupic et al. (2012); a border cell, from Solstad et al. (2008); an irregular spatially tuned cell, from Diehl et al. (2017); a “speed cell” from Kropff et al. (2015), which exhibits roughly linear dependence on the rodent’s running speed; a “heading direction cell” from Sargolini et al. (2006), which shows systematic change of firing rate depending on animal’s heading direction. b) The network consists of N = 100 recurrently connected units (or neurons) which receive two external inputs, representing the animal’s speed and ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'after_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_after': 'esting to see if grid cells would lie on a square lattice instead if the rats are raised in a single square Border responses Many neurons in the RNN exhibit selectivity to the boundary (Figure 2c). Typi- cally, they only encode a portion of the boundary, e.g. one piece of wall in a square shaped environ- ', 'paragraph_idx': 22, 'before_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_before': 'experimentally it is reported that the grids are often on the corners of a triangular lattice (Hafting et al., 2005) even in square environments (see Figure 1a), though the grids are somewhat influenced by the shape of the environment. However, the rats in these experiments presumable had spatial ', 'modified_lines': 'experience in other environments with various boundary shapes. Experimentally, it would be inter- environment - a situation we are simulating here. ', 'original_lines': 'experience in other environment with various boundary shapes. Experimentally, it would be inter- environment- a situation we are simulating here. ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'after_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_after': 'been observed. In fact, it is recently reported that the non-grid spatial cells constitute a large portion of the neurons in Layer II and III of rodent EC (Diehl et al., 2017). 3.1.2 SPEED TUNING AND HEAD DIRECTION TUNING 5 ', 'paragraph_idx': 25, 'before_section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'context_before': 'Spatially-stable but non-regular responses Besides the units described above, most of the remain- ing units also exhibit stable spatial responses, but they do not belong to the above categories. These response profiles can exhibit either one large irregular firing field; or multiple circular firing fields, ', 'modified_lines': 'but these firing fields do not show a regular pattern. Experimentally these types of cells have also Speed tuning We next ask how neurons in the RNN are tuned to the inputs. Many of the model neurons exhibit linear responses to the running speed of the animal, while some neurons show no selectivity to speed, as suggested by the near-flat response functions. Example response profiles are ', 'original_lines': 'but these firing fields do not show a regular pattern. Experimentally these type of cells have also Speed tuning We next ask how the neurons in the RNN are tuned to the inputs. 
It turns out that many of the model neurons exhibit linear responses to the running speed of the animal, while some neurons show no selectivity to speed, as suggested by the near-flat response functions. Example ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '3.1 TUNING PROPERTIES OF THE MODEL NEURONS', 'after_section': None, 'context_after': 'the grid and non-grid cells in EC exhibit head direction tuning (Sargolini et al., 2006). Furthermore, the linear speed dependence of the model neurons is similar to the properties of speed cells reported recently in EC (Kropff et al., 2015). Our result is also consistent with another recent study reporting ', 'paragraph_idx': 24, 'before_section': None, 'context_before': 'tuning differ. g,h) Border cells exhibit weak and a bit complex directional tuning and almost no speed tuning. i) This band cell shows weak directional tuning, but strong speed tuning. ', 'modified_lines': 'shown in Figure 3. Interestingly, we observe that the model border cells tend to have almost zero speed-tuning (e.g., see Figure 3g,h). Head direction tuning A substantial portion of the model neurons show direction tuning. There are a diversity of direction tuning profiles, both in terms of the strength of the tuning and their pre- ferred direction. Example tuning curves are shown in Figure 3, and the direction tuning curves of a complete population are shown in the Appendix. Interestingly, in general model neurons which show the strongest head direction tuning do not show a clear spatial firing pattern (see Figure 3a,b,c). This suggests that there are a group of neurons which are mostly responsible for encoding the direc- tion. We also notice that neurons with clear grid-like firing can exhibit a variety of direction tuning strengths, from weak to strong (Figure 3d,e,f). In the Appendix, we quantify the relation between these different tuning properties at the whole population level, which show somewhat complex de- pendence. Experimentally, the heading direction tuning in EC is well-known, e.g., Sargolini et al. (2006). Both ', 'original_lines': 'response profiles are shown in Figure 3. Interestingly, we observe that the model border cells tend to have almost zero speed-tuning (e.g., see Figure 3g,h). Head direction tuning Furthermore, a substantial portion of the model neurons show direction tuning. There are a diversity of direction tuning profiles, both in terms of the strength of the tuning and their preferred direction. Example tuning curves are shown in Figure 3, and the direction tuning curves of a complete population are shown in the Appendix. Interestingly, in general model neurons which show the strongest head direction tuning are do not show a clear spatial firing pattern(see Figure 3a,b,c). This suggests that there are a group of neurons which are mostly responsible for encoding the direction. We also notice that neurons with clear grid-like firing can exhibit a variety of direction tuning strengths, from weak to strong (Figure 3d,e,f). In the Appendix, we quantify the relation between these different tuning properties at the whole population level, which show somewhat complex dependence. Experimentally, the heading direction tuning in EC is well-known (e.g., Sargolini et al. (2006)). 
Both ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '3.2 THE IMPORTANCE OF REGULARIZATION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'cells mature only at about 4 weeks after birth (Langston et al., 2010; Wills et al., 2010; Bjerknes et al., 2014). Furthermore, our simulations suggest the reason why border cells emerge earlier in ', 'modified_lines': 'development may be that computationally it is easier to wire-up a network that gives rise to border cell responses. Figure 4: Development of border cells and grid-like cells. Early during training all responses are similar to the border related responses, and only as training continues do the grid-like cells emerge. We perform dimensionality reduction using the t-SNE algorithm on the firing rates of the neu- rons. Each dot represents one neuron (N = 100), and the color represents different training stages (early/intermediate/late shown in blue/cyan/yellow). Each line shows the trajectory of a single high- lighted neuron as its firing responses evolve during training. In panel a), we highlight the border representation. It appears there are four clusters of border cells, each responding to one wall of a square environment (spatial responses from four of these border cells are inset). These cells’ re- sponse profiles appear early and stably persist through training, illustrated by the short distance they travel in this space. In b), we show that the neurons which eventually become grid cells initially have tuning profiles similar to the border cells but then change their tuning substantially during learning. As a natural consequence, they need to travel a long distance in this space between the early and late phase of the training. Spatial responses are shown for four of these grid-like cells during the late phase of training. ', 'original_lines': 'development may be that computationally it may be easier to wire-up a network that gives rise to border cell responses. Figure 4: Development of border cells and grid-like cells. Early during training all responses are similar to the border related responses, and only as training continues do the grid-like cells emerge. We perform dimensionality reduction using the t-SNE algorithm on the firing rates of the neurons. Each dot represents one neuron (N = 100), and the color represents different training stages (early/intermediate/late shown in blue/cyan/yellow). Each line shows the trajectory of a sin- gle highlighted neuron as its firing responses evolve during training. In panel a), we highlight the border representation. It appears there are four clusters of border cells, each responding to one wall of a square environment (spatial responses from four of these border cells are inset). These cells’ response profiles appear early and stably persist through training, illustrated by the short distance they travel in this space. In b), we show that the neurons which eventually become grid cells change their tuning profiles substantially during learning. As a natural consequence, they need to travel a long distance in this space between the early and late phase of the training. Spatial responses from four of these grid-like cells are inset. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'One natural question is whether the trained RNNs are able to perform localization when the path length exceeds the typical length used during training (500 steps), in particular given that noise in ', 'modified_lines': '', 'original_lines': 'the network would gradually accumulate, leading to a decrease in localization performance. We test ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 THE IMPORTANCE OF REGULARIZATION', 'after_section': None, 'context_after': 'Figure 6: Error-correction happens at the boundary and the error is stable over time. At the boundary, of the environment. These changing input statistics at the boundary, termed a boundary interaction, are the only cue the RNN receives about the boundary. We find that the RNN uses the boundary interactions to correct the accumulated error between the true integrated input and its prediction ', 'paragraph_idx': 31, 'before_section': None, 'context_before': 'Figure 5: Complete set of spatial response profiles for 100 neurons in a RNN trained in a square environment. a) Without proper regularization, complex and periodic spatial response patterns do ', 'modified_lines': 'not emerge. b) With proper regularization, a rich set of periodic response patterns emerge, includ- ing grid-like responses. Physiological responses and regularization likely lie between the extremes illustrated in these two examples. the direction is resampled to avoid input velocities that lead to a path extending beyond the boundary ', 'original_lines': 'not emerge. b) With proper regularization, a rich set of periodic response patterns emerge, including grid-like responses. the direction is re-sampled to avoid input velocities that lead to a path extending beyond the boundary ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.1 MODEL DESCRIPTION', 'after_section': None, 'context_after': 'this by simulating paths that are several orders of magnitude longer. Somewhat surprisingly, we find the RNNs still perform well (Figure 6b). In fact, the squared error (averaged over every 10000 steps) is stable. The spatial response profiles of individual units also remain stable. This implies that the ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'black, computed based on 10000 timesteps) is compared to the error that would be achieved by an RNN outputting the best constant values (red). ', 'modified_lines': 'the network would gradually accumulate, leading to a decrease in localization performance. We test ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 DISCUSSION', 'after_section': '4 DISCUSSION', 'context_after': 'in our model and the neurophysiology provide strong evidence supporting the hypothesis that the neural population in EC may provide an efficient code for representation self-locations based on the velocity input. Recently, there has been increased interest in using complex neural network models to understand fruitful avenue to take advantage of the recent development in RNNs to help with neuroscience questions (Mante et al., 2013; Song et al., 2016; Miconi, 2017; Sussillo et al., 2015). Here, we only show one instance following this approach. 
However, the insight from this work could be general, ', 'paragraph_idx': 39, 'before_section': '4 DISCUSSION', 'context_before': 'of spatial and velocity tuning profiles that match neurophysiology in EC. What’s more, there is also similarity in terms of when these distinct neuron types emerge during training/development. The EC has long been thought to be involved in path integration and localization of the animal’s ', 'modified_lines': 'location (Moser et al., 2008). The general agreement between the different response properties the neural code. But the focus has been on using feedforward architectures, in particular CNNs (Le- Cun et al., 1998). Given the abundant recurrent connections in the brain, it seems a particularly ', 'original_lines': 'location (Moser et al., 2008). The general agreement between the different responses properties the neural code. But the focus has been on using feed-forward architectures, in particular CNNs (Le- Cun et al., 1998). Given the abundant recurrent connections in the brain, it seems a particular ', 'after_paragraph_idx': 39, 'before_paragraph_idx': 39}, {'section': '4 DISCUSSION', 'after_section': '4 DISCUSSION', 'context_after': 'that emerged in our network are not filters based on the raw input, rather the velocity inputs need to be integrated first in order to encode spatial locations. Our work is also inspired by recent work using the efficient coding idea to explain the functional architecture of the grid cells (Wei et al., 2015). It ', 'paragraph_idx': 41, 'before_section': '4 DISCUSSION', 'context_before': 'coding work and ours. First, the sparsity constraint in sparse coding can be naturally viewed as a particular prior while in the context of the recurrent network, it is difficult to interpret that way. Second, the grid-like responses are not the most sparse solution one could imagine. In fact, they are ', 'modified_lines': 'still quite dense compared to a more spatially localized representation. Third, the grid-like patterns ', 'original_lines': 'still quite dense comparing to a more spatially localized representation. Third, the grid-like patterns ', 'after_paragraph_idx': 41, 'before_paragraph_idx': 41}, {'section': '4 DISCUSSION', 'after_section': '4 DISCUSSION', 'context_after': 'reduction based on the spatial input from place cells. However, the main issue with these models is that, it is unclear how place cells acquire spatial tuning in the first place. To the contrary, our model takes the animal’s velocity as the input, and addresses the question of how the spatial tuning can be generated from such input, which are known to exist in EC (Sargolini et al., 2006; Kropff et al., 2015). In another related study (Kanitscheider & Fiete, 2016), the authors train a RNN with LSTM units (Hochreiter & Schmidhuber, 1997) to perform different navigation tasks. However, no grid-like spatial firing patterns are reported. Although our model shows a qualitative match to the neural responses observed in the EC, nonethe- less it has several major limitations, with each offering interesting future research directions. First, ', 'paragraph_idx': 42, 'before_section': '4 DISCUSSION', 'context_before': 'Additionally, we note that there are a few recent studies which use place cells as the input to generate grid cells (Dordek et al., 2016; Stachenfeld et al., 2016), which are fundamentally different from ', 'modified_lines': 'our work. 
In these feedforward network models, the grid cells essentially perform dimensionality 9 Published as a conference paper at ICLR 2018 ', 'original_lines': 'our work. In these feed-forward network models, the grid cells essentially perform dimensionality 9 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 42}]
|
2018-02-24 00:50:47
|
ICLR.cc/2018/Conference
|
B1lxwERwG
|
Hk19LFCPf
|
[]
|
2018-02-24 06:30:31
|
ICLR.cc/2018/Conference
|
Hk19LFCPf
|
Hk_hd2Jdf
|
[]
|
2018-02-25 04:16:47
|
ICLR.cc/2018/Conference
|
Hkr4S-3Mz
|
S1FVBW3zM
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 π = −k0(π − θw1,w2) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:1)2 ', 'modified_lines': '', 'original_lines': '(π − θw1,w2 )wT ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 = 0 and w1, w2 ∈ span{w∗', 'after_section': None, 'context_after': '1 − 1 π ', 'paragraph_idx': 26, 'before_section': None, 'context_before': '1 π ', 'modified_lines': '(π − θw1,w2 )wT ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-12-23 16:32:49
|
ICLR.cc/2018/Conference
|
S1FVBW3zM
|
rkQO1MbAb
|
[]
|
2018-01-25 15:40:29
|
ICLR.cc/2018/Conference
|
SyjA4fbAZ
|
ByYxJXW0Z
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'state-of-the-art. 2 THE MODEL ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'inference, compared to the RNN counterparts. • We propose a novel data augmentation technique to enrich the training data by paraphras- ', 'modified_lines': 'ing. It allows the model to achieve significantly higher performance that is on par with the ', 'original_lines': 'ing. It allows the model to achieve significantly higher performance that is on par with ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '5 RELATED WORK', 'after_section': '5 RELATED WORK', 'context_after': 'et al., 2016), Children Book Test (Hill et al., 2015), etc. A plethora of effective end-to-end neural network models exist to approach these challenges, including BiDaf (Seo et al., 2016), r-net (Wang et al., 2017), DCN (Xiong et al., 2016), ReasoNet (Shen et al., 2017b), Document Reader (Chen ', 'paragraph_idx': 42, 'before_section': None, 'context_before': '5 RELATED WORK ', 'modified_lines': 'Reading comprehension and question answering has become an important topic in the NLP do- main. Its popularity can be attributed to an increase in publicly available annotated datasets, such as SQuAD (Rajpurkar et al., 2016), CNN/Daily News (Hermann et al., 2015), WikiReading (Hewlett ', 'original_lines': 'Reading comprehension and question answering has become a very important topic in the NLP domain. Its popularity can be attributed to an increase in publicly available annotated datasets, such as SQuAD (Rajpurkar et al., 2016), CNN/Daily News (Hermann et al., 2015), WikiReading (Hewlett ', 'after_paragraph_idx': 42, 'before_paragraph_idx': None}, {'section': '5 RELATED WORK', 'after_section': '5 RELATED WORK', 'context_after': 'To our knowledge, our paper is the first work on both fast and accurate reading comprehension model, by discarding the recurrent networks in favor of feed forward architectures. Note that Raiman ', 'paragraph_idx': 43, 'before_section': None, 'context_before': '2017). Recurrent Neural Networks (RNNs) have been used extensively in the natural language processing ', 'modified_lines': 'area in the past few years. The sequential nature of the text coincides with the design philosophy of RNNs, and hence RNNs are generally the default choice for modeling text. In fact, all the reading comprehension models mentioned above are based on RNNs. While effective, the sequential na- ture of RNN prevent parallel computation, as tokens must be fed into the RNN in order. Another drawback of RNNs is difficulty modeling long dependencies, although this is somewhat alleviated by the use of Gated Recurrent Units (Chung et al., 2014) or Long Short Term Memory cells. The reading comprehension task considered in this paper always needs to deal with long text, as the context paragraphs may be hundreds of words long. Recently, attempts have been made to replace the recurrent networks by fully convolution or attention based architectures (Kim, 2014; Gehring et al., 2017; Vaswani et al., 2017b; Shen et al., 2017a). Those models have been shown to be not only faster than the RNN based ones, but also effective in different tasks, such as text classification, machine translation or sentiment analysis. ', 'original_lines': 'area in the past few years. The sequential nature of the text coincides with the design philosophy of RNNs, and hence RNN is generally the default choice for modeling text. 
In fact, all the reading comprehension models mentioned above are based on RNNs. While effective, the sequential nature of RNN prevent parallel computation, as tokens must be fed into the RNN in order. Another draw- back of RNNs is difficulty modeling long dependencies, although this is somewhat alleviated by the use of Gated Recurrent Units (Chung et al., 2014) or Long Short Term Memory cells. Reading comprehension considered in this paper always needs to deal with long text, as the context para- graphs may be hundreds of words long. Recently, attempts have been made to replace the recurrent networks by fully convolution or attention based architectures (Kim, 2014; Gehring et al., 2017; Vaswani et al., 2017b; Shen et al., 2017a). Those models have been shown to be not only faster than the RNN based ones, but also effective in different tasks, such as text classification, machine translation or sentiment analysis. ', 'after_paragraph_idx': 44, 'before_paragraph_idx': None}]
|
2017-10-27 21:02:41
|
ICLR.cc/2018/Conference
|
ByYxJXW0Z
|
ByQzxmbRZ
|
[]
|
2017-10-27 21:07:23
|
ICLR.cc/2018/Conference
|
ByQzxmbRZ
|
rJW3ooOXz
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 INTRODUCTION in real-time applications. 1 Under review as a conference paper at ICLR 2018 2 THE MODEL 2.1 PROBLEM FORMULATION ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their ', 'modified_lines': 'success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A model that does not re- quire recurrent networks: It consists exclusively of attention and convolutions, yet achieves equivalent or better performance than existing models. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model. This data augmentation technique not only enhances the train- ing examples but also diversifies the phrasing of the sentences, which results in immediate accuracy improvements. Our single model achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8. There is growing interest in the tasks of machine reading comprehension and automated question answering. Over the past few years, significant progress has been made with end-to-end models showing promising results on many challenging datasets. The most successful models generally employ two key ingredients: (1) a recurrent model to process sequential inputs, and (2) an attention component to cope with long term interactions. A successful combination of these two ingredients is the Bidirectional Attention Flow (BiDAF) model by Seo et al. (2016), which achieve strong re- sults on the SQuAD dataset (Rajpurkar et al., 2016). A weakness of these models is that they are often slow for both training and inference due to their recurrent nature, especially for long texts. For instance, it usually takes a day to train the BiDAF model to achieve competitive accuracy on SQuAD. The expensive training not only leads to high turnaround time for experimentation and lim- its researchers from rapid iteration but also prevents the models from being used for larger dataset. Meanwhile the slow inference prevents the machine comprehension systems from being deployed In this paper, aiming to make the machine comprehension fast, we propose to remove the recurrent nature of these models. We instead exclusively use convolutions and self-attentions everywhere as the building blocks of encoders that separately encodes the query and context. Then we learn the interactions between context and question by standard attentions (Xiong et al., 2016; Seo et al., 2016; Bahdanau et al., 2015). The resulting representation is encoded again with our recurrency- free encoder before finally decoding to the probability of each position being the start or end of the answer span. This architecture is shown in Figure 1. The key motivation behind the design of our model is the following: the convolutions captures the local structure of the text, while the self-attention mechanism learns the global interaction between each pair of words. The additional context-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subse- quent modeling layers. 
Notice that by using convolutions and self-attention, our model is no longer recurrent. The feed-forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a simple comparison, our model can achieve the same accuracy (77.0 F1 score) as BiDAF model (Seo et al., 2016) within 3 hours training that otherwise should have taken 15 hours. The speed-up gain also allows us to train the model with more iterations to achieve better results than competitive models. For instance, if we allow the model to train for 18 hours, it achieves an F1 score of 82.7 on the dev set, which is much better than (Seo et al., 2016), and is on par with best published results. As our model is fast, we can train it with much more data than other models. To further improve the model, we propose a complementary data augmentation technique to enhance the training data. This technique paraphrases the examples by translating the original sentences from English to another language and then back to English, which not only enhances the number of training instances but also diversifies the phrasing. On the SQuAD dataset, our model trained with the augmented data achieves 84.6 F1 score on the test set, which is significantly better than the best published result of 81.8 by Hu et al. (2017).1 We also conduct ablation test to justify the usefulness of each component of our model. In summary, the contribution of this paper are as follows: • We propose an efficient reading comprehension model that exclusively built upon convo- lutions and self-attentions. To the best of our knowledge, we are the first to do so. This combination maintains good accuracy, while achieving up to 13x speedup in training and 9x per training iteration, compared to the RNN counterparts. The speedup gain makes our model the most promising candidate for scaling up to larger datasets. • To improve our result on SQuAD, we propose a novel data augmentation technique to enrich the training data by paraphrasing. It allows the model to achieve higher accuracy that is better than the state-of-the-art. In this section, we first formulate the reading comprehension problem and then describe the proposed model: it is a feedforward model that consists of only convolutions and self-attention, a combination that is empirically effective, and is also a novel contribution of our work. ', 'original_lines': 'success, these models are often slow for both training and inference due to the se- quential nature of RNNs. We propose a novel Q&A model that does not require recurrent networks yet achieves equivalent or better performance than existing models. Our model is simple in that it consists exclusively of attention and con- volutions. We also propose a novel data augmentation technique by paraphrasing. It not only enhances the training examples but also diversifies the phrasing of the sentences, which results in immediate accuracy improvements. This technique is of independent interest because it can be readily applied to other natural language processing tasks. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. Our single model achieves 82.2 F1 score on the development set, which is on par with best documented result of 81.8. There is growing interest within natural language processing and machine learning communities in the tasks of machine comprehension and question answering. 
In the past few years, significant progress has been made in this area, with end-to-end models showing promising results on many challenging datasets. The most successful models generally employ two key components: (1) a recurrent model to process sequential inputs, and (2) an attention component to cope with long term interactions. For instance, the use of bi-attention model (Seo et al., 2016) for the SQuAD dataset (Rajpurkar et al., 2016). A weakness of these models is that they are often slow for both training and inference due to their recurrent nature, especially for long texts. The expensive training leads to high turnaround time for experimentation and limits researchers from rapid iteration, while the slow inference prevents the machine comprehension systems from scaling up and being deployed In this paper, aiming to make the machine comprehension both fast and accurate, we propose to remove the recurrent nature of these models. We instead exclusively use convolutions and self- attentions everywhere as the building blocks of encoders, while we learn the interactions between context and question by normal attentions(Bahdanau et al., 2015). Our model separately encodes the query and context, uses standard context-to-query attention to combine the two streams, then encodes the result before finally decoding to the probability of each position being the start or end of the answer span. Each encoder block is composed of convolutions and self attention. This architecture is illustrated in 1. The underlying principle of our model is the following: The convolutions captures the local structure of the text, while the self-attention mechanisms learns the global interaction between each pair of words. The additional context-to-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subsequent modeling layers. The lack of recurrence speeds up our model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 10x faster in inference. In addition to the speed increase, the model is also competitive with previous models in terms of prediction quality. Besides, we propose a new data augmentation methodology to enhance the training data, which further improves the testing accuracy of our model. On the SQuAD dataset, our model achieves 82.2 F1 score on the development set 1, which is on par with the best published result of 81.8 by Hu et al. (2017). We also conduct ablation test to justify the usefulness of each component of our model. To the best of our knowledge, this is the first work on speeding up the machine comprehension tasks by removing recurrent components, while simultaneously boosting the prediction quality. The contribution of this paper can be summarized as follows: • We propose a simple, efficient, and effective reading comprehension model that exclusively built upon convolutions and attentions. It achieves up to 13x speedup in training and 9x in inference, compared to the RNN counterparts. • We propose a novel data augmentation technique to enrich the training data by paraphras- ing. It allows the model to achieve significantly higher performance that is on par with the state-of-the-art. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '2.2 MODEL OVERVIEW', 'after_section': '2.2 MODEL OVERVIEW', 'context_after': '• For both the embedding and modeling encoders, we only use convolutional and self- 1. Input Embedding Layer. 
We adopt the standard techniques to obtain the embedding of each word w by concatenating its word embedding and character embedding. The word embedding is mapped to an <UNK> token, whose embedding is trainable with random initialization. The character embedding is obtained as follows: Each character is represented as a trainable vector of dimension 2. Embedding Encoder Layer. The encoder layer is a stack of the following basic building block: [convolution-layer × # + self-attention-layer + feed-forward-layer], as illustrated in the upper right comprehension models such as Weissenborn et al. (2017) and Chen et al. (2017). We use C and Q to denote the encoded context and query. The context-to-query attention is constructed as follows: We first computer the similarities between each pair of context and query words, rendering a similarity matrix S ∈ Rn×m. We then normalize each row of S by applying the softmax function, getting a function used here is the trilinear function (Seo et al., 2016): f (q, c) = W0[q, c, q (cid:12) c], where (cid:12) is the element-wise multiplication and W0 is a trainable variable. 4. Model Encoder Layer. Similar to Seo et al. (2016), the input of this layer at each position is 5. Output layer. This layer is task-specific. Each example in SQuAD is labeled with a span in the context containing the answer. We adopt the strategy of Seo et al. (2016) to predict the probability ', 'paragraph_idx': 11, 'before_section': None, 'context_before': '2.2 MODEL OVERVIEW ', 'modified_lines': 'The high level structure of our model is similar to most existing models that contain five major components: an embedding layer, an embedding encoder layer, a context-query attention layer, a model encoder layer and an output layer, as shown in Figure 1. These are the standard building blocks for most, if not all, existing reading comprehension models. However, the major differences between our approach and other methods are as follow: attention mechanism, completely discarding RNNs, which are used by most of the existing reading comprehension models. As a result, our model is much faster, as it can process the input tokens in parallel. Note that even though self-attention has already been used extensively in Vaswani et al. (2017a), the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. The use of convolutions also allows us to take advantage of common regular- ization methods in ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. 1Concurrently there are on par unpublished results either on the leaderboard or arxiv. For example, the current best documented model, SAN Liu et al. (2017b), achieves 84.4 F1 score which is on par with our method. 2 Under review as a conference paper at ICLR 2018 Figure 1: An overview of our model architecture (left) which has several Encoder Blocks. We use the same Encoder Block (right) throughout the model, only varying the number of convolutional layers for each block. We use layernorm and residual connection between every layer in the Encoder Block. We also share weights of the context and question encoder, and of the three output encoders. A positional encoding is added to the input at the beginning of each encoder layer consisting of sin and cos functions at varying wavelengths, as defined in (Vaswani et al., 2017a). 
Each sub-layer after the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block. • Our model does not rely on the additional hand-crafted features such as POS tagging or name entity recognition, which have been used in Chen et al. (2017); Liu et al. (2017a); Hu et al. (2017), nor multi-hop reading techniques (Hu et al., 2017; Shen et al., 2017b; Gong & Bowman, 2017). In detail, our model consists of the following five layers: fixed during training and initialized from the p1 = 300 dimensional pre-trained GloVe (Pennington et al., 2014) word vectors, which are fixed during training. All the out-of-vocabulary words are p2 = 200, meaning each word can be viewed as the concatenation of the embedding vectors for each of its characters. The length of each word is either truncated or padded to 16. We take maximum value of each row of this matrix to get a fixed-size vector representation of each word. Finally, the output of a given word x from this layer is the concatenation [xw; xc] ∈ Rp1+p2, where xw and xc are the word embedding and the convolution output of character embedding of x respectively. Following Seo et al. (2016), we also adopt a two-layer highway network (Srivastava et al., 2015) on top of this representation. For simplicity, we also use x to denote the output of this layer. of Figure 1. We use depthwise separable convolutions (Chollet, 2016) (Kaiser et al., 2017) rather than traditional ones, as we observe that it is memory efficient and has better generalization. The kernel size is 7, the number of filters is d = 128 and the number of conv layers within a block is 4. For the self-attention-layer, we adopt the multi-head attention mechanism defined in (Vaswani et al., 2017a) which, for each position in the input, called the query, computes a weighted sum of all positions, or keys, in the input based on the similarity between the query and key as measured by 3 Under review as a conference paper at ICLR 2018 the dot product. The number of heads is 8 throughout all the layers. Each of these basic operations (conv/self-attention/ffn) is placed inside a residual block, shown lower-right in Figure 1. For an input x and a given operation f , the output is f (layernorm(x)) + x, meaning there is a full identity path from the input to output of each block, where layernorm indicates layer-normalization proposed in (Ba et al., 2016). The total number of encoder blocks is 1. Note that the input of this layer is a vector of dimension p1 + p2 = 500 for each individual word, which is immediately mapped to d = 128 by a one-dimensional convolution. The output of this layer is a also of dimension d = 128. 3. Context-Query Attention Layer. This module is standard in almost every previous reading matrix S. Then the context-to-query attention is computed as A = S · QT ∈ Rn×d. The similarity Most high performing models additionally use some form of query-to-context attention, such as BiDaF (Seo et al., 2016) and DCN (Xiong et al., 2016). Empirically, we find that, the DCN attention can provide a little benefit over simply applying context-to-query attention, so we adopt this strategy. More concretely, we compute the column normalized matrix S of S by softmax function, and the query-to-context attention is B = S · S T · C T . [c, a, c (cid:12) a, c (cid:12) b], where a and b are respectively a row of attention matrix A and B. 
The layer parameters are the same as the Embedding Encoder Layer except that convolution layer number is 2 within a block and the total number of blocks are 7. We share weights between each of the 3 repetitions of the model encoder. ', 'original_lines': 'The high level structure of our model is similar to the existing simple models such as FastQAExt (Weissenborn et al., 2017), which contains three major components: an embedding en- coder, a context-to-question attention module, and a modeling encoder, as illustrated in Figure 1. These are the standard building blocks for most, if not all, existing reading comprehension models. However, the major differences between our approach and the existing ones are the following: attention mechanism, completely discarding RNNs, which are used by ALL existing read- ing comprehension models. As a result, our model is much faster, as it can process the input tokens in parallel. • Our model is simpler than most in that it does not contain a complicated interactive attention layer between context and query, such as bi-attention (Seo et al., 2016), coattention (Xiong et al., 2016), gated attention (Wang et al., 2017; Yang et al., 2016), multi-hop reading (Hu et al., 2017; Shen et al., 2017b; Gong & Bowman, 2017). • Our model does not rely on the addition of syntactic features such as POS tagging or name entity recognition, which have been used in Chen et al. (2017); Liu et al. (2017); Hu et al. (2017). More specifically, our model consists of the following layers: fixed during training and initialized from the p = 200 dimensional pre-trained GloVe (Penning- ton et al., 2014) word vectors, which are fixed during training. All the out-of-vocabulary words are p, meaning each word can be viewed as the concatenation of the embedding vectors for each of its characters. We take maximum value of each row of this matrix to get a fixed-size vector repre- sentation of each word. Finally, the output of a given word x from this layer is the concatenation [xw; xc] ∈ R2p, where p = 200, xw and xc are the word embedding and the convolution output of character embedding of x respectively. For simplicity, we also use x to denote this embedded vector. 1At the time of submission, our model is being submitted for evaluation on the official test set 2 Under review as a conference paper at ICLR 2018 Figure 1: An illustration of our model architecture. We use the same encoder structure throughout the model, only varying the number of convolutional layers for the embedding encoders and model encoders. We additionally share weights of the context and question encoder, and of the three output encoders. A positional encoding is added to the input at the beginning of each encoder layer consisting of sin and cos functions at varying wavelengths, as defined in (Vaswani et al., 2017a). Each sub-layer after the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block. of figure 1. We use depthwise separable convolutions (Chollet, 2016) (Kaiser et al., 2017) rather than traditional ones, as we observe that it is memory efficient, has better generalization, and allows us to use large kernel sizes. The kernel size is 7, the number of filters is d = 128 and the number of conv layers within a block is 4. 
For the self-attention-layer, we adopt the multi-head attention mech- anism defined in (Vaswani et al., 2017a) which, for each position in the input, called the query, com- putes a weighted sum of all positions, or keys, in the input based on the similarity between the query and key as measured by the dot product. Each of these basic operations (conv/self-attention/ffn) is placed inside a residual block, shown lower-right in figure 1. For an input x and a given operation f , the output is f (layernorm(x)) + x, meaning there is a full identity path from the input to output of each block, where layernorm indicates layer-normalization proposed in (Ba et al., 2016). The total number of encoder blocks is 1. Note that the output of this layer is a vector of dimension d = 128, for each individual input word. 3. Context-to-Query Attention Layer. This module is standard in almost every previous reading matrix ¯S. Then the context-to-query attention is computed as A = ¯S · Q(cid:48) ∈ Rn×d. The similarity 3 ModelEncoder BlockResidual BlockEmb. EncoderEmb. Encodercontext-to-query attentionModel EncoderModel EncoderModel Encoderconv conv self-attention feed forward netconv/self-attention/ffnlayernorm+concatStart ProbabilityconcatsoftmaxContextQuestionEmbeddingEmbeddinglinearEnd ProbabilitysoftmaxlinearPositional Encoding Under review as a conference paper at ICLR 2018 Most high performing existing models additionally use some form of query to context attention. However, we found that, when self attention layers are used, the addition of query-to-context atten- tion provide little benefit over simply applying context-to-query attention. [c, a, c (cid:12) a], where a is a row of attention matrix A. This layer is similar to the Embedding Encoder Layer except that convolution layer number is 2 within a block and the total number of blocks are 7. We share weights between each of the 3 repetitions of the model encoder. ', 'after_paragraph_idx': 11, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where W1 and W2 are two trainable variables and M0, M1, M2 are respectively the outputs of the three model encoders, from bottom to top. The score of a span is then the product of its start position and end position probabilities. Finally, the objective function is defined as the negative sum of the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'probabilities of the starting and ending position are modeled as p1 = sof tmax(W1[M0; M1]), p2 = sof tmax(W2[M0; M2]), ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'after_section': None, 'context_after': 'Table 1: Comparison between answers in original sentence and paraphrased sentence. ', 'paragraph_idx': 24, 'before_section': None, 'context_before': 'Pre- ', 'modified_lines': 'Department of Preparatory Studies ', 'original_lines': 'Department of Prepara- tory Studies ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 THE MODEL', 'after_section': None, 'context_after': '4 EXPERIMENTS Wikipedia. Each training example consists of a context paragraph of those articles and an associated query, and the answer must be a span from the paragraph. SQuAD contains 107.7K query-answer pairs, with 87.5K for training, 10.1K for validation, and another 10.1K for testing. 
The typical length 6 Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'ment in terms of accuracy. We believe this technique is also applicable to other supervised natural language processing tasks, especially when the training data is insufficient. ', 'modified_lines': 'In this section, we conduct experimental studies to test the performance of our model and the data augmentation technique. We will primarily benchmark our model on the SQuAD dataset (Rajpurkar et al., 2016), which is considered to be one of the most competitive datasets in Q&A. We also conduct similar studies on TriviaQA (Joshi et al., 2017), another Q&A dataset, to show that the effectiveness and efficiency of our model are general. 4.1 EXPERIMENTS ON SQUAD 4.1.1 DATASET We consider the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for ma- chine comprehension.7 The dataset is constructed using 536 articles randomly sampled from of the paragraphs is around 250 while the question is of 10 tokens although there are exceptionally long cases. Only the training and validation data are publicly available, while the test data is hidden that one has to submit the code to a Codalab and work with the authors of (Rajpurkar et al., 2016) to retrieve the final test score. In our experiments, we report the test set result of our best single model.8 For further analysis, we only report the performance on the validation set, as we do not 6We also define a minimum threshold for elimination. If there is no answer with 2-gram score higher than the threshold, we remove the paraphrase s(cid:48) from our sampling process. If all paraphrases of a sentence are eliminated, no sampling will be performed for that sentence. 7SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/ 8On the leaderboard of SQuAD, there are many strong candidates in the “ensemble” category with high EM/F1 scores. It is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. Under review as a conference paper at ICLR 2018 want to probe the unseen test set by frequent submissions. According to the observations from our experiments and previous works, such as (Seo et al., 2016; Xiong et al., 2016; Wang et al., 2017; Chen et al., 2017), the validation score is well correlated with the test score. 4.1.2 EXPERIMENTAL SETTINGS Data Preprocessing. We use the NLTK tokenizer to preprocess the data.9 The maximum context length is set to 400 and any paragraph longer than that would be discarded. During training, we batch the examples by length and dynamically pad the short sentences with special symbol <PAD>. The maximum answer length is set to 30. We use the pretrained 300-D word vectors GLoVe (Penning- ton et al., 2014), and all the out-of-vocabulary words are replace with <UNK>, whose embedding is updated during training. Each character embedding is randomly initialized as a 200-D vector, which is updated in training as well. We generate two additional augmented datasets obtained from Section 3, which contain 140K and 240K examples and are denoted as “data augmentation × 2” and “data augmentation × 3” respectively, including the original data. Training details. We employ two type of standard regularizations. First, we use L2 weight decay on all the trainable variables, with parameter λ = 3 × 10−7. 
We additionally use dropout on word, character embeddings and between layers, where the word and character dropout rates are 0.1 and 0.05 respectively, and the dropout rate between every two layers is 0.1. We also adopt the stochastic depth method (layer dropout) (Huang et al., 2016) within each embedding or model encoder layer, where sublayer l has survival probability pl = 1 − l L (1 − pL) where L is the last layer and pL = 0.9. The hidden size and the convolution filter number are all 128, the batch size is 32, training steps are 150K for original data, 250K for “data augmentation × 2”, and 340K for “data augmentation × 3”. The numbers of convolution layers in the embedding and modeling encoder are 4 and 2, kernel sizes are 7 and 5, and the block numbers for the encoders are 1 and 7, respectively. We use the ADAM optimizer (Kingma & Ba, 2014) with β1 = 0.8, β2 = 0.999, (cid:15) = 10−7. We use a learning rate warm-up scheme with an inverse exponential increase from 0.0 to 0.001 in the first 1000 steps, and then maintain a constant learning rate for the remainder of training. Exponential moving average is applied on all trainable variables with a decay rate 0.9999. Finally, we implemented our model in Python using Tensorflow (Abadi et al., 2016) and carried out our experiments on an NVIDIA p100 GPU.10 4.1.3 RESULTS Accuracy. The F1 and Exact Match (EM) are two evaluation metrics of accuracy for the model performance. F1 measures the portion of overlap tokens between the predicted answer and groundtruth, while exact match score is 1 if the prediction is exactly the same as groundtruth or 0 otherwise. We show the results in comparison with other methods in Table 2. To make a fair and thorough comparison, we both report the published results in their latest papers/preprints and the updated but not documented results on the leaderboard. We deem the latter as the unpublished results. The accuracy (EM/F1) performance of our model is on par with the state-of-the-art models. In particular, our model trained on the original data set outperforms all the documented results in the literature, in terms of both EM and F1 scores (see second column of Table 2). When trained with the augmented data with proper sampling scheme, our model can get significant gain 1.5/1.1 on EM/F1. Finally, our result on the official test set is 76.2/84.6, which significantly outperforms the best documented result 73.2/81.8. Speedup over RNN counterparts. To measure the speedup of our model against the RNN models, we also test the corresponding model architecture with each encoder block replaced with a stack of bidirectional LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer 9NLTK implementation: http://www.nltk.org/ 10TensorFlow implementation: https://www.tensorflow.org/ 11The scores are collected from the latest version of the documented related work on Oct 27, 2017. 12The scores are collected from the leaderboard on Oct 27, 2017. 
7 ', 'original_lines': '5 English to French NMTFrench to English NMTAutrefois, le thé avait été utilisé surtout pour les moines bouddhistes pour rester éveillé pendant la méditation.In the past, tea was used mostly for Buddhist monks to stay awake during the meditation.Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.(input sentence)(paraphrased sentence)(translation sentence)k translationsk^2 paraphrases Under review as a conference paper at ICLR 2018 In this section, we conduct experimental studies to test the effectiveness and efficiency of our model. 4.1 DATASET The test bed is the Stanford Question Answering Dataset (SQuAD)4 (Rajpurkar et al., 2016) for machine comprehension. The dataset is constructed using 536 articles randomly sampled from of the paragraphs is around 250 while the question is of 10 tokens. However, there are exceptionally long cases. Only the training and validation data are publicly available, while the testing data is hidden that one has to submit the code to a colab and cope with the authors of (Rajpurkar et al., 2016) to retrieve the final testing score. For the time being, we only show the performance on the validation set, as our code is being submitted for official evaluation. According to the previous works, such as (Seo et al., 2016; Xiong et al., 2016; Wang et al., 2017; Chen et al., 2017), the testing score is very close to and always a bit higher than the validation score, so we believe the comparison is convincing. 4.2 BASIC SETUP Data Preprocessing We use the NLTK tokenizer5 to preprocess the data. During training, the context length is set to 400. Any paragraph longer than that would be discarded and the short ones are pad with special symbol <PAD>. The maximum answer length is set to 30. We use the pre- trained 200-D word vectors GLoVe (Pennington et al., 2014), and all the out-of-vocabulary words are replace with <UNK>, whose embedding is updated during training. Each character embedding is randomly initialized as a 200-D vector, which is updated in training as well. We choose two aug- mented datasets obtained from Section 3, which contain 140K and 240K examples and are denoted as “data aug x2” and “data aug x3” respectively, including the original data. Model Parameters The hidden size and the convolution filter number are all 128, the batch size is 32, training steps are 120K for original data, 200K for data aug x2, and 250K for data aug x3. The numbers of convolution layers in the embedding and modeling encoder are 4 and 2, kernel sizes are 5 and 7, and the block numbers for the encoders are 1 and 7, respectively. The word and character dropout are 0.1 and 0.05 respectively, and we apply dropout between every two layers of 0.1. Optimization Details We adopt the ADAM (Kingma & Ba, 2014) optimizer with β1 = 0.8, β2 = 0.999, (cid:15) = 10−7. We use a learning rate warm-up scheme with an inverse exponential increase from 0.0 to 0.001 in the first 1000 steps, and then maintain a constant learning rate for the remainder of training. Exponential moving average is applied on all trainable variables with a decay rate 0.9999. Platform All the codes are written with python using the toolkit Tensorflow6 (Abadi et al., 2016) and run on a NVIDIA p100 GPU. 4.3 THE RESULTS AND ANALYSIS Accuracy The F1 and Exact Match (EM) are two evaluation metrics of accuracy for the model per- formance. 
F1 measures the portion of overlap tokens between the predicted answer and groundtruth, while exact match score is 1 if the prediction is exactly the same as groundtruth or 0 otherwise. We show the result in Table 2. To make a fair and thorough comparison, we both report the published results in their latest papers/preprints and the updated but not documented results on the leaderboard. We deem the latter as the unpublished results. The accuracy (EM/F1) performance of our model is on par with the state-of-the-art models. In particular, our model outperforms all the documented results in the literature, in terms of the F1 score (see second column of Table 2). 4https://rajpurkar.github.io/SQuAD-explorer/ 5http://www.nltk.org/ 6https://www.tensorflow.org/ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': 'RaSoR (Lee et al., 2016) FastQAExt (Weissenborn et al., 2017) ReasoNet (Shen et al., 2017b) ', 'paragraph_idx': 36, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'Dynamic Coattention Networks (Xiong et al., 2016) FastQA (Weissenborn et al., 2017) BiDAF (Seo et al., 2016) ', 'modified_lines': 'SEDT (Liu et al., 2017a) ', 'original_lines': 'SEDT (Liu et al., 2017) ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 36}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'EM / F1 40.4 / 51.0 62.5 / 71.0 ', 'paragraph_idx': 36, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'R-Net (Wang et al., 2017) BiDAF + Self Attention + ELMo Reinforced Mnemonic Reader (Hu et al., 2017) ', 'modified_lines': 'Dev set: Our Model Dev set: Our Model + data augmentation ×2 Dev set: Our Model + data augmentation ×3 Test set: Our Model + data augmentation ×3 Published11 ', 'original_lines': 'Our Model + data aug x3 Published7 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 36}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'EM / F1 40.4 / 51.0 62.5 / 71.0 ', 'paragraph_idx': 36, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': '72.3 / 80.7 N/A 73.2 / 81.8 ', 'modified_lines': '73.6 / 82.7 74.5 / 83.2 75.1 / 83.8 76.2 / 84.6 LeaderBoard12 ', 'original_lines': '73.0 / 82.2 LeaderBoard8 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 36}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': '- convolution in encoders - self-attention in encoders EM / F1 5 RELATED WORK In this paper, we propose a fast and accurate end-to-end model for machine reading comprehension. Our core innovation is to completely remove the recurrent networks in the base model. The resulting model is fully feedforward, composed entirely of separable convolutions, attention, linear layers, and REFERENCES ', 'paragraph_idx': 42, 'before_section': None, 'context_before': '13.3x 8.8x ', 'modified_lines': 'Table 3: Speed comparison between the proposed model and RNN-based models on SQuAD dataset, all with batch size 32. RNN-x-y indicates an RNN with x layers each containing y hidden units. The RNNs used here are bidirectional LSTM. The processing speed is measured by batches/second, so higher is faster. Speedup over BiDAF model. In addition, on the same hardware (a NVIDIA p100 GPU), we also compare the training time of getting the same performance between ours and the BiDAF model13(Seo et al., 2016), a classic RNN-based model on SQuAD. 
We adopt the default settings in the original code to get its best performance, where the batch sizes for training and testing are both 60. The result is shown in Table 4. In a nutshell, our model is respectively 4.3 and 7.0 times faster than BiDAF in training and testing speed. Besides, we only need one fifth of the training time to achieve BiDAF’s best F1 score (77.0) on dev set. 4.1.4 ABALATION STUDY AND ANALYSIS We conduct ablation studies on components of the proposed model, and investigate the effect of augmented data. The validation scores on the development set are shown in Table 5. As can be seen 13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Under review as a conference paper at ICLR 2018 Our model BiDAF Speedup Train time to get 77.0 F1 3 hours 15 hours 5.0x Train speed 102 samples/s 24 samples/s 4.3x Inference speed 259 samples/s 37samples/s 7.0x Table 4: Speed comparison between our model and BiDAF (Seo et al., 2016) on SQuAD dataset. Base Model replace sep convolution with normal convolution + data augmentation ×2 (1:1:0) + data augmentation ×3 (1:1:1) + data augmentation ×3 (1:2:1) + data augmentation ×3 (2:2:1) + data augmentation ×3 (2:1:1) + data augmentation ×3 (3:1:1) + data augmentation ×3 (4:1:1) + data augmentation ×3 (5:1:1) 73.6 / 82.7 70.8 / 80.0 72.2 / 81.4 72.9 / 82.0 74.5 / 83.2 74.8 / 83.4 74.3 / 83.1 74.9 / 83.6 75.0 / 83.6 75.1 / 83.8 75.0 / 83.6 74.9 / 83.5 Difference to Base Model EM / F1 -2.8 / -2.7 -1.4 / -1.3 - 0.7 / -0.7 +0.9 / +0.5 +1.2 / +0.7 +0.7 / +0.4 +1.3 / +0.9 +1.4 / +0.9 +1.5 / +1.1 +1.4 / +0.9 +1.3 / +0.8 Table 5: An ablation study on data augmentation and other aspects of our model. The reported results are obtained on the development set. For rows containing entry “data augmentation”, “×N ” means the data is enhanced to N times as large as the original size, while the ratio in the bracket indicates the sampling ratio among the original, English-French-English and English-German-English data during training. from the table, the use of convolutions in the encoders is crucial: both F1 and EM drop drastically by almost 3 percent if it is removed. Self-attention in the encoders is also a necessary component that contributes 1.4/1.3 gain of EM/F1 to the ultimate performance. We interpret these phenomena as follows: the convolutions capture the local structure of the context while the self-attention is able to model the global interactions between text. Hence they are complimentary to but cannot replace each other. The use of separable convolutions in lieu of tradition convolutions also has a prominent contribution to the performance, which can be seen by the slightly worse accuracy caused by replacing separable convolution with normal convolution. The Effect of Data Augmentation. We additionally perform experiments to understand the val- ues of augmented data as their amount increases. As the last block of rows in the table shows, data augmentation proves to be helpful in further boosting performance. Making the training data twice as large by adding the En-Fr-En data only (ratio 1:1 between original training data and augmented data, as indicated by row “data augmentation × 2 (1:1:0)”) yields an increase in the F1 by 0.5 per- cent. While adding more augmented data with French as a pivot does not provide performance gain, injecting additional augmented data En-De-En of the same amount brings another 0.2 improvement in F1, as indicated in entry “data augmentation × 3 (1:1:1)”. 
We may attribute this gain to the diversity of the new data, which is produced by the translator of the new language. The Effect of Sampling Scheme. Although injecting more data beyond × 3 does not benefit the model, we observe that a good sampling ratio between the original and augmented data during training can further boost the model performance. In particular, when we increase the sampling weight of augmented data from (1:1:1) to (1:2:1), the EM/F1 performance drops by 0.5/0.3. We conjecture that it is due to the fact that augmented data is noisy because of the back-translation, so it should not be the dominant data of training. We confirm this point by increasing the ratio of the original data from (1:2:1) to (2:2:1), where 0.6/0.5 performance gain on EM/F1 is obtained. Then we fix the portion of the augmented data, and search the sample weight of the original data. Empirically, the ratio (3:1:1) yields the best performance, with 1.5/1.1 gain over the base model on EM/F1. This is also the model we submitted for test set evaluation. 9 Under review as a conference paper at ICLR 2018 4.2 EXPERIMENTS ON TRIVIAQA In this section, we test our model on another dataset TriviaQA (Joshi et al., 2017), which consists of 650K context-query-answer triples. There are 95K distinct question-answer pairs, which are au- thored by Trivia enthusiasts, with 6 evidence documents (context) per question on average, which are either crawled from Wikipedia or Web search. Compared to SQuAD, TriviaQA is more chal- lenging in that: 1) its examples have much longer context (2895 tokens per context on average) and may contain several paragraphs, 2) it is much noisier than SQuAD due to the lack of human labeling, 3) it is possible that the context is not related to the answer at all, as it is crawled by key words. In this paper, we focus on testing our model on the subset consisting of answers from Wikipedia. According to the previous work (Joshi et al., 2017; Hu et al., 2017; Pan et al., 2017), the same model would have similar performance on both Wikipedia and Web, but the latter is five time larger. Hence, without loss of generality, we omit the experiment on Web data to make the training time manageable. Due to the multi-paragraph nature of the context, researchers recently find that simple hierarchi- cal or multi-step reading tricks, such as first predicting which paragraph to read and then apply models like BiDAF to pinpoint the answer within that paragraph (Clark & Gardner, 2017), can sig- nificantly boost the performance on TriviaQA. However, in this paper, we focus on comparing with the single-paragraph reading baselines only. We believe that our model can be plugged to those multi-paragraph reading approaches to achieve the similar or better performance, but it is out of the scope of this paper. The Wikipedia sub-dataset contains around 92K training and 11K development examples. The aver- age context and question lengths are 495 and 15 respectively. In addition to the full development set, the authors of Joshi et al. (2017) also pick a verified subset that all the contexts inside can answer the associated questions. As the text could be long, we adopt the data processing similar to Hu et al. (2017); Joshi et al. (2017). In particular, for training and validation, we randomly select a window of length 256 and 400 encapsulating the answer respectively. All the remaining setting are the same as SQuAD experiment, except that the training steps are set to 120K. Accuracy. 
The accuracy performance on the development set is shown in Table 6. Again, we can see that our model outperforms the baselines in terms of F1 and EM on Full development set, and is on par with the state-of-the-art on the Verified dev set. Single Model Random (Joshi et al., 2017) Classifier (Joshi et al., 2017) BiDAF (Seo et al., 2016) MEMEN (Pan et al., 2017) M-Reader (Hu et al., 2017)∗ Our Model Full EM / F1 12.7 / 22.5 23.4 / 27.7 40.3 / 45.7 43.2/ 46.9 46.9/ 52.9∗ 51.1 / 56.6 Verified EM / F1 13.8 / 23.4 23.6 / 27.9 46.5 /52.8 49.3 / 55.8 54.5/ 59.5∗ 53.3/ 59.2 Table 6: The development set performances of different single-paragraph reading models on the Wikipedia domain of TriviaQA dataset. Note that ∗ indicates the result on test set. Speedup over RNN counterparts. In addition to accuracy, we also benchmark the speed of our model against the RNN counterparts. As Table 7 shows, not surprisingly, our model has 3 to 11 times speedup in training and 3 to 9 times acceleration in inference, similar to the finding in SQuAD dataset. Machine reading comprehension and automated question answering has become an important topic in the NLP domain. Their popularity can be attributed to an increase in publicly available anno- tated datasets, such as SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), CNN/Daily News (Hermann et al., 2015), WikiReading (Hewlett et al., 2016), Children Book Test (Hill et al., 10 Under review as a conference paper at ICLR 2018 Training Inference Ours RNN-1-128 1.8 3.2 0.41 0.89 Speedup RNN-2-128 Speedup RNN-3-128 4.4x 3.6x 0.20 0.47 9.0x 6.8x 0.11 0.26 Speedup 16.4x 12.3x Table 7: Speed comparison between the proposed model and RNN-based models on TriviaQA Wikipedia dataset, all with batch size 32. RNN-x-y indicates an RNN with x layers each containing y hidden units. The RNNs used here are bidirectional LSTM. The processing speed is measured by batches/second, so higher is faster. 2015), etc. A great number of end-to-end neural network models have been proposed to tackle these challenges, including BiDAF (Seo et al., 2016), r-net (Wang et al., 2017), DCN (Xiong et al., 2016), ReasoNet (Shen et al., 2017b), Document Reader (Chen et al., 2017), Interactive AoA Reader (Cui et al., 2017) and Reinforced Mnemonic Reader (Hu et al., 2017). Recurrent Neural Networks (RNNs) have featured predominatnly in Natural Language Processing in the past few years. The sequential nature of the text coincides with the design philosophy of RNNs, and hence their popularity. In fact, all the reading comprehension models mentioned above are based on RNNs. Despite being common, the sequential nature of RNN prevent parallel computation, as tokens must be fed into the RNN in order. Another drawback of RNNs is difficulty modeling long dependencies, although this is somewhat alleviated by the use of Gated Recurrent Unit (Chung et al., 2014) or Long Short Term Memory architectures (Hochreiter & Schmidhuber, 1997). For simple tasks such as text classification, with reinforcement learning techniques, models (Yu et al., 2017) have been proposed to skip irrelevant tokens to both further address the long dependencies issue and speed up the procedure. However, it is not clear if such methods can handle complicated tasks such as Q&A. The reading comprehension task considered in this paper always needs to deal with long text, as the context paragraphs may be hundreds of words long. 
Recently, attempts have been made to replace the recurrent networks by full convolution or full attention architectures (Kim, 2014; Gehring et al., 2017; Vaswani et al., 2017b; Shen et al., 2017a). Those models have been shown to be not only faster than the RNN architectures, but also effective in other tasks, such as text classification, machine translation or sentiment analysis. To the best of our knowledge, our paper is the first work to achieve both fast and accurate reading comprehension model, by discarding the recurrent networks in favor of feed forward architectures. Our paper is also the first to mix self-attention and convolutions, which proves to be empirically effective and achieves a significant gain of 2.7 F1. Note that Raiman & Miller (2017) recently pro- posed to accelerate reading comprehension by avoiding bi-directional attention and making compu- tation conditional on the search beams. Nevertheless, their model is still based on the RNNs and the accuracy is not competitive, with an EM 68.4 and F1 76.2. Weissenborn et al. (2017) also tried to build a fast Q&A model by deleting the context-query attention module. However, it again relied on RNN and is thus intrinsically slower than ours. The elimination of attention further has sacrificed the performance (with EM 68.4 and F1 77.1). Data augmentation has also been explored in natural language processing. For example, Zhang et al. (2015) proposed to enhance the dataset by replacing the words with their synonyms and showed its effectiveness in text classification. Raiman & Miller (2017) suggested using type swap to augment the SQuAD dataset, which essentially replaces the words in the original paragraph with others with the same type. While it was shown to improve the accuracy, the augmented data has the same syn- tactic structure as the original data, so they are not sufficiently diverse. Zhou et al. (2017) improved the diversity of the SQuAD data by generating more questions. However, as reported by Wang et al. (2017), their method did not help improve the performance. The data augmentation technique pro- posed in this paper is based on paraphrasing the sentences by translating the original text back and forth. The major benefit is that it can bring more syntactical diversity to the enhanced data. 6 CONCLUSION 11 Under review as a conference paper at ICLR 2018 layer normalization, which is suitable for parallel computation. The resulting model is both fast and accurate: It surpasses the best published results on SQuAD dataset while up to 13/9 times faster than a competitive recurrent models for a training/inference iteration. Additionally, we find that we are able to achieve significant gains by utilizing data augmentation consisting of translating context and passage pairs to and from another language as a way of paraphrasing the questions and contexts. ', 'original_lines': 'Table 3: Speed comparison between the proposed model and RNN-based models, all with batch size 32. RNN-x-y indicates an RNN with x layers each containing y hidden units. The RNNs used here are bidirectional LSTM. The processing speed is measured by steps/second, so higher is faster. Ablation Study We conduct ablation studies on components of the proposed model, and inves- tigate the effect of augmented data. The validation scores on the development set are shown in Table 4. The first take-home message is that the convolutions in the encoders are crucial: both F1 and EM drop drastically by 3 percent if it is removed. 
Self-attention in the encoders is also a nec- 7The scores are collected from the latest version of the documented related work on Oct 27, 2017. 8The scores are collected from the leaderboard on Oct 27, 2017. 7 Under review as a conference paper at ICLR 2018 Single Model Full Model replaced with normal convolution + data aug ×2 + data aug ×3 72.5 / 81.4 69.4 / 78.5 70.9 / 80.0 71.8 / 80.7 72.8 / 82.0 73.0 / 82.2 Table 4: Ablation study on different components and data augmentation. The result is based on model trained with original data unless stated otherwise. essary component that contributes 1.4/1.6 gain of F1/EM to the ultimate performance. We interpret these phenomena as follows: the convolutions capture the local structure of the context while the self-attention is able to model the global interactions between text. Hence they are complimentary to but cannot replace each other. The use of separable convolutions in lieu of tradition convolutions also has a prominent contribution to the performance, which can be seen by the slightly worse ac- curacy caused by replacing with normal convolution. As the last two rows of the table show, data augmentation proves to be helpful in further boosting performance. × 2 data augmentation yields an increase in the F1 by 0.6 percent. The performance gain lessens with more data, as generating an additional 100K examples yields only another 0.2 improvement in F1. We observe that injecting more data beyond this amount does not benefit the model. Reading comprehension and question answering has become an important topic in the NLP do- main. Its popularity can be attributed to an increase in publicly available annotated datasets, such as SQuAD (Rajpurkar et al., 2016), CNN/Daily News (Hermann et al., 2015), WikiReading (Hewlett et al., 2016), Children Book Test (Hill et al., 2015), etc. A plethora of effective end-to-end neural network models exist to approach these challenges, including BiDaf (Seo et al., 2016), r-net (Wang et al., 2017), DCN (Xiong et al., 2016), ReasoNet (Shen et al., 2017b), Document Reader (Chen et al., 2017), Interactive AoA Reader (Cui et al., 2017) and Reinforced Mnemonic Reader (Hu et al., 2017). Recurrent Neural Networks (RNNs) have been used extensively in the natural language processing area in the past few years. The sequential nature of the text coincides with the design philosophy of RNNs, and hence RNNs are generally the default choice for modeling text. In fact, all the reading comprehension models mentioned above are based on RNNs. While effective, the sequential na- ture of RNN prevent parallel computation, as tokens must be fed into the RNN in order. Another drawback of RNNs is difficulty modeling long dependencies, although this is somewhat alleviated by the use of Gated Recurrent Units (Chung et al., 2014) or Long Short Term Memory cells. The reading comprehension task considered in this paper always needs to deal with long text, as the context paragraphs may be hundreds of words long. Recently, attempts have been made to replace the recurrent networks by fully convolution or attention based architectures (Kim, 2014; Gehring et al., 2017; Vaswani et al., 2017b; Shen et al., 2017a). Those models have been shown to be not only faster than the RNN based ones, but also effective in different tasks, such as text classification, machine translation or sentiment analysis. 
To our knowledge, our paper is the first work on both fast and accurate reading comprehension model, by discarding the recurrent networks in favor of feed forward architectures. Note that Raiman & Miller (2017) recently proposed to accelerate reading comprehension by avoiding bi-directional attention and making computation conditional on the search beams. Nevertheless, their model is still based on the RNNs and the accuracy is not competitive, with an EM 68.4 and F1 76.2. Weissenborn et al. (2017) also tried to build a fast Q&A model by deleting the context-to-query attention module. However, it again relied on RNN and is thus intrinsically slower than ours. The elimination of attention further has sacrificed the performance (with EM 68.4 and F1 77.1). Data augmentation has also been explored in different scenarios of natural language processing. For example, Zhang et al. (2015) proposed to enhance the dataset by replacing the words with their 8 Under review as a conference paper at ICLR 2018 synonyms, which was shown to be effective in text classification. Recently, Raiman & Miller (2017) suggested using type swap to augment the SQuAD dataset, which essentially replaces the words in the original paragraph with others of exactly the same type. While it was shown to improve the accuracy, the augmented data has the same syntactic structure as the originals, so they are not sufficiently diverse. On the other hand, Zhou et al. (2017) aimed at improving the diversity of the SQuAD data by generating more questions. However, as reported by Wang et al. (2017), it did not help improve the performance. The data augmentation technique proposed in this paper is based on paraphrasing the sentences by translating the original text back and forth. The major benefit is that it can bring more syntactical diversity to the enhanced data. 6 CONCLUSION AND DISCUSSION layer normalization, which is suitable for parallel computation. We find that this results in a model that matches the best published results while up to 9 times faster than a comparable recurrent model during inference. Additionally, we find that we are able to achieve significant gains by utilizing data augmentation consisting of translating context and passage pairs to and from another language as a way of paraphrasing the underlying ideas. The mechanisms of our models are orthogonal to many existing variants. We believe that, when combined with more sophisticated techniques, it can obtain even higher performance. The data augmentation technique is general and of independent interest. We are convinced that many the NLP tasks can leverage it to obtain improvement in performance. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Yichen Gong and Samuel R. Bowman. Ruminating reader: Reasoning with gated multi-hop atten- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning, 2017. ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'after_section': None, 'context_after': 'Caiming Xiong, Victor Zhong, and Richard Socher. Dynamic coattention networks for question answering. CoRR, abs/1611.01604, 2016. URL http://arxiv.org/abs/1611.01604. 
Yang Yu, Wei Zhang, Kazi Saidul Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end reading comprehension with dynamic answer chunk ranking. CoRR, abs/1610.09996, 2016. URL http: ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural ', 'modified_lines': 'machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. Adams Wei Yu, Hongrae Lee, and Quoc V. Le. Learning to skim text. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pp. 1880–1890, 2017. ', 'original_lines': 'machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144. Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, and Ruslan Salakhutdinov. Words or characters? fine-grained gating for reading comprehension. CoRR, abs/1611.01724, 2016. URL http://arxiv.org/abs/1611.01724. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-02 06:50:17
|
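The revision record above describes how answers are scored on SQuAD: exact match (EM) is 1 only when the prediction equals the ground truth, and F1 measures the token overlap between the two. The snippet below is a minimal sketch of that token-overlap scoring, not the official SQuAD evaluation script; the normalization step (lowercasing, dropping punctuation and the articles a/an/the) follows the common SQuAD convention and is an assumption here.

import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, strip punctuation and articles, and collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    # EM is 1 when the normalized strings are identical, 0 otherwise.
    return float(normalize(prediction) == normalize(ground_truth))

def f1(prediction, ground_truth):
    # F1 over the multiset of tokens shared by prediction and ground truth.
    pred = normalize(prediction).split()
    gold = normalize(ground_truth).split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the early 1990s", "early 1990s"), f1("in the early 1990s", "early 1990s"))

With multiple reference answers, the per-question score is usually taken as the maximum over references; that detail is omitted here for brevity.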
ICLR.cc/2018/Conference
|
rJW3ooOXz
|
rkp8T3OQG
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 Under review as a conference paper at ICLR 2018 As our model is fast, we can train it with much more data than other models. To further improve the model, we propose a complementary data augmentation technique to enhance the training data. This ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'free encoder before finally decoding to the probability of each position being the start or end of the answer span. This architecture is shown in Figure 1. ', 'modified_lines': 'The key motivation behind the design of our model is the following: convolution captures the local structure of the text, while the self-attention learns the global interaction between each pair of words. The additional context-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subsequent modeling layers. Notice that by using convolutions and self-attention, our model is no longer recurrent. The feed- forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a simple comparison, our model can achieve the same accuracy (77.0 F1 score) as BiDAF model (Seo et al., 2016) within 3 hours training that otherwise should have taken 15 hours. The speed-up gain also allows us to train the model with more iterations to achieve better results than competitive models. For instance, if we allow our model to train for 18 hours, it achieves an F1 score of 82.7 on the dev set, which is much better than (Seo et al., 2016), and is on par with best published results. ', 'original_lines': 'The key motivation behind the design of our model is the following: the convolutions captures the local structure of the text, while the self-attention mechanism learns the global interaction between each pair of words. The additional context-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subse- quent modeling layers. Notice that by using convolutions and self-attention, our model is no longer recurrent. The feed-forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a simple comparison, our model can achieve the same accuracy (77.0 F1 score) as BiDAF model (Seo et al., 2016) within 3 hours training that otherwise should have taken 15 hours. The speed-up gain also allows us to train the model with more iterations to achieve better results than competitive models. For instance, if we allow the model to train for 18 hours, it achieves an F1 score of 82.7 on the dev set, which is much better than (Seo et al., 2016), and is on par with best published results. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': '2.2 MODEL OVERVIEW', 'after_section': '2.2 MODEL OVERVIEW', 'context_after': '2 ', 'paragraph_idx': 10, 'before_section': '2.2 MODEL OVERVIEW', 'context_before': 'ization methods in ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. ', 'modified_lines': '• Our model does not rely on the additional hand-crafted features such as POS tagging or name entity recognition, which have been used in Chen et al. 
(2017); Liu et al. (2017a); 1Concurrently there are other unpublished results either on the leaderboard or arxiv. For example, the current best documented model, SAN Liu et al. (2017b), achieves 84.4 F1 score which is on par with our method. ', 'original_lines': '1Concurrently there are on par unpublished results either on the leaderboard or arxiv. For example, the current best documented model, SAN Liu et al. (2017b), achieves 84.4 F1 score which is on par with our method. ', 'after_paragraph_idx': 10, 'before_paragraph_idx': 10}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Hu et al. (2017), nor multi-hop reading techniques (Hu et al., 2017; Shen et al., 2017b; Gong & Bowman, 2017). ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block. ', 'modified_lines': '', 'original_lines': '• Our model does not rely on the additional hand-crafted features such as POS tagging or name entity recognition, which have been used in Chen et al. (2017); Liu et al. (2017a); ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 THE MODEL', 'after_section': None, 'context_after': '4.1 EXPERIMENTS ON SQUAD 4.1.1 DATASET We consider the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for ma- 6We also define a minimum threshold for elimination. If there is no answer with 2-gram score higher than the threshold, we remove the paraphrase s(cid:48) from our sampling process. If all paraphrases of a sentence are ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '4 EXPERIMENTS ', 'modified_lines': 'In this section, we conduct experiments to study the performance of our model and the data aug- mentation technique. We will primarily benchmark our model on the SQuAD dataset (Rajpurkar et al., 2016), considered to be one of the most competitive datasets in Q&A. We also conduct similar studies on TriviaQA (Joshi et al., 2017), another Q&A dataset, to show that the effectiveness and efficiency of our model are general. chine reading comprehension.7 SQuAD contains 107.7K query-answer pairs, with 87.5K for train- ing, 10.1K for validation, and another 10.1K for testing. The typical length of the paragraphs is around 250 while the question is of 10 tokens although there are exceptionally long cases. Only the training and validation data are publicly available, while the test data is hidden that one has to submit the code to a Codalab and work with the authors of (Rajpurkar et al., 2016) to retrieve the final test score. In our experiments, we report the test set result of our best single model.8 For further analysis, we only report the performance on the validation set, as we do not want to probe the unseen test set by frequent submissions. According to the observations from our experiments and previous works, such as (Seo et al., 2016; Xiong et al., 2016; Wang et al., 2017; Chen et al., 2017), the validation score is well correlated with the test score. ', 'original_lines': 'In this section, we conduct experimental studies to test the performance of our model and the data augmentation technique. We will primarily benchmark our model on the SQuAD dataset (Rajpurkar et al., 2016), which is considered to be one of the most competitive datasets in Q&A. 
We also conduct similar studies on TriviaQA (Joshi et al., 2017), another Q&A dataset, to show that the effectiveness and efficiency of our model are general. chine comprehension.7 The dataset is constructed using 536 articles randomly sampled from Wikipedia. Each training example consists of a context paragraph of those articles and an associated query, and the answer must be a span from the paragraph. SQuAD contains 107.7K query-answer pairs, with 87.5K for training, 10.1K for validation, and another 10.1K for testing. The typical length of the paragraphs is around 250 while the question is of 10 tokens although there are exceptionally long cases. Only the training and validation data are publicly available, while the test data is hidden that one has to submit the code to a Codalab and work with the authors of (Rajpurkar et al., 2016) to retrieve the final test score. In our experiments, we report the test set result of our best single model.8 For further analysis, we only report the performance on the validation set, as we do not ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': '6 Under review as a conference paper at ICLR 2018 4.1.2 EXPERIMENTAL SETTINGS ', 'paragraph_idx': 30, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': '7SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/ 8On the leaderboard of SQuAD, there are many strong candidates in the “ensemble” category with high ', 'modified_lines': 'EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. ', 'original_lines': 'EM/F1 scores. It is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. want to probe the unseen test set by frequent submissions. According to the observations from our experiments and previous works, such as (Seo et al., 2016; Xiong et al., 2016; Wang et al., 2017; Chen et al., 2017), the validation score is well correlated with the test score. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 30}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': 'on all the trainable variables, with parameter λ = 3 × 10−7. We additionally use dropout on word, character embeddings and between layers, where the word and character dropout rates are 0.1 and 0.05 respectively, and the dropout rate between every two layers is 0.1. We also adopt the stochastic ', 'paragraph_idx': 31, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'Section 3, which contain 140K and 240K examples and are denoted as “data augmentation × 2” and “data augmentation × 3” respectively, including the original data. ', 'modified_lines': 'Training details. We employ two types of standard regularizations. First, we use L2 weight decay ', 'original_lines': 'Training details. We employ two type of standard regularizations. First, we use L2 weight decay ', 'after_paragraph_idx': 31, 'before_paragraph_idx': 30}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': '4.1.3 RESULTS Accuracy. The F1 and Exact Match (EM) are two evaluation metrics of accuracy for the model performance. 
F1 measures the portion of overlap tokens between the predicted answer and groundtruth, while exact match score is 1 if the prediction is exactly the same as groundtruth or the updated but not documented results on the leaderboard. We deem the latter as the unpublished 9NLTK implementation: http://www.nltk.org/ 10TensorFlow implementation: https://www.tensorflow.org/ ', 'paragraph_idx': 32, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': '1000 steps, and then maintain a constant learning rate for the remainder of training. Exponential moving average is applied on all trainable variables with a decay rate 0.9999. ', 'modified_lines': 'Finally, we implement our model in Python using Tensorflow (Abadi et al., 2016) and carry out our experiments on an NVIDIA p100 GPU.10 0 otherwise. We show the results in comparison with other methods in Table 2. To make a fair and thorough comparison, we both report both the published results in their latest papers/preprints and results. As can be seen from the table, the accuracy (EM/F1) performance of our model is on par with the state-of-the-art models. In particular, our model trained on the original dataset outperforms all the documented results in the literature, in terms of both EM and F1 scores (see second column of Table 2). When trained with the augmented data with proper sampling scheme, our model can get significant gain 1.5/1.1 on EM/F1. Finally, our result on the official test set is 76.2/84.6, which significantly outperforms the best documented result 73.2/81.8. Speedup over RNNs. To measure the speedup of our model against the RNN models, we also test the corresponding model architecture with each encoder block replaced with a stack of bidirectional LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. ', 'original_lines': 'Finally, we implemented our model in Python using Tensorflow (Abadi et al., 2016) and carried out our experiments on an NVIDIA p100 GPU.10 0 otherwise. We show the results in comparison with other methods in Table 2. To make a fair and thorough comparison, we both report the published results in their latest papers/preprints and results. The accuracy (EM/F1) performance of our model is on par with the state-of-the-art models. In particular, our model trained on the original data set outperforms all the documented results in the literature, in terms of both EM and F1 scores (see second column of Table 2). When trained with the augmented data with proper sampling scheme, our model can get significant gain 1.5/1.1 on EM/F1. Finally, our result on the official test set is 76.2/84.6, which significantly outperforms the best documented result 73.2/81.8. Speedup over RNN counterparts. To measure the speedup of our model against the RNN models, we also test the corresponding model architecture with each encoder block replaced with a stack of bidirectional LSTMs as is used in most existing models. 
Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer ', 'after_paragraph_idx': None, 'before_paragraph_idx': 31}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Training Inference ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Table 2: The performances of different models on SQuAD dataset. ', 'modified_lines': '', 'original_lines': 'numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. We can readily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'Speedup over BiDAF model. Our model BiDAF ', 'paragraph_idx': 38, 'before_section': None, 'context_before': '13.3x 8.8x ', 'modified_lines': 'Table 3: Speed comparison between our model and RNN-based models on SQuAD dataset, all with batch size 32. RNN-x-y indicates an RNN with x layers each containing y hidden units. Here, we use bidirectional LSTM as the RNN. The speed is measured by batches/second, so higher is faster. In addition, we also use the same hardware (a NVIDIA p100 GPU) and compare the training time of getting the same performance between our model and the BiDAF model13(Seo et al., 2016), a classic RNN-based model on SQuAD. We adopt the default settings in the original code to get its best performance, where the batch sizes for training and inference are both 60. The result is shown in Table 4 which shows that our model is 4.3 and 7.0 times faster than BiDAF in training and inference speed. Besides, we only need one fifth of the training time to achieve BiDAF’s best F1 score (77.0) on dev set. ', 'original_lines': 'Table 3: Speed comparison between the proposed model and RNN-based models on SQuAD dataset, all with batch size 32. RNN-x-y indicates an RNN with x layers each containing y hidden units. The RNNs used here are bidirectional LSTM. The processing speed is measured by batches/second, so higher is faster. In addition, on the same hardware (a NVIDIA p100 GPU), we also compare the training time of getting the same performance between ours and the BiDAF model13(Seo et al., 2016), a classic RNN-based model on SQuAD. We adopt the default settings in the original code to get its best performance, where the batch sizes for training and testing are both 60. The result is shown in Table 4. In a nutshell, our model is respectively 4.3 and 7.0 times faster than BiDAF in training and testing speed. Besides, we only need one fifth of the training time to achieve BiDAF’s best F1 score (77.0) on dev set. 4.1.4 ABALATION STUDY AND ANALYSIS We conduct ablation studies on components of the proposed model, and investigate the effect of augmented data. The validation scores on the development set are shown in Table 5. 
As can be seen 13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'Base Model ', 'paragraph_idx': 39, 'before_section': None, 'context_before': '7.0x Table 4: Speed comparison between our model and BiDAF (Seo et al., 2016) on SQuAD dataset. ', 'modified_lines': ' 4.1.4 ABALATION STUDY AND ANALYSIS We conduct ablation studies on components of the proposed model, and investigate the effect of augmented data. The validation scores on the development set are shown in Table 5. As can be seen 13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Under review as a conference paper at ICLR 2018 from the table, the use of convolutions in the encoders is crucial: both F1 and EM drop drastically by almost 3 percent if it is removed. Self-attention in the encoders is also a necessary component that contributes 1.4/1.3 gain of EM/F1 to the ultimate performance. We interpret these phenomena as follows: the convolutions capture the local structure of the context while the self-attention is able to model the global interactions between text. Hence they are complimentary to but cannot replace each other. The use of separable convolutions in lieu of tradition convolutions also has a prominent contribution to the performance, which can be seen by the slightly worse accuracy caused by replacing separable convolution with normal convolution. The Effect of Data Augmentation. We additionally perform experiments to understand the val- ues of augmented data as their amount increases. As the last block of rows in the table shows, data augmentation proves to be helpful in further boosting performance. Making the training data twice as large by adding the En-Fr-En data only (ratio 1:1 between original training data and augmented data, as indicated by row “data augmentation × 2 (1:1:0)”) yields an increase in the F1 by 0.5 per- cent. While adding more augmented data with French as a pivot does not provide performance gain, injecting additional augmented data En-De-En of the same amount brings another 0.2 improvement in F1, as indicated in entry “data augmentation × 3 (1:1:1)”. We may attribute this gain to the diversity of the new data, which is produced by the translator of the new language. The Effect of Sampling Scheme. Although injecting more data beyond × 3 does not benefit the model, we observe that a good sampling ratio between the original and augmented data during training can further boost the model performance. In particular, when we increase the sampling weight of augmented data from (1:1:1) to (1:2:1), the EM/F1 performance drops by 0.5/0.3. We conjecture that it is due to the fact that augmented data is noisy because of the back-translation, so it should not be the dominant data of training. We confirm this point by increasing the ratio of the original data from (1:2:1) to (2:2:1), where 0.6/0.5 performance gain on EM/F1 is obtained. Then we fix the portion of the augmented data, and search the sample weight of the original data. Empirically, the ratio (3:1:1) yields the best performance, with 1.5/1.1 gain over the base model on EM/F1. This is also the model we submitted for test set evaluation. 
', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 EXPERIMENTS ON TRIVIAQA', 'after_section': '4.2 EXPERIMENTS ON TRIVIAQA', 'context_after': 'are obtained on the development set. For rows containing entry “data augmentation”, “×N ” means the data is enhanced to N times as large as the original size, while the ratio in the bracket indicates the sampling ratio among the original, English-French-English and English-German-English data during training. 4.2 EXPERIMENTS ON TRIVIAQA ', 'paragraph_idx': 47, 'before_section': None, 'context_before': '+1.4 / +0.9 +1.3 / +0.8 ', 'modified_lines': 'Table 5: An ablation study of data augmentation and other aspects of our model. The reported results ', 'original_lines': 'Table 5: An ablation study on data augmentation and other aspects of our model. The reported results from the table, the use of convolutions in the encoders is crucial: both F1 and EM drop drastically by almost 3 percent if it is removed. Self-attention in the encoders is also a necessary component that contributes 1.4/1.3 gain of EM/F1 to the ultimate performance. We interpret these phenomena as follows: the convolutions capture the local structure of the context while the self-attention is able to model the global interactions between text. Hence they are complimentary to but cannot replace each other. The use of separable convolutions in lieu of tradition convolutions also has a prominent contribution to the performance, which can be seen by the slightly worse accuracy caused by replacing separable convolution with normal convolution. The Effect of Data Augmentation. We additionally perform experiments to understand the val- ues of augmented data as their amount increases. As the last block of rows in the table shows, data augmentation proves to be helpful in further boosting performance. Making the training data twice as large by adding the En-Fr-En data only (ratio 1:1 between original training data and augmented data, as indicated by row “data augmentation × 2 (1:1:0)”) yields an increase in the F1 by 0.5 per- cent. While adding more augmented data with French as a pivot does not provide performance gain, injecting additional augmented data En-De-En of the same amount brings another 0.2 improvement in F1, as indicated in entry “data augmentation × 3 (1:1:1)”. We may attribute this gain to the diversity of the new data, which is produced by the translator of the new language. The Effect of Sampling Scheme. Although injecting more data beyond × 3 does not benefit the model, we observe that a good sampling ratio between the original and augmented data during training can further boost the model performance. In particular, when we increase the sampling weight of augmented data from (1:1:1) to (1:2:1), the EM/F1 performance drops by 0.5/0.3. We conjecture that it is due to the fact that augmented data is noisy because of the back-translation, so it should not be the dominant data of training. We confirm this point by increasing the ratio of the original data from (1:2:1) to (2:2:1), where 0.6/0.5 performance gain on EM/F1 is obtained. Then we fix the portion of the augmented data, and search the sample weight of the original data. Empirically, the ratio (3:1:1) yields the best performance, with 1.5/1.1 gain over the base model on EM/F1. This is also the model we submitted for test set evaluation. 
9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 47, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'Training Inference ', 'paragraph_idx': 33, 'before_section': None, 'context_before': 'Table 6: The development set performances of different single-paragraph reading models on the Wikipedia domain of TriviaQA dataset. Note that ∗ indicates the result on test set. ', 'modified_lines': 'Speedup over RNNs. In addition to accuracy, we also benchmark the speed of our model against the RNN counterparts. As Table 7 shows, not surprisingly, our model has 3 to 11 times speedup in training and 3 to 9 times acceleration in inference, similar to the finding in SQuAD dataset. ', 'original_lines': 'Speedup over RNN counterparts. In addition to accuracy, we also benchmark the speed of our model against the RNN counterparts. As Table 7 shows, not surprisingly, our model has 3 to 11 times speedup in training and 3 to 9 times acceleration in inference, similar to the finding in SQuAD dataset. 5 RELATED WORK Machine reading comprehension and automated question answering has become an important topic in the NLP domain. Their popularity can be attributed to an increase in publicly available anno- tated datasets, such as SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), CNN/Daily News (Hermann et al., 2015), WikiReading (Hewlett et al., 2016), Children Book Test (Hill et al., 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-02 08:05:41
|
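The ablation rows quoted above compare sampling ratios such as (3:1:1) between the original SQuAD training data and the two back-translated copies (En-Fr-En and En-De-En). Below is a minimal sketch of that per-batch source sampling; the dataset lists, batch size, and the use of Python's random module are placeholders rather than the authors' training pipeline.

import random

def make_batch_sampler(original, en_fr_en, en_de_en, ratio=(3, 1, 1), batch_size=32, seed=0):
    # Choose a data source for each batch with probability proportional to `ratio`,
    # then draw a batch uniformly from that source.
    rng = random.Random(seed)
    sources = [original, en_fr_en, en_de_en]
    while True:
        source = rng.choices(sources, weights=ratio, k=1)[0]
        yield rng.sample(source, k=min(batch_size, len(source)))

# Toy usage with placeholder examples standing in for (context, question, answer) triples.
original = [("ctx-orig", i) for i in range(1000)]
en_fr_en = [("ctx-fr", i) for i in range(1000)]
en_de_en = [("ctx-de", i) for i in range(1000)]
sampler = make_batch_sampler(original, en_fr_en, en_de_en)
batch = next(sampler)

Keeping the original data dominant, as the quoted text argues, limits how much back-translation noise enters each training step.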
ICLR.cc/2018/Conference
|
rkp8T3OQG
|
SkRoFGcXG
|
[]
|
2018-01-03 08:51:49
|
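The ablation above also contrasts depthwise-separable convolutions with ordinary ones ("replace sep convolution with normal convolution"). The NumPy sketch below shows what a separable 1-D convolution computes: one depthwise filter per input channel followed by a pointwise 1x1 mixing step. The shapes, the zero padding, and the absence of bias and nonlinearity are simplifying assumptions, and this is not the authors' implementation.

import numpy as np

def separable_conv1d(x, depthwise, pointwise):
    # x:         (length, channels_in)
    # depthwise: (kernel, channels_in)        one filter per input channel
    # pointwise: (channels_in, channels_out)  1x1 convolution that mixes channels
    length, c_in = x.shape
    k = depthwise.shape[0]
    pad = k // 2
    x_pad = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((length, c_in))
    for t in range(length):
        window = x_pad[t:t + k]                      # (kernel, channels_in)
        out[t] = np.sum(window * depthwise, axis=0)  # depthwise step
    return out @ pointwise                           # pointwise step

x = np.random.randn(20, 8)        # a length-20 sequence with 8 channels
dw = 0.1 * np.random.randn(7, 8)  # kernel size 7
pw = 0.1 * np.random.randn(8, 16)
y = separable_conv1d(x, dw, pw)   # shape (20, 16)

The parameter count is k*C_in + C_in*C_out instead of k*C_in*C_out for an ordinary convolution, which is the usual reason separable convolutions are cheaper.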
ICLR.cc/2018/Conference
|
SkRoFGcXG
|
Sy_4WspmM
|
[{'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': '4.2 EXPERIMENTS ON TRIVIAQA In this section, we test our model on another dataset TriviaQA (Joshi et al., 2017), which consists ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'the sampling ratio among the original, English-French-English and English-German-English data during training. ', 'modified_lines': '4.1.5 ROBUSTNESS STUDY In the following, we conduct experiments on the adversarial SQuAD dataset (Jia & Liang, 2017) to study the robustness of the proposed model. In this dataset, one or more sentences are appended to the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training. We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences 9 Under review as a conference paper at ICLR 2018 that are similar to the question, but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context. The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation. The results are shown in Table 6, where the F1 scores of other models are all extracted from Jia & Liang (2017).14 Again, we only compare the performance of single models. From Table 6, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. The injected noise in the training data might not only improve the generalization of the model but also make it robust to the adversarial sentences. Single Model Logistic (Rajpurkar et al., 2016) Match (Wang & Jiang, 2016) SEDT (Liu et al., 2017a) DCR (Yu et al., 2016) BiDAF (Seo et al., 2016) jNet (Zhang et al., 2017) Ruminating (Gong & Bowman, 2017) RaSOR (Lee et al., 2016) MPCM (Wang et al., 2016) ReasoNet (Shen et al., 2017b) Mnemonic (Hu et al., 2017) Our Model AddSent AddOneSent 23.2 27.3 33.9 37.8 34.3 37.9 37.4 39.5 40.3 39.4 46.6 45.2 30.4 39.0 44.8 45.1 45.7 47.0 47.7 49.5 50.0 50.3 56.0 55.7 Table 6: The F1 scores on the adversarial SQuAD test set. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'may contain several paragraphs, 2) it is much noisier than SQuAD due to the lack of human labeling, 3) it is possible that the context is not related to the answer at all, as it is crawled by key words. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'thored by Trivia enthusiasts, with 6 evidence documents (context) per question on average, which are either crawled from Wikipedia or Web search. Compared to SQuAD, TriviaQA is more chal- lenging in that: 1) its examples have much longer context (2895 tokens per context on average) and ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 EXPERIMENTS ON TRIVIAQA', 'after_section': '4.2 EXPERIMENTS ON TRIVIAQA', 'context_after': 'see that our model outperforms the baselines in terms of F1 and EM on Full development set, and is on par with the state-of-the-art on the Verified dev set. 
', 'paragraph_idx': 54, 'before_section': '4.2 EXPERIMENTS ON TRIVIAQA', 'context_before': 'of length 256 and 400 encapsulating the answer respectively. All the remaining setting are the same as SQuAD experiment, except that the training steps are set to 120K. ', 'modified_lines': '14Only F1 scores are reported in Jia & Liang (2017) 10 Under review as a conference paper at ICLR 2018 Accuracy. The accuracy performance on the development set is shown in Table 7. Again, we can ', 'original_lines': 'Accuracy. The accuracy performance on the development set is shown in Table 6. Again, we can ', 'after_paragraph_idx': 55, 'before_paragraph_idx': 53}, {'section': '4.2 EXPERIMENTS ON TRIVIAQA', 'after_section': '4.2 EXPERIMENTS ON TRIVIAQA', 'context_after': 'Wikipedia domain of TriviaQA dataset. Note that ∗ indicates the result on test set. Speedup over RNNs. In addition to accuracy, we also benchmark the speed of our model against training and 3 to 9 times acceleration in inference, similar to the finding in SQuAD dataset. Training ', 'paragraph_idx': 57, 'before_section': None, 'context_before': '54.5/ 59.5∗ 53.3/ 59.2 ', 'modified_lines': 'Table 7: The development set performances of different single-paragraph reading models on the the RNN counterparts. As Table 8 shows, not surprisingly, our model has 3 to 11 times speedup in ', 'original_lines': 'Table 6: The development set performances of different single-paragraph reading models on the the RNN counterparts. As Table 7 shows, not surprisingly, our model has 3 to 11 times speedup in ', 'after_paragraph_idx': 57, 'before_paragraph_idx': None}, {'section': '5 RELATED WORK', 'after_section': '5 RELATED WORK', 'context_after': 'Wikipedia dataset, all with batch size 32. RNN-x-y indicates an RNN with x layers each containing y hidden units. The RNNs used here are bidirectional LSTM. The processing speed is measured by batches/second, so higher is faster. ', 'paragraph_idx': 61, 'before_section': None, 'context_before': '16.4x 12.3x ', 'modified_lines': 'Table 8: Speed comparison between the proposed model and RNN-based models on TriviaQA ', 'original_lines': 'Table 7: Speed comparison between the proposed model and RNN-based models on TriviaQA ', 'after_paragraph_idx': 61, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'News (Hermann et al., 2015), WikiReading (Hewlett et al., 2016), Children Book Test (Hill et al., 2015), etc. A great number of end-to-end neural network models have been proposed to tackle these challenges, including BiDAF (Seo et al., 2016), r-net (Wang et al., 2017), DCN (Xiong et al., 2016), ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Machine reading comprehension and automated question answering has become an important topic in the NLP domain. 
Their popularity can be attributed to an increase in publicly available anno- tated datasets, such as SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), CNN/Daily ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'REFERENCES Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gre- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'able to achieve significant gains by utilizing data augmentation consisting of translating context and passage pairs to and from another language as a way of paraphrasing the questions and contexts. ', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. Disan: Direc- tional self-attention network for rnn/cnn-free language understanding. CoRR, abs/1709.04696, 2017a. URL http://arxiv.org/abs/1709.04696. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'flow for machine comprehension. CoRR, abs/1611.01603, 2016. URL http://arxiv.org/ abs/1611.01603. ', 'modified_lines': '', 'original_lines': '13 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-06 01:06:55
|
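Several tables in these records report speed as batches (or samples) per second for training and inference. The generic timing loop below is one way such throughput numbers are commonly measured; the warm-up count, the number of measured steps, and the placeholder step function are assumptions, not the benchmark actually used in the quoted experiments.

import time

def batches_per_second(step_fn, warmup=5, measured=50):
    # Run a few untimed warm-up steps first (graph construction, caches, etc.),
    # then time `measured` steps and report throughput.
    for _ in range(warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(measured):
        step_fn()
    return measured / (time.perf_counter() - start)

def fake_step():
    # Stand-in for one forward (or forward plus backward) pass on a batch.
    sum(i * i for i in range(100000))

print(round(batches_per_second(fake_step), 1), "batches/second")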
ICLR.cc/2018/Conference
|
Sy_4WspmM
|
B1X6lMZ0b
|
[]
|
2018-01-25 15:40:25
|
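The record that follows describes building a paraphrased document d' by back-translation and notes that the original answer may no longer appear verbatim; a footnote adds that a paraphrase is discarded when no candidate reaches a minimum 2-gram score. The exact matching rule is not fully shown in the quoted text, so the snippet below is only one plausible realization: score every candidate span of the paraphrased sentence by character-2-gram overlap with the original answer and keep the best one if it clears a threshold. The function names, span-length limit, and threshold value are hypothetical.

def char_bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bigram_score(a, b):
    # Jaccard overlap of character 2-grams; a simple similarity for short spans.
    ga, gb = char_bigrams(a), char_bigrams(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def realign_answer(paraphrase, answer, max_len=8, threshold=0.4):
    # Try every span of up to `max_len` tokens and keep the best-scoring one;
    # returning None mirrors discarding paraphrases that fall below the threshold.
    tokens = paraphrase.split()
    best_span, best_score = None, 0.0
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + max_len, len(tokens)) + 1):
            score = bigram_score(" ".join(tokens[i:j]), answer)
            if score > best_score:
                best_span, best_score = " ".join(tokens[i:j]), score
    return best_span if best_score >= threshold else None

print(realign_answer("Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.",
                     "for Buddhist monks"))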
ICLR.cc/2018/Conference
|
B1X6lMZ0b
|
BJx_E3RPf
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'In this paper, aiming to make the machine comprehension fast, we propose to remove the recurrent The key motivation behind the design of our model is the following: convolution captures the local structure of the text, while the self-attention learns the global interaction between each pair of words. The additional context-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subsequent modeling layers. simple comparison, our model can achieve the same accuracy (77.0 F1 score) as BiDAF model (Seo et al., 2016) within 3 hours training that otherwise should have taken 15 hours. The speed-up gain also allows us to train the model with more iterations to achieve better results than competitive models. For instance, if we allow our model to train for 18 hours, it achieves an F1 score of 82.7 on the dev set, which is much better than (Seo et al., 2016), and is on par with best published results. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'answering. Over the past few years, significant progress has been made with end-to-end models showing promising results on many challenging datasets. The most successful models generally employ two key ingredients: (1) a recurrent model to process sequential inputs, and (2) an attention ', 'modified_lines': 'component to cope with long term interactions. A successful combination of these two ingredients is the Bidirectional Attention Flow (BiDAF) model by Seo et al. (2016), which achieve strong results on the SQuAD dataset (Rajpurkar et al., 2016). A weakness of these models is that they are often slow for both training and inference due to their recurrent nature, especially for long texts. The expensive training not only leads to high turnaround time for experimentation and limits researchers from rapid iteration but also prevents the models from being used for larger dataset. Meanwhile the slow inference prevents the machine comprehension systems from being deployed in real-time applications. nature of these models. We instead exclusively use convolutions and self-attentions as the building blocks of encoders that separately encodes the query and context. Then we learn the interactions between context and question by standard attentions (Xiong et al., 2016; Seo et al., 2016; Bahdanau et al., 2015). The resulting representation is encoded again with our recurrency-free encoder before finally decoding to the probability of each position being the start or end of the answer span. This architecture is shown in Figure 1. ∗Work performed while AWY was with Google Brain. 1 Published as a conference paper at ICLR 2018 The feed-forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a ', 'original_lines': 'component to cope with long term interactions. A successful combination of these two ingredients is the Bidirectional Attention Flow (BiDAF) model by Seo et al. (2016), which achieve strong re- sults on the SQuAD dataset (Rajpurkar et al., 2016). A weakness of these models is that they are often slow for both training and inference due to their recurrent nature, especially for long texts. For instance, it usually takes a day to train the BiDAF model to achieve competitive accuracy on SQuAD. 
The expensive training not only leads to high turnaround time for experimentation and lim- its researchers from rapid iteration but also prevents the models from being used for larger dataset. Meanwhile the slow inference prevents the machine comprehension systems from being deployed in real-time applications. nature of these models. We instead exclusively use convolutions and self-attentions everywhere as the building blocks of encoders that separately encodes the query and context. Then we learn the interactions between context and question by standard attentions (Xiong et al., 2016; Seo et al., 2016; Bahdanau et al., 2015). The resulting representation is encoded again with our recurrency- free encoder before finally decoding to the probability of each position being the start or end of the answer span. This architecture is shown in Figure 1. Notice that by using convolutions and self-attention, our model is no longer recurrent. The feed- forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a 1 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': '2.2 MODEL OVERVIEW', 'after_section': None, 'context_after': '2 Figure 1: An overview of our model architecture (left) which has several Encoder Blocks. We use the same Encoder Block (right) throughout the model, only varying the number of convolutional ', 'paragraph_idx': 12, 'before_section': '2.2 MODEL OVERVIEW', 'context_before': 'the input tokens in parallel. Note that even though self-attention has already been used extensively in Vaswani et al. (2017a), the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our ', 'modified_lines': ' 1After our first submission of the draft, there are other unpublished results either on the leaderboard or arxiv. For example, the current (as of Dec 19, 2017) best documented model, SAN Liu et al. (2017b), achieves 84.4 F1 score which is on par with our method. Published as a conference paper at ICLR 2018 ', 'original_lines': 'experiments. The use of convolutions also allows us to take advantage of common regular- ization methods in ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. • Our model does not rely on the additional hand-crafted features such as POS tagging or name entity recognition, which have been used in Chen et al. (2017); Liu et al. (2017a); 1Concurrently there are other unpublished results either on the leaderboard or arxiv. For example, the current best documented model, SAN Liu et al. (2017b), achieves 84.4 F1 score which is on par with our method. Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 12}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Hu et al. (2017), nor multi-hop reading techniques (Hu et al., 2017; Shen et al., 2017b; Gong & Bowman, 2017). ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block. ', 'modified_lines': 'experiments. 
The use of convolutions also allows us to take advantage of common regular- ization methods in ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. • Our model does not rely on the additional hand-crafted features such as POS tagging or name entity recognition, which have been used in Chen et al. (2017); Liu et al. (2017a); ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The answer extraction addresses the aforementioned issue. Let s be the original sentence that con- tains the original answer a and s(cid:48) be its paraphrase. We identify the newly-paraphrased answer with ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'independently. We use k = 5, so each sentence has 25 paraphrase choices. A new document d(cid:48) is formed by simply replacing each sentence in d with a randomly-selected paraphrase. An obvious issue with this na¨ıve approach is that the original answer a might no longer be present in d(cid:48). ', 'modified_lines': '', 'original_lines': ' 2https://github.com/tensorflow/nmt 3http://www.statmt.org/wmt14/ 4http://www.statmt.org/wmt16/ 5https://github.com/tensorflow/nmt/blob/master/nmt/standard_hparams/ wmt16_gnmt_4_layer.json 5 English to French NMTFrench to English NMTAutrefois, le thé avait été utilisé surtout pour les moines bouddhistes pour rester éveillé pendant la méditation.In the past, tea was used mostly for Buddhist monks to stay awake during the meditation.Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.(input sentence)(paraphrased sentence)(translation sentence)k translationsk^2 paraphrases Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': 'around 250 while the question is of 10 tokens although there are exceptionally long cases. Only the training and validation data are publicly available, while the test data is hidden that one has to submit the code to a Codalab and work with the authors of (Rajpurkar et al., 2016) to retrieve the final test score. In our experiments, we report the test set result of our best single model.8 For further analysis, we only report the performance on the validation set, as we do not want to probe the unseen test set by frequent submissions. According to the observations from our experiments and previous works, such as (Seo et al., 2016; Xiong et al., 2016; Wang et al., 2017; Chen et al., 2017), the validation score is well correlated with the test score. Data Preprocessing. We use the NLTK tokenizer to preprocess the data.9 The maximum context length is set to 400 and any paragraph longer than that would be discarded. During training, we batch ', 'paragraph_idx': 31, 'before_section': None, 'context_before': '4.1 EXPERIMENTS ON SQUAD ', 'modified_lines': '4.1.1 DATASET AND EXPERIMENTAL SETTINGS Dataset. We consider the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for machine reading comprehension.7 SQuAD contains 107.7K query-answer pairs, with 87.5K for training, 10.1K for validation, and another 10.1K for testing. The typical length of the paragraphs is 6We also define a minimum threshold for elimination. 
If there is no answer with 2-gram score higher than the threshold, we remove the paraphrase s(cid:48) from our sampling process. If all paraphrases of a sentence are eliminated, no sampling will be performed for that sentence. 7SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/ 6 Published as a conference paper at ICLR 2018 ', 'original_lines': '4.1.1 DATASET We consider the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for ma- chine reading comprehension.7 SQuAD contains 107.7K query-answer pairs, with 87.5K for train- ing, 10.1K for validation, and another 10.1K for testing. The typical length of the paragraphs is 6We also define a minimum threshold for elimination. If there is no answer with 2-gram score higher than the threshold, we remove the paraphrase s(cid:48) from our sampling process. If all paraphrases of a sentence are eliminated, no sampling will be performed for that sentence. 7SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/ 8On the leaderboard of SQuAD, there are many strong candidates in the “ensemble” category with high EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. 6 Under review as a conference paper at ICLR 2018 4.1.2 EXPERIMENTAL SETTINGS ', 'after_paragraph_idx': 31, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': '9NLTK implementation: http://www.nltk.org/ 10TensorFlow implementation: https://www.tensorflow.org/ ', 'paragraph_idx': 36, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'Speedup over RNNs. To measure the speedup of our model against the RNN models, we also test the corresponding model architecture with each encoder block replaced with a stack of bidirectional ', 'modified_lines': ' 8On the leaderboard of SQuAD, there are many strong candidates in the “ensemble” category with high EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. ', 'original_lines': 'LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. ', 'after_paragraph_idx': 37, 'before_paragraph_idx': 36}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'Training Inference ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'Table 2: The performances of different models on SQuAD dataset. ', 'modified_lines': 'LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. 
We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'Our model BiDAF Speedup 3 hours 15 hours 5.0x ', 'paragraph_idx': 41, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'Speedup over BiDAF model. In addition, we also use the same hardware (a NVIDIA p100 GPU) and compare the training time of getting the same performance between our model and the BiDAF ', 'modified_lines': 'model13(Seo et al., 2016), a classic RNN-based model on SQuAD. We mostly adopt the default settings in the original code to get its best performance, where the batch sizes for training and inference are both 60. The only part we changed is the optimizer, where Adam with learning 0.001 is used here, as with Adadelta we got a bit worse performance. The result is shown in Table 4 which shows that our model is 4.3 and 7.0 times faster than BiDAF in training and inference speed. Besides, we only need one fifth of the training time to achieve BiDAF’s best F1 score (77.0) on dev set. 13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Published as a conference paper at ICLR 2018 Train time to get 77.0 F1 on Dev set ', 'original_lines': 'model13(Seo et al., 2016), a classic RNN-based model on SQuAD. We adopt the default settings in the original code to get its best performance, where the batch sizes for training and inference are both 60. The result is shown in Table 4 which shows that our model is 4.3 and 7.0 times faster than BiDAF in training and inference speed. Besides, we only need one fifth of the training time to achieve BiDAF’s best F1 score (77.0) on dev set. Train time to get 77.0 F1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 41}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': 'We conduct ablation studies on components of the proposed model, and investigate the effect of augmented data. The validation scores on the development set are shown in Table 5. As can be seen from the table, the use of convolutions in the encoders is crucial: both F1 and EM drop drastically by almost 3 percent if it is removed. Self-attention in the encoders is also a necessary component that contributes 1.4/1.3 gain of EM/F1 to the ultimate performance. We interpret these phenomena ', 'paragraph_idx': 43, 'before_section': None, 'context_before': 'Table 4: Speed comparison between our model and BiDAF (Seo et al., 2016) on SQuAD dataset. ', 'modified_lines': '4.1.3 ABALATION STUDY AND ANALYSIS ', 'original_lines': '4.1.4 ABALATION STUDY AND ANALYSIS 13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 43, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'Base Model ', 'paragraph_idx': 45, 'before_section': None, 'context_before': 'Then we fix the portion of the augmented data, and search the sample weight of the original data. Empirically, the ratio (3:1:1) yields the best performance, with 1.5/1.1 gain over the base model on EM/F1. This is also the model we submitted for test set evaluation. 
', 'modified_lines': ' 4.1.4 ROBUSTNESS STUDY In the following, we conduct experiments on the adversarial SQuAD dataset (Jia & Liang, 2017) to study the robustness of the proposed model. In this dataset, one or more sentences are appended to the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training. We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences that are similar to the question, but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context. The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation. The results are shown in Table 6, where the F1 scores of other models are all extracted from Jia & Liang (2017).14 Again, we only compare the performance of single models. From Table 6, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. 14Only F1 scores are reported in Jia & Liang (2017) 9 Published as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The injected noise in the training data might not only improve the generalization of the model but also make it robust to the adversarial sentences. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the sampling ratio among the original, English-French-English and English-German-English data during training. ', 'modified_lines': '', 'original_lines': '4.1.5 ROBUSTNESS STUDY In the following, we conduct experiments on the adversarial SQuAD dataset (Jia & Liang, 2017) to study the robustness of the proposed model. In this dataset, one or more sentences are appended to the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training. We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences 9 Under review as a conference paper at ICLR 2018 that are similar to the question, but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context. The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation. The results are shown in Table 6, where the F1 scores of other models are all extracted from Jia & Liang (2017).14 Again, we only compare the performance of single models. From Table 6, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Accuracy. The accuracy performance on the development set is shown in Table 7. 
Again, we can see that our model outperforms the baselines in terms of F1 and EM on Full development set, and is ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(2017); Joshi et al. (2017). In particular, for training and validation, we randomly select a window of length 256 and 400 encapsulating the answer respectively. All the remaining setting are the same as SQuAD experiment, except that the training steps are set to 120K. ', 'modified_lines': '', 'original_lines': ' 14Only F1 scores are reported in Jia & Liang (2017) 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'To the best of our knowledge, our paper is the first work to achieve both fast and accurate reading comprehension model, by discarding the recurrent networks in favor of feed forward architectures. Our paper is also the first to mix self-attention and convolutions, which proves to be empirically ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'shown to be not only faster than the RNN architectures, but also effective in other tasks, such as text classification, machine translation or sentiment analysis. ', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Christopher Clark and Matt Gardner. Simple and effective multi-paragraph reading comprehension. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014. ', 'modified_lines': '', 'original_lines': ' 12 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Kenton Lee, Tom Kwiatkowski, Ankur P. Parikh, and Dipanjan Das. Learning recurrent span repre- sentations for extractive question answering. CoRR, abs/1611.01436, 2016. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pp. 881–893, 2017. ', 'modified_lines': '', 'original_lines': '13 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-24 09:46:16
|
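The revision record above repeatedly references the answer re-extraction step of the backtranslation augmentation: a paraphrased sentence is only usable if some span of it scores above a minimum 2-gram threshold against the original answer (otherwise the paraphrase is removed from sampling). Below is a minimal illustrative sketch of that heuristic, not the authors' implementation; the Dice-style overlap and the `min_score` value are assumptions, since the record only states that character-level 2-gram scores against the answer's start/end words are compared to an elimination threshold.

```python
from collections import Counter

def char_bigrams(word):
    """Multiset of character-level 2-grams of a word."""
    return Counter(word[i:i + 2] for i in range(len(word) - 1))

def bigram_score(a, b):
    """Dice-style overlap between the character 2-grams of two strings
    (the exact scoring function is not given in the record; this is an
    assumed stand-in)."""
    ca, cb = char_bigrams(a), char_bigrams(b)
    total = sum(ca.values()) + sum(cb.values())
    return 2.0 * sum((ca & cb).values()) / total if total else 0.0

def extract_answer(paraphrase, answer, min_score=0.5):
    """Pick the span of `paraphrase` whose first/last words best match the
    first/last words of `answer`; return None if nothing beats `min_score`,
    in which case the paraphrase would be dropped from sampling.
    `min_score` is a placeholder threshold, not a value from the record."""
    tokens = paraphrase.split()
    a_words = answer.split()
    if not tokens or not a_words:
        return None
    start = [bigram_score(t, a_words[0]) for t in tokens]
    end = [bigram_score(t, a_words[-1]) for t in tokens]
    best_span, best = None, min_score
    for i in range(len(tokens)):
        for j in range(i, len(tokens)):
            score = start[i] + end[j]
            if score > best:
                best_span, best = " ".join(tokens[i:j + 1]), score
    return best_span
```

If no span clears the threshold, the paraphrase would simply be skipped, matching the elimination rule quoted in the record above.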
ICLR.cc/2018/Conference
|
BJx_E3RPf
|
H1-o1a0DG
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25- 29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1746– ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'machine translation. arXiv preprint arXiv:1706.03059, 2017. ', 'modified_lines': '', 'original_lines': '13 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
|
2018-02-24 10:34:00
|
ICLR.cc/2018/Conference
|
H1-o1a0DG
|
HJl_DJk_z
|
[{'section': '2.2 MODEL OVERVIEW', 'after_section': '2.2 MODEL OVERVIEW', 'context_after': '1After our first submission of the draft, there are other unpublished results either on the leaderboard or arxiv. For example, the current (as of Dec 19, 2017) best documented model, SAN Liu et al. (2017b), achieves 84.4 ', 'paragraph_idx': 12, 'before_section': '2.2 MODEL OVERVIEW', 'context_before': 'components: an embedding layer, an embedding encoder layer, a context-query attention layer, a model encoder layer and an output layer, as shown in Figure 1. These are the standard building blocks for most, if not all, existing reading comprehension models. However, the major differences ', 'modified_lines': 'between our approach and other methods are as follow: For both the embedding and modeling encoders, we only use convolutional and self-attention mechanism, discarding RNNs, which are used by most of the existing reading comprehension models. As a result, our model is much faster, as it can process the input tokens in parallel. Note that even though self-attention has already been used extensively in Vaswani et al. (2017a), the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. The use of convolutions also allows us to take advantage of common regularization methods in ', 'original_lines': 'between our approach and other methods are as follow: • For both the embedding and modeling encoders, we only use convolutional and self- attention mechanism, completely discarding RNNs, which are used by most of the existing reading comprehension models. As a result, our model is much faster, as it can process the input tokens in parallel. Note that even though self-attention has already been used extensively in Vaswani et al. (2017a), the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our ', 'after_paragraph_idx': 12, 'before_paragraph_idx': 12}, {'section': '2.2 MODEL OVERVIEW', 'after_section': None, 'context_after': 'In detail, our model consists of the following five layers: ', 'paragraph_idx': 13, 'before_section': None, 'context_before': 'the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block. ', 'modified_lines': 'ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. ', 'original_lines': 'experiments. The use of convolutions also allows us to take advantage of common regular- ization methods in ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. • Our model does not rely on the additional hand-crafted features such as POS tagging or name entity recognition, which have been used in Chen et al. (2017); Liu et al. (2017a); Hu et al. (2017), nor multi-hop reading techniques (Hu et al., 2017; Shen et al., 2017b; Gong & Bowman, 2017). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'after_section': None, 'context_after': '2https://github.com/tensorflow/nmt 3http://www.statmt.org/wmt14/ ', 'paragraph_idx': 22, 'before_section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'context_before': 'ple of SQuAD is a triple of (d, q, a) in which document d is a multi-sentence paragraph that has the answer a. 
When paraphrasing, we keep the question q unchanged (to avoid accidentally changing its meaning) and generate new triples of (d(cid:48), q, a(cid:48)) such that the new document d(cid:48) has the new answer ', 'modified_lines': 'a(cid:48) in it. The procedure happens in two steps: (i) document paraphrasing – paraphrase d into d(cid:48) and (b) answer extraction – extract a(cid:48) from d(cid:48) that closely matches a. For the document paraphrasing step, we first split paragraphs into sentences and paraphrase them independently. We use k = 5, so each sentence has 25 paraphrase choices. A new document d(cid:48) is formed by simply replacing each sentence in d with a randomly-selected paraphrase. An obvious issue with this na¨ıve approach is that the original answer a might no longer be present in d(cid:48). ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 22}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The answer extraction addresses the aforementioned issue. Let s be the original sentence that con- tains the original answer a and s(cid:48) be its paraphrase. We identify the newly-paraphrased answer with ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'English to French NMTFrench to English NMTAutrefois, le thé avait été utilisé surtout pour les moines bouddhistes pour rester éveillé pendant la méditation.In the past, tea was used mostly for Buddhist monks to stay awake during the meditation.Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.(input sentence)(paraphrased sentence)(translation sentence)k translationsk^2 paraphrases Published as a conference paper at ICLR 2018 ', 'modified_lines': '', 'original_lines': ' a(cid:48) in it. The procedure happens in two steps: (i) document paraphrasing – paraphrase d into d(cid:48) and (b) answer extraction – extract a(cid:48) from d(cid:48) that closely matches a. For the document paraphrasing step, we first split paragraphs into sentences and paraphrase them independently. We use k = 5, so each sentence has 25 paraphrase choices. A new document d(cid:48) is formed by simply replacing each sentence in d with a randomly-selected paraphrase. An obvious issue with this na¨ıve approach is that the original answer a might no longer be present in d(cid:48). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': '9NLTK implementation: http://www.nltk.org/ 10TensorFlow implementation: https://www.tensorflow.org/ ', 'paragraph_idx': 36, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'Speedup over RNNs. To measure the speedup of our model against the RNN models, we also test the corresponding model architecture with each encoder block replaced with a stack of bidirectional ', 'modified_lines': 'LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. 
', 'original_lines': ' 8On the leaderboard of SQuAD, there are many strong candidates in the “ensemble” category with high EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. ', 'after_paragraph_idx': 37, 'before_paragraph_idx': 36}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Training Inference ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Table 2: The performances of different models on SQuAD dataset. ', 'modified_lines': '', 'original_lines': 'LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Our model BiDAF Speedup ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Besides, we only need one fifth of the training time to achieve BiDAF’s best F1 score (77.0) on dev set. ', 'modified_lines': '', 'original_lines': '13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': '4.1.3 ABALATION STUDY AND ANALYSIS ', 'paragraph_idx': 42, 'before_section': None, 'context_before': '7.0x Table 4: Speed comparison between our model and BiDAF (Seo et al., 2016) on SQuAD dataset. ', 'modified_lines': ' 13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Published as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': 42, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Base Model ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Then we fix the portion of the augmented data, and search the sample weight of the original data. Empirically, the ratio (3:1:1) yields the best performance, with 1.5/1.1 gain over the base model on EM/F1. This is also the model we submitted for test set evaluation. ', 'modified_lines': '', 'original_lines': ' 4.1.4 ROBUSTNESS STUDY In the following, we conduct experiments on the adversarial SQuAD dataset (Jia & Liang, 2017) to study the robustness of the proposed model. In this dataset, one or more sentences are appended to the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training. We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences that are similar to the question, but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context. 
The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation. The results are shown in Table 6, where the F1 scores of other models are all extracted from Jia & Liang (2017).14 Again, we only compare the performance of single models. From Table 6, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. 14Only F1 scores are reported in Jia & Liang (2017) 9 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'The injected noise in the training data might not only improve the generalization of the model but also make it robust to the adversarial sentences. ', 'paragraph_idx': 44, 'before_section': None, 'context_before': 'the sampling ratio among the original, English-French-English and English-German-English data during training. ', 'modified_lines': '4.1.4 ROBUSTNESS STUDY In the following, we conduct experiments on the adversarial SQuAD dataset (Jia & Liang, 2017) to study the robustness of the proposed model. In this dataset, one or more sentences are appended to 9 Published as a conference paper at ICLR 2018 the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training. We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences that are similar to the question, but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context. The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation. The results are shown in Table 6, where the F1 scores of other models are all extracted from Jia & Liang (2017).14 Again, we only compare the performance of single models. From Table 6, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'multi-paragraph reading methods to achieve the similar or better performance, but it is out of the scope of this paper. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'like BiDAF to pinpoint the answer within that paragraph (Clark & Gardner, 2017), can signifi- cantly boost the performance on TriviaQA. However, in this paper, we focus on comparing with the single-paragraph reading baselines only. We believe that our model can be plugged into other ', 'modified_lines': '', 'original_lines': ' 10 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '2014) or Long Short Term Memory architectures (Hochreiter & Schmidhuber, 1997). 
For simple tasks such as text classification, with reinforcement learning techniques, models (Yu et al., 2017) have been proposed to skip irrelevant tokens to both further address the long dependencies issue ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'on RNNs. Despite being common, the sequential nature of RNN prevent parallel computation, as tokens must be fed into the RNN in order. Another drawback of RNNs is difficulty modeling long dependencies, although this is somewhat alleviated by the use of Gated Recurrent Unit (Chung et al., ', 'modified_lines': '', 'original_lines': ' 11 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'don Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: ', 'modified_lines': '', 'original_lines': ' 12 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pp. 2021–2031, 2017. ', 'modified_lines': '', 'original_lines': '13 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, August 13 - 17, 2017, pp. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'tional self-attention network for rnn/cnn-free language understanding. CoRR, abs/1709.04696, 2017a. URL http://arxiv.org/abs/1709.04696. ', 'modified_lines': '', 'original_lines': '14 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-24 13:23:52
|
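The record above describes the document-paraphrasing step of the augmentation: each sentence is paraphrased independently through a pivot language (k = 5, so 25 candidates per sentence) and the new document is formed by randomly replacing each sentence with one candidate. The sketch below is a hedged illustration of that loop only; `en_to_fr` and `fr_to_en` are hypothetical placeholder callables standing in for the beam decoders of the two NMT models, not the actual tensorflow/nmt API.

```python
import random

def paraphrase_sentence(sentence, en_to_fr, fr_to_en, k=5):
    """Back-and-forth translation through a pivot language: k pivot
    translations, each back-translated with beam size k, giving k*k
    candidate paraphrases of one sentence. `en_to_fr` and `fr_to_en`
    are placeholder callables, each returning a list of k strings."""
    candidates = []
    for pivot in en_to_fr(sentence, beam=k):
        candidates.extend(fr_to_en(pivot, beam=k))
    return candidates

def paraphrase_document(sentences, en_to_fr, fr_to_en, k=5, seed=0):
    """Form a paraphrased document d' by replacing every sentence of d
    with one randomly selected candidate, as described in the record."""
    rng = random.Random(seed)
    return [rng.choice(paraphrase_sentence(s, en_to_fr, fr_to_en, k))
            for s in sentences]
```

With k = 5 this yields the 25 paraphrase choices per sentence mentioned in the record; the answer would then be re-extracted from the paraphrased document with the 2-gram heuristic sketched earlier.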
ICLR.cc/2018/Conference
|
HJl_DJk_z
|
S1XHmmfdz
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '{weiyu}@cs.cmu.edu, {ddohan,thangluong}@google.com ABSTRACT ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'FAST AND ACCURATE READING COMPREHENSION BY COMBINING SELF-ATTENTION AND CONVOLUTION ', 'modified_lines': 'Adams Wei Yu1∗, David Dohan2†, Minh-Thang Luong2† 1Carnegie Mellon University, 2Google Brain Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le Google Brain ', 'original_lines': 'Adams Wei Yu†∗, David Dohan‡, Minh-Thang Luong‡ Rui Zhao‡, Kai Chen‡, Mohammad Norouzi‡, Quoc V. Le‡ †Carnegie Mellon University, ‡Google Brain ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-27 00:15:55
|
ICLR.cc/2018/Conference
|
S1XHmmfdz
|
BJrfnCFiM
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '∗Work performed while Adams Wei Yu was with Google Brain. †Equal contribution. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'The key motivation behind the design of our model is the following: convolution captures the local structure of the text, while the self-attention learns the global interaction between each pair of words. ', 'modified_lines': '', 'original_lines': 'The additional context-query attention is a standard module to construct the query-aware context ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'vector for each position in the context paragraph, which is used in the subsequent modeling layers. The feed-forward nature of our architecture speeds up the model significantly. In our experiments on the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference. As a ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2018 ', 'modified_lines': 'The additional context-query attention is a standard module to construct the query-aware context ', 'original_lines': '', 'after_paragraph_idx': 7, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1After our first submission of the draft, there are other unpublished results either on the leaderboard or arxiv. For example, the current (as of Dec 19, 2017) best documented model, SAN Liu et al. (2017b), achieves 84.4 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'as it can process the input tokens in parallel. Note that even though self-attention has already been used extensively in Vaswani et al. (2017a), the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. ', 'modified_lines': '', 'original_lines': 'The use of convolutions also allows us to take advantage of common regularization methods in ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 MODEL OVERVIEW', 'after_section': '2.2 MODEL OVERVIEW', 'context_after': 'ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block. ', 'modified_lines': 'The use of convolutions also allows us to take advantage of common regularization methods in ', 'original_lines': '', 'after_paragraph_idx': 14, 'before_paragraph_idx': None}]
|
2018-04-10 06:39:09
|
ICLR.cc/2018/Conference
|
BJrfnCFiM
|
rk7QKfsnM
|
[]
|
2018-04-23 08:28:11
|
ICLR.cc/2018/Conference
|
rk7QKfsnM
|
HyS-PO23z
|
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT Current end-to-end machine reading and question answering (Q&A) models are ', 'modified_lines': 'primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the se- quential nature of RNNs. We propose a new Q&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equiva- lent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by back- translation from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score1 on the test set, which is significantly better than the best published F1 score of 81.8. ', 'original_lines': 'primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture that does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtransla- tion from a neural machine translation model. On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8. ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'The key motivation behind the design of our model is the following: convolution captures the local structure of the text, while the self-attention learns the global interaction between each pair of words. The additional context-query attention is a standard module to construct the query-aware context vector for each position in the context paragraph, which is used in the subsequent modeling layers. The feed-forward nature of our architecture speeds up the model significantly. In our experiments ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'blocks of encoders that separately encodes the query and context. Then we learn the interactions between context and question by standard attentions (Xiong et al., 2016; Seo et al., 2016; Bahdanau et al., 2015). The resulting representation is encoded again with our recurrency-free encoder before ', 'modified_lines': 'finally decoding to the probability of each position being the start or end of the answer span. We call this architecture QANet, which is shown in Figure 1. ∗Work performed while Adams Wei Yu was with Google Brain. †Equal contribution. 
1While the major results presented here are those obtained in Oct 2017, our latest scores (as of Apr 23, 2018) on SQuAD leaderboard is EM/F1=82.2/88.6 for single model and EM/F1=83.9/89.7 for ensemble, both ranking No.1. Notably, the EM of our ensemble is better than the human performance (82.3). 1 Published as a conference paper at ICLR 2018 ', 'original_lines': 'finally decoding to the probability of each position being the start or end of the answer span. This architecture is shown in Figure 1. ∗Work performed while Adams Wei Yu was with Google Brain. †Equal contribution. 1 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': 'Abstract', 'after_section': None, 'context_after': '• We propose an efficient reading comprehension model that exclusively built upon convo- lutions and self-attentions. To the best of our knowledge, we are the first to do so. This ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'language and then back to English, which not only enhances the number of training instances but also diversifies the phrasing. ', 'modified_lines': 'On the SQuAD dataset, QANet trained with the augmented data achieves 84.6 F1 score on the test set, which is significantly better than the best published result of 81.8 by Hu et al. (2017).2 We also conduct ablation test to justify the usefulness of each component of our model. In summary, the contribution of this paper are as follows: ', 'original_lines': 'On the SQuAD dataset, our model trained with the augmented data achieves 84.6 F1 score on the test set, which is significantly better than the best published result of 81.8 by Hu et al. (2017).1 We also conduct ablation test to justify the usefulness of each component of our model. In summary, the contribution of this paper are as follows: ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 THE MODEL', 'after_section': None, 'context_after': '2.1 PROBLEM FORMULATION ', 'paragraph_idx': 9, 'before_section': None, 'context_before': '2 THE MODEL In this section, we first formulate the reading comprehension problem and then describe the proposed ', 'modified_lines': 'model QANet: it is a feedforward model that consists of only convolutions and self-attention, a combination that is empirically effective, and is also a novel contribution of our work. ', 'original_lines': 'model: it is a feedforward model that consists of only convolutions and self-attention, a combination that is empirically effective, and is also a novel contribution of our work. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 MODEL OVERVIEW', 'after_section': '2.2 MODEL OVERVIEW', 'context_after': 'For example, the current (as of Dec 19, 2017) best documented model, SAN Liu et al. (2017b), achieves 84.4 F1 score which is on par with our method. ', 'paragraph_idx': 11, 'before_section': '2.2 MODEL OVERVIEW', 'context_before': 'encoders, we only use convolutional and self-attention mechanism, discarding RNNs, which are used by most of the existing reading comprehension models. As a result, our model is much faster, as it can process the input tokens in parallel. Note that even though self-attention has already been ', 'modified_lines': ' 2After our first submission of the draft, there are other unpublished results either on the leaderboard or arxiv. ', 'original_lines': 'used extensively in Vaswani et al. 
(2017a), the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. 1After our first submission of the draft, there are other unpublished results either on the leaderboard or arxiv. ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '2.2 MODEL OVERVIEW', 'after_section': '2.2 MODEL OVERVIEW', 'context_after': 'use the same Encoder Block (right) throughout the model, only varying the number of convolutional layers for each block. We use layernorm and residual connection between every layer in the Encoder Block. We also share weights of the context and question encoder, and of the three output encoders. ', 'paragraph_idx': 13, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2018 ', 'modified_lines': 'Figure 1: An overview of the QANet architecture (left) which has several Encoder Blocks. We ', 'original_lines': 'Figure 1: An overview of our model architecture (left) which has several Encoder Blocks. We ', 'after_paragraph_idx': 13, 'before_paragraph_idx': None}, {'section': '2.2 MODEL OVERVIEW', 'after_section': '2.2 MODEL OVERVIEW', 'context_after': 'The use of convolutions also allows us to take advantage of common regularization methods in ConvNets such as stochastic depth (layer dropout) (Huang et al., 2016), which gives an additional gain of 0.2 F1 in our experiments. ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'the positional encoding (one of convolution, self-attention, or feed-forward-net) inside the encoder structure is wrapped inside a residual block. ', 'modified_lines': 'used extensively in Vaswani et al. (2017a), the combination of convolutions and self-attention is novel, and is significantly better than self-attention alone and gives 2.7 F1 gain in our experiments. ', 'original_lines': '', 'after_paragraph_idx': 12, 'before_paragraph_idx': None}, {'section': '2.2 MODEL OVERVIEW', 'after_section': None, 'context_after': 'f (q, c) = W0[q, c, q (cid:12) c], Most high performing models additionally use some form of query-to-context attention, such as BiDaF (Seo et al., 2016) and DCN (Xiong et al., 2016). Empirically, we find that, the DCN attention ', 'paragraph_idx': 13, 'before_section': '2.2 MODEL OVERVIEW', 'context_before': 'matrix S. Then the context-to-query attention is computed as A = S · QT ∈ Rn×d. The similarity function used here is the trilinear function (Seo et al., 2016): ', 'modified_lines': 'where (cid:12) is the element-wise multiplication and W0 is a trainable variable. ', 'original_lines': ' where (cid:12) is the element-wise multiplication and W0 is a trainable variable. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 13}, {'section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'after_section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'context_after': 'split into subword units as described in Luong et al. (2017). All models share the same hyperpa- English-German. Our English-French systems achieve 36.7 BLEU on newstest2014 for translating into French and 35.9 BLEU for the reverse direction. For English-German and on newstest2014, we obtain 27.6 BLEU for translating into German and 29.9 BLEU for the reverse direction. ', 'paragraph_idx': 18, 'before_section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'context_before': 'In this work, we consider attention-based neural machine translation (NMT) models Bahdanau et al. (2015); Luong et al. 
(2015), which have demonstrated excellent translation quality Wu et al. (2016), as the core models of our data augmentation pipeline. Specifically, we utilize the publicly available ', 'modified_lines': 'codebase3 provided by Luong et al. (2017), which replicates the Google’s NMT (GNMT) systems Wu et al. (2016). We train 4-layer GNMT models on the public WMT data for both English-French4 (36M sentence pairs) and English-German5 (4.5M sentence pairs). All data have been tokenized and rameters6 and are trained with different numbers of steps, 2M for English-French and 340K for ', 'original_lines': 'codebase2 provided by Luong et al. (2017), which replicates the Google’s NMT (GNMT) systems Wu et al. (2016). We train 4-layer GNMT models on the public WMT data for both English-French3 (36M sentence pairs) and English-German4 (4.5M sentence pairs). All data have been tokenized and rameters5 and are trained with different numbers of steps, 2M for English-French and 340K for ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Handling SQuAD Documents and Answers. We now discuss our specific procedure for the SQuAD dataset, which is essential for best performance gains. Remember that, each training exam- ple of SQuAD is a triple of (d, q, a) in which document d is a multi-sentence paragraph that has the answer a. When paraphrasing, we keep the question q unchanged (to avoid accidentally changing its meaning) and generate new triples of (d(cid:48), q, a(cid:48)) such that the new document d(cid:48) has the new answer a(cid:48) in it. The procedure happens in two steps: (i) document paraphrasing – paraphrase d into d(cid:48) and (b) answer extraction – extract a(cid:48) from d(cid:48) that closely matches a. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Our paraphrase process works as follows, supposedly with French as a pivotal language. First, we feed an input sequence into the beam decoder of an English-to-French model to obtain k French translations. Each of the French translation is then passed through the beam decoder of a reversed ', 'modified_lines': 'translation model to obtain a total of k2 paraphrases of the input sequence. Relation to existing Works. While the concept of backtranslation has been introduced before, it is often used to improve either the same translation task Sennrich et al. (2016) or instrinsic paraphrase evaluations Wieting et al. (2017); Mallinson et al. (2017). Our approach is a novel application of backtranslation to enrich training data for down-stream tasks, in this case, the question answering (QA) task. It is worth to note that (Dong et al., 2017) use paraphrasing techniques to improve QA; however, they only paraphrase questions and did not focus on the data augmentation aspect as we do in this paper. 
3https://github.com/tensorflow/nmt 4http://www.statmt.org/wmt14/ 5http://www.statmt.org/wmt16/ 6https://github.com/tensorflow/nmt/blob/master/nmt/standard_hparams/ wmt16_gnmt_4_layer.json 5 English to French NMTFrench to English NMTAutrefois, le thé avait été utilisé surtout pour les moines bouddhistes pour rester éveillé pendant la méditation.In the past, tea was used mostly for Buddhist monks to stay awake during the meditation.Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.(input sentence)(paraphrased sentence)(translation sentence)k translationsk^2 paraphrases Published as a conference paper at ICLR 2018 ', 'original_lines': 'translation model to obtain a total of k2 paraphrases of the input sequence. It is worth to note that Lapata et al. (2017) proposed a similar approach to ours but did not focus on the data augmentation aspect as we do in this paper. Specifically, the authors proposed a more complex backtranslation decoder that takes into account multiple input sentences to produce one output sequence. For our case, we are interested in generating multiple paraphrases instead. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'after_section': '3 DATA AUGMENTATION BY BACKTRANSLATION', 'context_after': 'The answer extraction addresses the aforementioned issue. Let s be the original sentence that con- tains the original answer a and s(cid:48) be its paraphrase. We identify the newly-paraphrased answer with simple heuristics as follows. Character-level 2-gram scores are computed between each word in s(cid:48) and the start / end words of a to find start and end positions of possible answers in s(cid:48). Among all candidate paraphrased answer, the one with the highest character 2-gram score with respect to a is Original ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'independently. We use k = 5, so each sentence has 25 paraphrase choices. A new document d(cid:48) is formed by simply replacing each sentence in d with a randomly-selected paraphrase. An obvious issue with this na¨ıve approach is that the original answer a might no longer be present in d(cid:48). ', 'modified_lines': 'selected as the new answer a(cid:48). Table 1 shows an example of the new answer found by this process.7 ', 'original_lines': ' 2https://github.com/tensorflow/nmt 3http://www.statmt.org/wmt14/ 4http://www.statmt.org/wmt16/ 5https://github.com/tensorflow/nmt/blob/master/nmt/standard_hparams/ wmt16_gnmt_4_layer.json 5 English to French NMTFrench to English NMTAutrefois, le thé avait été utilisé surtout pour les moines bouddhistes pour rester éveillé pendant la méditation.In the past, tea was used mostly for Buddhist monks to stay awake during the meditation.Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.(input sentence)(paraphrased sentence)(translation sentence)k translationsk^2 paraphrases Published as a conference paper at ICLR 2018 selected as the new answer a(cid:48). Table 1 shows an example of the new answer found by this process.6 ', 'after_paragraph_idx': 23, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': 'training, 10.1K for validation, and another 10.1K for testing. The typical length of the paragraphs is around 250 while the question is of 10 tokens although there are exceptionally long cases. 
Only the training and validation data are publicly available, while the test data is hidden that one has to submit the code to a Codalab and work with the authors of (Rajpurkar et al., 2016) to retrieve the final test we only report the performance on the validation set, as we do not want to probe the unseen test set by frequent submissions. According to the observations from our experiments and previous works, such as (Seo et al., 2016; Xiong et al., 2016; Wang et al., 2017; Chen et al., 2017), the validation score is well correlated with the test score. length is set to 400 and any paragraph longer than that would be discarded. During training, we batch the examples by length and dynamically pad the short sentences with special symbol <PAD>. The maximum answer length is set to 30. We use the pretrained 300-D word vectors GLoVe (Penning- ', 'paragraph_idx': 31, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': '4.1.1 DATASET AND EXPERIMENTAL SETTINGS Dataset. We consider the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) ', 'modified_lines': 'for machine reading comprehension.8 SQuAD contains 107.7K query-answer pairs, with 87.5K for 7We also define a minimum threshold for elimination. If there is no answer with 2-gram score higher than the threshold, we remove the paraphrase s(cid:48) from our sampling process. If all paraphrases of a sentence are eliminated, no sampling will be performed for that sentence. 8SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/ 6 Published as a conference paper at ICLR 2018 score. In our experiments, we report the test set result of our best single model.9 For further analysis, Data Preprocessing. We use the NLTK tokenizer to preprocess the data.10 The maximum context ', 'original_lines': 'for machine reading comprehension.7 SQuAD contains 107.7K query-answer pairs, with 87.5K for score. In our experiments, we report the test set result of our best single model.8 For further analysis, 6We also define a minimum threshold for elimination. If there is no answer with 2-gram score higher than the threshold, we remove the paraphrase s(cid:48) from our sampling process. If all paraphrases of a sentence are eliminated, no sampling will be performed for that sentence. 7SQuAD leaderboard: https://rajpurkar.github.io/SQuAD-explorer/ 8On the leaderboard of SQuAD, there are many strong candidates in the “ensemble” category with high EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. 6 Published as a conference paper at ICLR 2018 Data Preprocessing. We use the NLTK tokenizer to preprocess the data.9 The maximum context ', 'after_paragraph_idx': 31, 'before_paragraph_idx': 31}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '4.1.2 RESULTS ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'moving average is applied on all trainable variables with a decay rate 0.9999. Finally, we implement our model in Python using Tensorflow (Abadi et al., 2016) and carry out our ', 'modified_lines': 'experiments on an NVIDIA p100 GPU.11 ', 'original_lines': 'experiments on an NVIDIA p100 GPU.10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 36, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'Speedup over RNNs. 
To measure the speedup of our model against the RNN models, we also test the corresponding model architecture with each encoder block replaced with a stack of bidirectional ', 'modified_lines': ' 9On the leaderboard of SQuAD, there are many strong candidates in the “ensemble” category with high EM/F1 scores. Although it is possible to improve the results of our model using ensembles, we focus on the “single model” category and compare against other models with the same category. 10NLTK implementation: http://www.nltk.org/ 11TensorFlow implementation: https://www.tensorflow.org/ 12The scores are collected from the latest version of the documented related work on Oct 27, 2017. 13The scores are collected from the leaderboard on Oct 27, 2017. ', 'original_lines': 'LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. 9NLTK implementation: http://www.nltk.org/ 10TensorFlow implementation: https://www.tensorflow.org/ 11The scores are collected from the latest version of the documented related work on Oct 27, 2017. 12The scores are collected from the leaderboard on Oct 27, 2017. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 36}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'EM / F1 40.4 / 51.0 62.5 / 71.0 ', 'paragraph_idx': 38, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'R-Net (Wang et al., 2017) BiDAF + Self Attention + ELMo Reinforced Mnemonic Reader (Hu et al., 2017) ', 'modified_lines': 'Dev set: QANet Dev set: QANet + data augmentation ×2 Dev set: QANet + data augmentation ×3 Test set: QANet + data augmentation ×3 Published12 ', 'original_lines': 'Dev set: Our Model Dev set: Our Model + data augmentation ×2 Dev set: Our Model + data augmentation ×3 Test set: Our Model + data augmentation ×3 Published11 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 38}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'EM / F1 40.4 / 51.0 62.5 / 71.0 ', 'paragraph_idx': 38, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': '75.1 / 83.8 76.2 / 84.6 ', 'modified_lines': 'LeaderBoard13 ', 'original_lines': 'LeaderBoard12 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 38}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'Training Inference 3.2 8.1 1.1 2.2 2.9x 3.7x ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'Table 2: The performances of different models on SQuAD dataset. ', 'modified_lines': 'LSTMs as is used in most existing models. Specifically, each (embedding and model) encoder block is replaced with a 1, 2, or 3 layer Bidirectional LSTMs respectively, as such layer numbers fall into the usual range of the reading comprehension models (Chen et al., 2017). All of these LSTMs have hidden size 128. The results of the speedup comparison are shown in Table 3. We can easily see that our model is significantly faster than all the RNN based models and the speedups range from 3 to 13 times in training and 4 to 9 times in inference. 
QANet RNN-1-128 Speedup RNN-2-128 Speedup RNN-3-128 ', 'original_lines': 'Ours RNN-1-128 Speedup RNN-2-128 Speedup RNN-3-128 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': '4.1 EXPERIMENTS ON SQUAD', 'context_after': 'settings in the original code to get its best performance, where the batch sizes for training and inference are both 60. The only part we changed is the optimizer, where Adam with learning 0.001 is used here, as with Adadelta we got a bit worse performance. The result is shown in Table 4 ', 'paragraph_idx': 41, 'before_section': '4.1 EXPERIMENTS ON SQUAD', 'context_before': 'Speedup over BiDAF model. In addition, we also use the same hardware (a NVIDIA p100 GPU) and compare the training time of getting the same performance between our model and the BiDAF ', 'modified_lines': 'model14(Seo et al., 2016), a classic RNN-based model on SQuAD. We mostly adopt the default ', 'original_lines': 'model13(Seo et al., 2016), a classic RNN-based model on SQuAD. We mostly adopt the default ', 'after_paragraph_idx': 41, 'before_paragraph_idx': 41}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': 'BiDAF Speedup ', 'paragraph_idx': 43, 'before_section': None, 'context_before': 'Besides, we only need one fifth of the training time to achieve BiDAF’s best F1 score (77.0) on dev set. ', 'modified_lines': '14The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Published as a conference paper at ICLR 2018 QANet ', 'original_lines': 'Our model ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4.1.3 ABALATION STUDY AND ANALYSIS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '7.0x Table 4: Speed comparison between our model and BiDAF (Seo et al., 2016) on SQuAD dataset. ', 'modified_lines': '', 'original_lines': ' 13The code is directly downloaded from https://github.com/allenai/bi-att-flow 8 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 EXPERIMENTS ON SQUAD', 'after_section': None, 'context_after': '- convolution in encoders - self-attention in encoders ', 'paragraph_idx': 45, 'before_section': None, 'context_before': 'Empirically, the ratio (3:1:1) yields the best performance, with 1.5/1.1 gain over the base model on EM/F1. This is also the model we submitted for test set evaluation. ', 'modified_lines': '4.1.4 ROBUSTNESS STUDY In the following, we conduct experiments on the adversarial SQuAD dataset (Jia & Liang, 2017) to study the robustness of the proposed model. In this dataset, one or more sentences are appended to the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training. We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences that are similar to the question, but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context. The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation. 
The results are shown in Table 6, where the F1 scores of other models are all extracted from Jia & Liang (2017).15 Again, we only compare the performance of single models. From Table 6, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. 15Only F1 scores are reported in Jia & Liang (2017) 9 Published as a conference paper at ICLR 2018 Base QANet ', 'original_lines': 'Base Model ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The injected noise in the training data might not only improve the generalization of the model but also make it robust to the adversarial sentences. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the sampling ratio among the original, English-French-English and English-German-English data during training. ', 'modified_lines': '', 'original_lines': '4.1.4 ROBUSTNESS STUDY In the following, we conduct experiments on the adversarial SQuAD dataset (Jia & Liang, 2017) to study the robustness of the proposed model. In this dataset, one or more sentences are appended to 9 Published as a conference paper at ICLR 2018 the original SQuAD context of test set, to intentionally mislead the trained models to produce wrong answers. However, the model is agnostic to those adversarial examples during training. We focus on two types of misleading sentences, namely, AddSent and AddOneSent. AddSent generates sentences that are similar to the question, but not contradictory to the correct answer, while AddOneSent adds a random human-approved sentence that is not necessarily related to the context. The model in use is exactly the one trained with the original SQuAD data (the one getting 84.6 F1 on test set), but now it is submitted to the adversarial server for evaluation. The results are shown in Table 6, where the F1 scores of other models are all extracted from Jia & Liang (2017).14 Again, we only compare the performance of single models. From Table 6, we can see that our model is on par with the state-of-the-art model Mnemonic, while significantly better than other models by a large margin. The robustness of our model is probably because it is trained with augmented data. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'AddSent AddOneSent ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'MPCM (Wang et al., 2016) ReasoNet (Shen et al., 2017b) Mnemonic (Hu et al., 2017) ', 'modified_lines': 'QANet ', 'original_lines': 'Our Model ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'of length 256 and 400 encapsulating the answer respectively. All the remaining setting are the same as SQuAD experiment, except that the training steps are set to 120K. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the authors of Joshi et al. (2017) also pick a verified subset that all the contexts inside can answer the associated questions. As the text could be long, we adopt the data processing similar to Hu et al. (2017); Joshi et al. (2017). 
In particular, for training and validation, we randomly select a window ', 'modified_lines': '', 'original_lines': ' 14Only F1 scores are reported in Jia & Liang (2017) 10 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Full EM / F1 ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'BiDAF (Seo et al., 2016) MEMEN (Pan et al., 2017) M-Reader (Hu et al., 2017)∗ ', 'modified_lines': 'QANet ', 'original_lines': 'Our Model ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'shown to be not only faster than the RNN architectures, but also effective in other tasks, such as text classification, machine translation or sentiment analysis. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'long text, as the context paragraphs may be hundreds of words long. Recently, attempts have been made to replace the recurrent networks by full convolution or full attention architectures (Kim, 2014; Gehring et al., 2017; Vaswani et al., 2017b; Shen et al., 2017a). Those models have been ', 'modified_lines': '', 'original_lines': ' 11 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'ACKNOWLEDGEMENT REFERENCES ', 'paragraph_idx': 4, 'before_section': None, 'context_before': '6 CONCLUSION ', 'modified_lines': 'In this paper, we propose a fast and accurate end-to-end model, QANet, for machine reading com- prehension. Our core innovation is to completely remove the recurrent networks in the encoder. The resulting model is fully feedforward, composed entirely of separable convolutions, attention, linear layers, and layer normalization, which is suitable for parallel computation. The resulting model is both fast and accurate: It surpasses the best published results on SQuAD dataset while up to 13/9 times faster than a competitive recurrent models for a training/inference iteration. Additionally, we find that we are able to achieve significant gains by utilizing data augmentation consisting of trans- lating context and passage pairs to and from another language as a way of paraphrasing the questions and contexts. Adams Wei Yu is supported by NVIDIA PhD Fellowship and CMU Presidential Fellowship. We would like to thank Samy Bengio, Lei Huang, Minjoon Seo, Noam Shazeer, Ashish Vaswani, Barret Zoph and the Google Brain Team for helpful discussions. ', 'original_lines': 'In this paper, we propose a fast and accurate end-to-end model for machine reading comprehension. Our core innovation is to completely remove the recurrent networks in the base model. The resulting model is fully feedforward, composed entirely of separable convolutions, attention, linear layers, and layer normalization, which is suitable for parallel computation. The resulting model is both fast and accurate: It surpasses the best published results on SQuAD dataset while up to 13/9 times faster than a competitive recurrent models for a training/inference iteration. Additionally, we find that we are able to achieve significant gains by utilizing data augmentation consisting of translating context and passage pairs to and from another language as a way of paraphrasing the questions and contexts. Adams Wei Yu is supported by NVIDIA PhD Fellowship. 
We would like to thank Lei Huang, Minjoon Seo, Ashish Vaswani and Barret Zoph for helpful discussions. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading wikipedia to answer open- domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, 2015. ', 'modified_lines': '', 'original_lines': '12 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980. Kenton Lee, Tom Kwiatkowski, Ankur P. Parikh, and Dipanjan Das. Learning recurrent span repre- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1746– 1751, 2014. ', 'modified_lines': '', 'original_lines': '13 Published as a conference paper at ICLR 2018 Mirella Lapata, Rico Sennrich, and Jonathan Mallinson. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pp. 881–893, 2017. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Neural Information Processing Systems, 2017b. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017a. URL http://arxiv.org/abs/1706.03762. ', 'modified_lines': '', 'original_lines': '14 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-04-24 09:21:01
|
ICLR.cc/2018/Conference
|
rJ5y3iJA-
|
HJL9PAyRZ
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT Deep reinforcement learning has demonstrated increasing capabilities for continuous con- ', 'modified_lines': 'trol problems, including agents that can move with skill and agility through their environ- ment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual skill is a specialist in a spe- cific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique combine expert policies, as eval- uated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against two alternative baselines. ', 'original_lines': 'trol problems, agents in high-dimensional action spaces interacting with complex envi- ronments. One problem in this setting consists of how to combine multiple policies, each specialized in performing a specific control task and state distribution into on policy. Fo- cusing on the domain of simulated bipedal robot locomotion, we extend the method of policy distillation to the continuous action setting and leverage this technique combine expert policies. We also introduce a method for augmenting an existing policy network in order to enable the use of new state features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these tools allows us to create a method to incrementally augment a policy with new skills. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 Under review as a conference paper at ICLR 2018 2 RELATED WORK ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'resulting in a new policy that can perform both tasks, with marginal forgetting. In the distillation step we can also change the policy model design. For example, after learning to walk ', 'modified_lines': 'on flat and inclined ground we can add state features for the local terrain. Here we introduce the option to modify the sensory-motor transform, a model that maps from sensory input to motor or action outputs, by augmenting the policy network to include additional state features. This augmented version of the network will have additional capability to detect subsets of the state space and perform more robust locomotion planning to travel in complex environments. These multi-skilled policies will then be the basis for further integration of new tasks. The combination of these methods constructs a scalable method to integrate new skills into a single policy. ', 'original_lines': 'on flat and inclined ground we can add state features for the local terrain. Here we introduce the option to modify the sensory-motor transform were we augment the policy network to include additional state features Pouget and Snyder (2000)[Cite:??sensorymotor tranformation??]. 
A sensory-motor transform is model that maps from sensory input to motor or action outputs. This augmented version of the network will have additional capability to detect subsets of the state space and perform more robust locomotion planning to travel in complex environments. These multi-skilled policies will then be the basis for further integration of new tasks. The combination of these methods constructs a scalable method to integrate new skills into a single policy. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': '5 RESULTS', 'after_section': '5 RESULTS', 'context_after': '4 ', 'paragraph_idx': 22, 'before_section': '5 RESULTS', 'context_before': 'those outlined in Peng and van de Panne (2016) for flat terrain only. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, we further use a supplied motion capture clip of a natural human walking gait on flat ground. The 2D humanoid receives as input both a character and terrain ', 'modified_lines': 'state representation. A detailed description of the experimental setup is included in Section: 8.4. The tasks ', 'original_lines': 'state representation. A detailed description of the experimental setup is included in Section: 9.4. The tasks ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '5 RESULTS', 'after_section': None, 'context_after': 'are presented to the continual learner sequentially and the goal is to progressively learn to traverse all terrain types. ', 'paragraph_idx': 25, 'before_section': None, 'context_before': '(f) mixed ', 'modified_lines': 'Figure 2: The environments used to evaluate PLAiD. ', 'original_lines': 'Figure 2: The environments used to evaluate PLAiD. To-do: These could be of higher quality ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 RESULTS', 'after_section': '5 RESULTS', 'context_after': 'outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 3c. This method works well for both combining learned skills and learning new skills. ', 'paragraph_idx': 24, 'before_section': '5 RESULTS', 'context_before': 'learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills. The first being a MultiTasker: controller that trys to learn multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning may tasks together ', 'modified_lines': 'can be faster than learning each on separately Parisotto et al. (2015). The curriculum for using this method is shown in Figure 3a were during a single RL simulation time is spent learning each task. It is also possible to learn each separately but in parallel and then combine the resulting policies Figure 3b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is learned with this model it would occur ', 'original_lines': 'can be faster than learning each on separately [Cite:????]. The curriculum for using this method is shown in Figure 3a were during a single RL simulation time is spent learning each task. It is also possible to learn each separately but in parallel and then combine the resulting policies Figure 3b. 
We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is learned with this model it would occur ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 24}, {'section': '5.1 TRANSFER LEARNING', 'after_section': None, 'context_after': 'We evaluate our approach against two baselines. Firstly, we compare the above learning curriculum from learning new tasks in PLAiD with learning new tasks from Scratch: randomly initilized controller (Scratch). ', 'paragraph_idx': 28, 'before_section': '5.1 TRANSFER LEARNING', 'context_before': 'Figure 3: Outlines the curriculum learning process used in this evaluation. The red circle with a (cid:83) in it denotes a distillation step that combines policies. Each gray box denotes one iteration of learning a new policy. (a) is a parallel version of our method. This did not work because we could only get good results on ', 'modified_lines': 'the flat terrain. (b) Is similar to the MultiTasker in that all tasks are being trained for at the same time. (c) Is PLAiD where we can take advantage of TL to assist with learning new tasks. ', 'original_lines': 'the flat terrain. (b) Is similar to the MultiTasker in that all tasks are being trained for at the same time. (c) Is PLAiD where we can take advantage of TL to assist with learning new tasks. To-do: Make these diagrams more general, don’t need to specify which task was trained for ', 'after_paragraph_idx': None, 'before_paragraph_idx': 28}, {'section': '5.1 TRANSFER LEARNING', 'after_section': '5.1 TRANSFER LEARNING', 'context_after': 'tasks equally. If we only consider the time taken to learn the new task the MultiTasker can learn quicker. However, as we will show later PLAiD will begin to out perform the MultiTasker as the number of tasks increases. (a) 5.2 FEATURE INJECTION ', 'paragraph_idx': 27, 'before_section': '5.1 TRANSFER LEARNING', 'context_before': 'tasks together, which complicates the learning process as simulation for each task is split across the training process and the overall RL task can be challenging. This is in contrast to using PLAiD that will integrate skills together after each new skill is learned. However, it has been shown that training across many tasks ', 'modified_lines': 'at once can increase overall learning speed and potentially reduce forgetting Teh et al. (2017). We can see this in Figure 4b where the MultiTasker is learning the new task (steps) with similar speed. However, after adding more tasks, we can see in Figure 4c the MultiTasker beginning to struggle with learning many tasks at the same time. On the other hand PLAiD learns the new task faster and is able to integrate the new skill required to solve the task robustly. While training, the MultiTasker splits its time across the number of (b) steps (c) slopes Figure 4: TL comparison over each of the environments. (a) Shows the benefit of using TL when learning a new task/skill, incline, when the controller has some knowledge of flat. (b) TL for both PLAiD and MultiTasker is similar (c) PLAiD is showing faster learning after adding an additional skill. The distiller initializes its policy with the most recently learned expert. The MultiTasker is also initialized from the most recent expert but alternates between each environment during training. The learning for PLAiD is split into two steps, first the TL part in green followed by the distillation part in red. 
Using TL assists in the learning of new tasks. ', 'original_lines': 'at once can increase overall learning speed and potentially reduce forgetting Teh et al. (2017). We found this result as well, shown in Figure 4b. While training, the MultiTasker splits its time across the number of (b) Figure 4: (a) Shows the benefit of using distillation when learning a new task/skill, incline, when the controller has some knowledge of flat. The distiller initializes its policy with flat expert. The MultiTasker is also initialized from the flat expert but alternates between flat and incline during training. (b) Shows that the MultiTasker can learn faster on steps, flat and incline than PLAiD learning the single task with TL does. To-do: Need policy evaluation comparison for with and without DAGGER. ', 'after_paragraph_idx': 27, 'before_paragraph_idx': 27}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '6 ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'it to retain its original functional behaviour in a way similar to Chen et al. (2015). This is achieved by adding additional inputs to the neural network and initializing the weights and biases associated with these new features to 0. this approach is demonstrated in the third step of the experiment (π3 in Figure 3c) by ', 'modified_lines': 'combining the blind flat and incline expert policies via distillation. ', 'original_lines': 'combining the blind flat and incline expert policies via distillation. To-do: Maybe run an experiment to compare if the way we do terrain injection is better than random initialization (definitely helps with direct transfer, but what about for distill?) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.3 DISTILLING MULTIPLE POLICIES', 'after_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_after': '(a) MultiTasker on 3 tasks ', 'paragraph_idx': 32, 'before_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_before': 'as shown in Figure 5c. Using PLAiD to combine the skills of many policies appears to scale better with respect to the number of skills being integrated. This is likely because distillation is a semi-supervised method which is more stable than the un-supervised RL solution. This can be seen in Figure 5b and 5d ', 'modified_lines': 'where PLAiD combines the skills faster, and can find higher value policies in practice. ', 'original_lines': 'where PLAiD combines the skills faster, and can find To-do: higher value policies in practice. ', 'after_paragraph_idx': 32, 'before_paragraph_idx': 32}, {'section': '6 DISCUSSION', 'after_section': None, 'context_after': 'Forgetting There are some indications that distillation is hindering during the initial few training steps. 7 Under review as a conference paper at ICLR 2018 and reward functions. We believe PLAiD would further outperform the MultiTasker if the tasks were more 6.1 LIMITATIONS 6.2 FUTURE WORK 7 CONCLUSION Using PLAiD we gain the benefits of distillation as a supervised method to integrate policies together. This being more efficient then learning all skills together, and is a favourable method for integrating many skills REFERENCES ', 'paragraph_idx': 35, 'before_section': '6 DISCUSSION', 'context_before': 'MultiTasker achieves marginally better average reward over the tasks Figures (c,d) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker (c) and distiller (d) (π7 from Figure 3c). 
After including another task (slopes) the MultiTasker struggles to achieve its previous ', 'modified_lines': 'performance and the distiller combines the tasks gracefully. We are initializing the network used in the distillation step with the most recently learning policy after TL. The large change in the initial state distribution from the previous seen distribution during TL could be causing larger gradients to appear, disrupting some of the structure learned during the TL step, shown in Figure 5b and 5d. There also might not exist a smooth transition in policy space between the newly learning policy and the previous learned policy distribution. 6 DISCUSSION Transfer Learning: For our transfer learning results we could be over fitting. The initial expert could have been over trained for the particular task it was learning. Making it more challenging for the policy to learning a new task, resulting in negative transfer. As we are using an actor-critic learning method we also studied the possibility of using the value functions for TL as well. We did not discover any empirical evidence that this assisted the learning process. When transferring to a new task the state distribution has changed and the reward function maybe be completely different. This makes it unlikely the value function will be accurate on this new task. Also, value functions are in general easier and faster to learn than policies, implying that it is less important to transfer knowledge from these function approximations. We also find that helpfulness of TL depends on not only the task difficulty task but the rewards as well. Two tasks may overlap in state space but the area they overlap could be easily to reachable. In this case TL may not give significant benefit because the overall RL problem is easy. The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. To-do: I wonder if the distillation also provides some form of bias reduction that helps generalization and transfer learning... MultiTasker vs PLAiD The MultiTasker may be able to produce a policy that has higher overall average reward but in practice constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks the MultiTasker can favour the task with higher rewards, as this tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. We have shown that the distiller can scale better to the number of tasks than the MultiTasker. The tasks used in this analysis could be considered too similar, with respect to state space difficult and the reward functions dissimilar. After learning many new tasks the previous tasks may not receive a large enough potion of the distillation training process to preserve the experts skill well enough. How best to chose which data should be trained on next to best preserve the behaviour of experts is a general problem with multi-task learning. Distillation treats all tasks equally independent of their reward. This can result in very low value tasks receiving poten- tially more distribution than desired and high value tasks recieving not enough. We haven’t needed the use a one-hot vector to indicate what task to the agent is performing. 
We want the agent to be able to recognize which task it is being given but we do realize that some tasks could be too similar to differentiate between. For example, walking vs jogging on flat ground. It could be helpful to perform some kind of task prioritization during the distillation step. This could assist the agent with forgetting or help relearn tasks. Here we use the Mean Squared Error (MSE) to pull the distributions of the student policies in line with the expert polices for distillation. It could be more advantageous to use a better metric for distance between the policies. Previous methods have used KL Divergence in the discrete action space domain where the value function, in this case Deep Q-Netowrk (DQN), encodes the policy. This could improve the TL after distillation step. In this work we are not focusing on producing the best policy from a mixture of experts, but instead matching the distributions from a number of experts. The difference is subtle but in practice it can be too difficult to balance many reward functions. It could also be beneficial to use some kind of KL penalty while performing distillation. Something similar to the work in Teh et al. (2017), that will help keep the policy from shifting too much / to fast during training. together. Ideally, the more skilled an agent is the easier it should be to learn new skills. ', 'original_lines': 'performance and the distiller combines the tasks gracefully. To-do: Need to fix slopes for 5c, maybe cut he x-axis to 300,000 To-do: clip the tops of these figures 5.3.1 DISCUSSION PLAiD appears to work very well for this particular problem. Not only does TL often accelerate the learning on new tasks there are indications distillation can scale better over a larger set of tasks. We are initializing the network used the distillation learning step with the most recently learning policy after TL. The large change in the initial state distribution from the previous seen distribution during TL could be causing larger gradients to appear, disrupting some of the structure learned during the TL step. There also might not exist a smooth transition in policy space between the newly learning policy and the previous learned policy distribution. (a) steps (b) slopes Figure 6: Learning comparison over each of the environments. The learning for PLAiD is split into two steps, first the TL part in green followed by the distillation part in red. Using TL to learn new tasks is important To-do: These could be of higher quality 6 DISCUSSION Transfer Learning: For our transfer learning results we could be over fitting. The initial expert could have been over trained for the particular task it was learning. Making it more challenging for the policy to adjust to learning a new task. We also studied the possibility of reusing the value functions as well. We did not discover any empirical evidence that this assisted the learning process. When transferring to a new task the state distribution has changed and the reward function maybe be completely different. This makes it unlikely the value function will be accurate on this new task. Also, value functions are in general easier and faster to learn than policies, implying that it is less important to transfer knowledge from these function approximations. We also find that helpfulness of TL depends on not only the difficulty of the task but the rewards. Two tasks may overlap in state space but the area they overlap could be easy to reach. 
In this case TL will not give significant benefit because the overall RL problem is easy. The greatest benefit is gained from TL when the area of the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. MultiTasker vs PLAiD The MultiTasker might be able to produce a policy that has higher overall average reward but this add many constraints to the types of tasks it can combine gracefully. If the reward functions are different between tasks or if even one task gets more simulation time over another the MultiTasker will favour the task with higher values. It is a non-trivial task to normalize the reward functions for each task to be able to combine them. We have shown that the distiller can scale better to the number of tasks than the MultiTasker. The tasks used in this analysis could be considered to similar, with respect to state space difficult and the reward functions dissimilar. To-do: We could get a comment on the somewhat obvious progressive difficulty in the ordering of tasks that we chose. To-do: We don’t use a one-hot vector to select experts, We are motivated by context-based behaviours. 1. After learning many new tasks the previous tasks may not receive a large enough potion of the distillation training process to preserve the experts skill well enough. This is a general problem with learning many tasks, how best to chose which data should be training on next to best preserve the behaviour of experts. 2. To-do: Distilling more policies can also make it difficult to do well on any particular policy Might be helpful to perform some kind of task prioritization during the distillation step. This could help the agent not forget or help relearn tasks better. Here we use the MSE to pull the distributions of the student policies in line with the expert polices for distillation. It could be more advantageous to use a better metric for distance between the policies. Previous methods have used KL Divergence in the discrete state space domain where the value function encodes the policy. This could improve the TL after distillation step. In this work we are not focusing on producing the best policy from a mixture of experts, but instead matching the distributions from a number of expert. The difference is subtle but in practice it can be too difficult to balance many reward functions It could be beneficial to use some kind of KL penalty while performing distillation. Something similar to the distral pape Teh et al. (2017), that will help keep the policy from shifting too much / too fast during initial training. 8 Under review as a conference paper at ICLR 2018 together. Given an already skilled expert agent we want the agent to use its known skills to accelerate the learning process for a new skill. Ideally, the more skilled an agent is the easier it should be to learn new skills. We present a method for the progressive integration of new skills into a single multi-skilled policy and evaluate it on motor control tasks in the context of robotics. We investigate methods for robust distillation of policies in the continuous action domain. We construct a new method to allow an agent to continuously integrate new skills, including the option where the state space dimension changes. Our method operates independent of scale in the reward functions used for each task. 
8 ACKNOWLEDGEMENTS ', 'after_paragraph_idx': None, 'before_paragraph_idx': 35}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Pouget, A. and Snyder, L. H. (2000). Computational approaches to sensorimotor transformations. Nature Neuroscience, 3 Suppl(Supp):1192–8. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'action space matter? CoRR, abs/1611.01055. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-26 21:45:50
|
ICLR.cc/2018/Conference
|
HJL9PAyRZ
|
rJhAvAk0-
|
[]
|
2017-10-26 21:47:00
|
ICLR.cc/2018/Conference
|
rJhAvAk0-
|
HJxm_Ag0-
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 Distillation refers to the problem of combining the policies of one or more experts in order to create one 1 Under review as a conference paper at ICLR 2018 2 RELATED WORK 3 FRAMEWORK ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'Deep reinforcement learning has demonstrated increasing capabilities for continuous con- trol problems, including agents that can move with skill and agility through their environ- ment. An open problem in this setting is that of developing good strategies for integrating ', 'modified_lines': 'or merging policies for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a pol- icy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against two alternative baselines. INTRODUCTION 1 As they gain experience, humans develop rich repertoires of motion skills that are useful in different contexts and environments. Recent advances in reinforcement learning provide an opportunity to understand how motion repertoires can best be learned, recalled, and augmented. Inspired by studies on the development and recall of movement patterns useful for different locomotion contextsRoemmich and Bastian (2015), we develop and evaluate an approach for learning multi-skilled movement repertoires. In what follows below, we refer to the proposed method as PLAID: Progressive Learning and Integration via Distillation. For long lived applications of complex control tasks a learning system may need to acquire and integrate additional skills. Accordingly, our problem is defined by the sequential acquisition and integration of new skills. Given an existing controller that is capable of one-or-more skills, we wish to: (a) efficiently learn a new skill or movement pattern in a way that is informed by the existing control policy, and (b) to reintegrate that into a single controller that is capable of the full motion repertoire. This process can then be repeated as necessary. In the process of acquiring a new skill, we also allow for a control policy to be augmented with additional inputs, without adversely impacting its performance. This is a process we refer to as input injection. Understanding the time course of sensorimotor learning in human motor control is an open research prob- lem Wolpert and Flanagan (2016) that exists concurrently with recent advances in deep reinforcement learn- ing. Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in rein- forcement learning. 
While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. single controller that can perform the tasks of a set of experts. It can be cast as a supervised regression problem where the objective is to learn a model that matches the output distributions of all expert poli- cies Parisotto et al. (2015); Teh et al. (2017); Rusu et al. (2015). However, given a new task for which an expert is not given, it is less clear how to both learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent. One well-known technique in machine learning to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) Pan and Yang (2010), which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; 1Accompanying-Aideo there can be negative transfer wherein a previously-trained model can take longer to learn a new task via fine-tuning than would a randomly-initialized model Rajendran et al. (2015). Additionally, while learning a new skill, the control policy should not forget how to perform old skills. The core contribution of this paper is a method (PLAID) to repeatedly expand and integrate a motion repertoire. The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the context of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against two alternative baselines. We also introduce input injection, a convenient mechanism for adding inputs to control policies in support of new skills, while preserving existing capabilities. Transfer learning and distillation are of broad interest in machine learning and RL Pan and Yang (2010); Taylor and Stone (2009); Teh et al. (2017). Here we outline some of the most relevant work in the area of Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the re- inforcement learning setting. A popular approach uses supervised learning to combine each policy by regression over the action distribution. This approach yields model compression Rusu et al. (2015) as well as a viable method for multi-task policy transfer Parisotto et al. (2015) on discrete action domains including the Arcade Learning Environment Bellemare et al. (2013). We adopt these techniques and extend them for the case of complex continuous action space tasks and make use of them as building block. Transfer Learning Transfer learning exploits the structure learned from a previous task in learning a new task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The con- cept of appending additional network structure while keeping the previous structure to reduce catastrophic forgetting has worked well on Atari games Rusu et al. (2015); Parisotto et al. (2015); Rusu et al. (2016); Chen et al. (2015) Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, Shin et al. 
(2017); Li and Hoiem (2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters Kirkpatrick et al. (2017). A different approach to combining policies is to use a hierarchical structure Tessler et al. (2016). In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta- learning to achieve control policies that can quickly adapt their behavior according to current rewards Finn et al. (2017). This work is demonstrated on parameterized task domains. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks Tessler et al. (2016) This allows for the substitution of network modules for different robot types over a similar tasks Devin et al. (2017). Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a complex motor control problem, defining a decomposition of the overall task into smaller tasks Kulkarni et al. (2016); Heess et al. (2016); Peng et al. (2017) While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions can be shaped by prior mocap clips Merel et al. (2017), and that these can then be integrated in a hierarchical controller. ', 'original_lines': 'or merging policies for multiple skills, where each individual skill is a specialist in a spe- cific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique combine expert policies, as eval- uated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against two alternative baselines. INTRODUCTION To-do: Really need to stress that our method with integrating NEW skills, most other methods are just composing many polices without ideas how to best learning a NEW skill Reinforcement Learning has been successfully used for complex control tasks in recent years. There has been multiple solutions for many difficult control tasks, however, little work has been done to combine learned skills. For long lived applications of complex control tasks a learning system may need to acquire and integrate additional skills. This can be seen in motor development of children, where there are a number of stages that are progressed though to learn robust motor skills Adolph and Berger (2007). Continual learn- ing could offer gains in learning speed and allow the development of a single policy capable of performing a wide range of tasks. How to best learn a robust method for combining previously learned skills to create a mutli-skilled controller that is more capable than the individual controllers remains an open question. 
This is a challenging problem for a number of reasons; scalability, forgetting, how best to combine policies and understand which expert to use when. single controller that can perform the tasks of a set of experts. Distillation treats the problem as a supervised regression problem where the objective is to learn a model that matches the output distributions of all expert policies Parisotto et al. (2015); Teh et al. (2017); Rusu et al. (2015). However, what if a new task for which there is no expert is given, what is a reasonable method for learning the new task and including this new skill in the repertoire of the agent? One well-known technique in machine learning to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) Pan and Yang (2010), which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks is not always straightforward; there can be negative transfer, in some cases, a previously-trained model can take longer to learn a new task via fine-tuning than would a randomly-initialized model Rajendran et al. (2015). Here we investigate methods to continually learn and integrate new tasks into an existing policy in an efficient and robust fashion. Importantly, to learn these new skills while not forgetting how to perform old skills. The main building blocks in this work are policy transfer and multi-task policy distillation. First, an existing policy is trained on a new task using TL, TL should increase the speed at which new skills is learned. Then the resulting policy that has learned the new task can be distilled with the existing policy, resulting in a new policy that can perform both tasks, with marginal forgetting. In the distillation step we can also change the policy model design. For example, after learning to walk on flat and inclined ground we can add state features for the local terrain. Here we introduce the option to modify the sensory-motor transform, a model that maps from sensory input to motor or action outputs, by augmenting the policy network to include additional state features. This augmented version of the network will have additional capability to detect subsets of the state space and perform more robust locomotion planning to travel in complex environments. These multi-skilled policies will then be the basis for further integration of new tasks. The combination of these methods constructs a scalable method to integrate new skills into a single policy. Knowledge reuse has been a popular area of machine learning and AI for some time Pan and Yang (2010); Taylor and Stone (2009). There has been a significant amount of work in the area including methods to combine models that have both learned to solve a different task and methods for using information from learning other tasks while learning a new one Teh et al. (2017). Here we outline some of the most relevant work in the area of Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the rein- forcement learning setting. A popular approach is to use supervised learning to combine each policy by regression over the action distribution. This approach has been shown to both yield model compression Rusu et al. (2015) and a viable method for multi-task policy transfer Parisotto et al. 
(2015) on discrete action domains including the Arcade Learning Environment Bellemare et al. (2013). We adopt these tech- niques and extend them for the case of complex continuous action space tasks and make use of them as building block. Transfer Learning There is limited work done in this area on environments with continuous control tasks. The work in Rajendran et al. (2015) proposes that it may be possible but does not show any examples of this. The concept of appending additional network structure while keeping the previous structure to ensure there is no forgetting has worked well on the Atari games Rusu et al. (2015); Parisotto et al. (2015); Rusu et al. (2016); Chen et al. (2015) Similar work uses methods to reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, Shin et al. (2017); Li and Hoiem (2016). There is also the challenge of catastrophic forgetting, where as new skills are learned old ones are forgotten. Some, recent work seeks to negate this issue using selective learning rates for specific network parameters Kirkpatrick et al. (2017). A different approach to combining policies is to use a hierarchical structure Tessler et al. (2016). In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, their approach assumes that the new tasks will be at least a partial composition of previous tasks, and they do not demonstrate reintegration of the newly learned tasks. Hierarchical RL Another related line of work uses modularity to achieve transfer learning for robotic tasks. This modularity enables the swapping of network modules for different robot types over a similar task that was train with one particular robot Devin et al. (2017). Other works use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a complex motor control problem, defining a decomposition of the overall task into smaller tasks Kulkarni et al. (2016); Heess et al. (2016); Peng et al. (2017) Introducing more modularity to enable transfer between differing task domains is a promising direction of research. Al- though, these works examines knowledge transfer it does not examine transfer and reintegration of policies for related tasks and associated problems such as catastrophic forgetting. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'J(π) = Er0,...,rT ', 'paragraph_idx': 14, 'before_section': None, 'context_before': '3.1 REINFORCEMENT LEARNING ', 'modified_lines': 'Leveraging the framework of reinforcement learning, we frame the problem as an Markov Decision Pro- cesses (MDP): at each time step t, the world (including the agent) is in a state st ∈ S, wherein the agent is able to perform actions at ∈ A, sampled from a policy π(st, at) = p(at|st) and resulting in state st+1 ∈ S according to transition probabilities T (st, at, st+1). 
Performing action at from state st produces a reward rt; the expected cumulative reward earned from following some policy π may then be written as: ', 'original_lines': 'Leveraging the framework of reinforcement learning, we frame the general problem as a Markov Decision Processes (MDP): at each time step t, the world (including the agent) is in a state st ∈ S, wherein the agent is able to perform actions at ∈ A, sampled from a policy π(st, at) = p(at|st) and resulting in state st+1 ∈ S according to transition probabilities T (st, at, st+1). Performing action at from state st produces a reward rt; the expected cumulative reward earned from following some policy π may then be written as: ', 'after_paragraph_idx': 14, 'before_paragraph_idx': None}, {'section': '2 (1)', 'after_section': '2 (1)', 'context_after': 'at ∼ π(at | st, θπ) = N (µ(st | θµ), Σ) ', 'paragraph_idx': 15, 'before_section': '2 (1)', 'context_before': '(2) ', 'modified_lines': 'Our policy models a Gaussian distribution with a mean state dependent mean, µθt(st). Thus, Our stochastic policy may be formulated as follows: ', 'original_lines': 'We assume that actions follow a Gaussian distribution around a mean determined by the current state, µθt(st). Thus, Our stochastic policy may be formulated as follows: ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '2 (1)', 'after_section': '2 (1)', 'context_after': '∇θπ J(π(·|θπ)) = ', 'paragraph_idx': 16, 'before_section': '2 (1)', 'context_before': 'i on the diagonal, similar to Peng et al. (2017). ', 'modified_lines': 'To optimize our policy, we use stochastic policy gradient methods, a well-established family of techniques for reinforcement learning Sutton et al. (2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'original_lines': 'To optimize our policy, we employ policy gradient methods, a well-established family of techniques for reinforcement learning Sutton et al. (2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 15}, {'section': '2 (1)', 'after_section': None, 'context_after': 'Aπ(st, at) = I [δt > 0] = ', 'paragraph_idx': 16, 'before_section': '2 (1)', 'context_before': 't=0 γtp0(s0)(s0 → s | t, π0) ds0 is the discounted state distribution, p0(s) represents the initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps Silver et al. (2014). Aπ(s, a) represents an ', 'modified_lines': 'advantage function Schulman et al. (2016). In this work, we use the Positive Temporal Differnce (PTD) update proposed by Van Hasselt (2012) for Aπ(s, a): ', 'original_lines': 'advantage function Schulman et al. (2016). In this work, we adapt the Positive Temporal Differnce (PTD) update proposed by Van Hasselt (2012): ', 'after_paragraph_idx': None, 'before_paragraph_idx': 16}, {'section': '2 (1)', 'after_section': '2 (1)', 'context_after': 'minimize ', 'paragraph_idx': 16, 'before_section': '2 (1)', 'context_before': 'lative reward from following policy π starting in state s. PTD has the benefit of being insensitive to the advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of ac- tions that have a positive advantage has been found to increase the stability of learning Van Hasselt (2012). 
', 'modified_lines': 'Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'original_lines': 'Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv can be learned, which can be formulated as a regression problem: ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 16}, {'section': '2 (1)', 'after_section': '2 (1)', 'context_after': 'does not necessarily produce an optimal mix of the given experts but instead tries to produce an expert that 3.3 TRANSFER LEARNING 3 Under review as a conference paper at ICLR 2018 Although we focus on the problem of being presented with tasks sequentially there exist other methods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills. The first being a MultiTasker: controller that trys to learn multiple tasks at the same time (MultiTasker), evaluate this method as well but we found that learning many skills from scratch was challenging, we were π0 π1 π0 π1 π2 (a) MultiTasker ', 'paragraph_idx': 16, 'before_section': '2 (1)', 'context_before': '3.2 POLICY DISTILLATION ', 'modified_lines': 'Given a set expert agents that have solved/mastered different tasks we may want to combine the skills of these different experts into a single multi-skilled agent. This process is referred to as distillation. Distillation best matches the action distributions produced by all experts. This can be preferred because this method functions independent of the reward functions used to train each expert. Distillation also scales well with respect to the number of tasks or experts that are being combined. Given an expert that has solved/mastered a task we want to reuse that expert knowledge in order to learn a new task efficiently. This problem falls in the area of Transfer Learning Pan and Yang (2010). Considering the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the Dωi and Dωi+1 state distributions. 4 PROGRESSIVE LEARNING where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately Parisotto et al. (2015). The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 1c. This method works well for both combining learned skills and learning new skills. L0 L1 . . . Lω−1 Lω L0 L1 . . . Lω−1 Lω . . . 
πω πω+1 D πω+1 ', 'original_lines': 'When there exist many agents that have solved/mastered different tasks we may want to combine the skills of these different experts into a single expert agent. This process is referred to as distillation. Distillation best matches the action distributions produced by the experts. This can be preferred because this method can function independent of the separate reward functions used when training each expert. Here we also favour distillation, as the method should scale better to the number of tasks or experts that are being combined. Given an expert that has solved/mastered a task we want to reuse the knowledge the expert has in order to learn a new task. This problem falls in the area of Transfer Learning Pan and Yang (2010). When considering the distribution of states the existing expert is skilled at solving, (Dωi the source distribution) it may be advantageous to start learning the new, target task ωi+1 with target distribution Dωi+1 using assistance from the existing expert. The agent learning how to solve the target task with domain Dωi+1 is sometimes referred to as the student. When the expert is used to assist the student in learning the target task and can be referred to as the teacher. The success of these methods rely on the assumption that there is some overlap between the Dωi and Dωi+1 state distributions. 4 PROGRESSIVE LEARNING AND INTEGRATION In this section we detail our proposed learning framework for continual policy transfer and distillation (Progressive Learning and Integration via Distillation (PLAiD)). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here we assume the task to be somewhat similar to previous tasks ωi such that transfer can be beneficial. The more skilled an agent is the more likely it will have some knowledge that will assist in learning a new task. We adopt the straightforward transfer learning strategy of using an existing policy network and fine-tuning it to the new task. Since we are not interested in retaining any of our previous skills in this step, we can update this policy without concern for forgetting. In the integration (distillation) step, we are interested in combining all past skills (π0, . . . , πi) with the newly acquired one πi+1. Traditional approaches have commonly used policy regression where the tuples for the regression are generated by running trajectories of the expert policy on its task. Training the student on these trajectories does not always result in robust behaviour. This poor behaviour is because the student experiences a different state distribution than the expert during evaluation. To compensate for this state distribution difference sections of the trajectories should be generated by the student, this allows the expert to suggest behaviour that will pull the student state distribution closer to the expert’s. This is a common problem in creating a model to reproduce a given distribution of trajectories Ross et al. (2010); Bengio et al. (2015); Martinez et al. (2017); Lamb et al. (2016). We use the DAgger algorithm and demonstrate that for our locomotion problems, it is crucial to the robustness of distilled policies Parisotto et al. (2015).As our reinforcement learning algorithms is an actor-critic method, we also perform regression on the critic by fitting both in the same step. 
4.1 HIGH LEVEL EXPERIMENT DESIGN For the results presented in this work, the range of tasks to be solved share a similar action space and state space, as our main focus is to demonstrate continual learning between related tasks. However, the conceptual framework allows for simple extensions that would permit differing action or state spaces, since we can use a new network model as the student policy in the distillation step. In Figure 1 a flow diagram of PLAiD, showing how new skills are learned and integrated into a new policy, is given. Figure 1: A flow diagram for PLAiD that can continuously integrating new skills. Given an initially random (π0) or expert policy ˆπi on Di, perform transfer learning (TL) starting from that policy to master a new skill, producing a new policy πi+1 that is an expert on Di+1. After learning the new skill, policies ˆπi and πi+1 (cid:83) Di+1. are distilled into a new policy that has the capabilities of both, ˆπi and πi+1, i.e. an expert in Di During distillation it is possible to alter the design of the policy model, possibly reducing its size. This new distilled policy ˆπi+1 is used as the initial policy ˆπi for the next skill to be learned. 5 RESULTS The method in this work is evaluated on a humanoid robotic simulation. Here a bipedal character is trained to navigate multiple types of terrain, including flat and incline, shown in Figure 2a and Figure 2b as well as steps (Figure 2c), slopes (Figure 2d), gaps (Figure 2e) and a combination of all the terrains mixed (Fig- ure 2f). The environment used is similar to the environments used in Peng and van de Panne (2016) where the goal is to learn the pd-targets to establish a robust walking gait. We add additional terrain types to this problem as well. An example of a biped trained to walk on an steps is shown in Figure 2c. In this experiment, our set of tasks consists of To-do: 5 different terrains that a 2D humanoid walker learns to traverse. The details of the learning tasks as well as the reinforcement learning algorithm closely follow those outlined in Peng and van de Panne (2016) for flat terrain only. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, we further use a supplied motion capture clip of a natural human walking gait on flat ground. The 2D humanoid receives as input both a character and terrain state representation. A detailed description of the experimental setup is included in Section: 8.4. The tasks 4 TLDistill ii+1i+1i+1ˆ0ˆ Under review as a conference paper at ICLR 2018 (a) flat (b) incline (c) steps (d) slopes (e) gaps (f) mixed Figure 2: The environments used to evaluate PLAiD. are presented to the continual learner sequentially and the goal is to progressively learn to traverse all terrain types. where a number of skills are learned at the same time. It has been shown that learning may tasks together can be faster than learning each on separately Parisotto et al. (2015). The curriculum for using this method is shown in Figure 3a were during a single RL simulation time is spent learning each task. It is also possible to learn each separately but in parallel and then combine the resulting policies Figure 3b. We attempted to only able to get fair results for the flat task. Also, when a new task is learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 3c. 
This method works well for both combining learned skills and learning new skills. TLflat TLincline TLsteps TLslopes TLflat TLincline TLsteps TLslopes π3 π4 (cid:83) π5 ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 16}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Chen, T., Goodfellow, I., and Shlens, J. (2015). Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'editors, Advances in Neural Information Processing Systems 28, pages 1171–1179. Curran Associates, Inc. ', 'modified_lines': '', 'original_lines': '8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 16:00:24
|
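The revision record above quotes the paper's reinforcement-learning setup: a Gaussian policy π(a|s) = N(μ(s|θ), Σ), a policy-gradient step weighted by a positive-temporal-difference (PTD) advantage I[δ_t > 0], and a regression objective for the value function. The sketch below only illustrates those quoted update rules under simplifying assumptions of my own (linear function approximators, a toy stand-in environment, made-up learning rates); it is not the paper's implementation, which uses neural networks.

```python
import numpy as np

# Minimal sketch, assuming linear-Gaussian policy and linear value function
# (the paper itself uses neural networks). It illustrates: a Gaussian policy
# pi(a|s) = N(mu(s), diag(sigma^2)), a policy-gradient step gated by the
# positive-temporal-difference advantage I[delta_t > 0], and a TD-style
# regression update for the value function.

rng = np.random.default_rng(0)
state_dim, action_dim = 8, 3
gamma = 0.99
sigma = 0.2 * np.ones(action_dim)                                 # fixed diagonal std-devs
theta_pi = 0.01 * rng.standard_normal((state_dim, action_dim))    # policy mean weights
theta_v = np.zeros(state_dim)                                     # value-function weights

def mu(s):                       # state-dependent mean of the Gaussian policy
    return s @ theta_pi

def value(s):                    # linear value-function approximation V(s | theta_v)
    return s @ theta_v

def sample_action(s):
    return mu(s) + sigma * rng.standard_normal(action_dim)

def grad_log_pi(s, a):           # gradient of log N(a; mu(s), diag(sigma^2)) w.r.t. theta_pi
    return np.outer(s, (a - mu(s)) / sigma ** 2)

def update(s, a, r, s_next, lr_pi=1e-3, lr_v=1e-2):
    global theta_pi, theta_v
    delta = r + gamma * value(s_next) - value(s)   # TD error
    advantage = 1.0 if delta > 0 else 0.0          # PTD advantage I[delta > 0]
    theta_pi += lr_pi * advantage * grad_log_pi(s, a)
    theta_v += lr_v * delta * s                    # gradient step on (r + gamma*V(s') - V(s))^2

# toy rollout against placeholder dynamics, just to show the updates being applied
s = rng.standard_normal(state_dim)
for t in range(5):
    a = sample_action(s)
    s_next = rng.standard_normal(state_dim)        # placeholder transition
    r = -np.linalg.norm(a)                         # placeholder reward
    update(s, a, r, s_next)
    s = s_next
```

Gating the policy-gradient step on the sign of the TD error, as the quoted text notes, keeps the update insensitive to the scale of the advantage estimate.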
ICLR.cc/2018/Conference
|
HJxm_Ag0-
|
HJ4uMeWRZ
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Distillation refers to the problem of combining the policies of one or more experts in order to create one The core contribution of this paper is a method (PLAID) to repeatedly expand and integrate a motion repertoire. The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the context of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against two alternative baselines. We also introduce input injection, a convenient mechanism for adding inputs to control policies in support of new skills, while ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'injection. Understanding the time course of sensorimotor learning in human motor control is an open research prob- ', 'modified_lines': 'lem ? that exists concurrently with recent advances in deep reinforcement learning. Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in reinforcement learning. While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. single controller that can perform the tasks of a set of experts. It can be cast as a supervised regression prob- lem where the objective is to learn a model that matches the output distributions of all expert policies ???. However, given a new task for which an expert is not given, it is less clear how to learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent. One well-known technique in machine learning to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) ?, which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; there can be negative transfer wherein a previously-trained model can take longer to learn a new task via fine-tuning than would a randomly-initialized model ?. Additionally, while learning a new skill, the control policy should not forget how to perform old skills. 1 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'lem Wolpert and Flanagan (2016) that exists concurrently with recent advances in deep reinforcement learn- ing. Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in rein- forcement learning. While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. single controller that can perform the tasks of a set of experts. 
It can be cast as a supervised regression problem where the objective is to learn a model that matches the output distributions of all expert poli- cies Parisotto et al. (2015); Teh et al. (2017); Rusu et al. (2015). However, given a new task for which an expert is not given, it is less clear how to both learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent. One well-known technique in machine learning to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) Pan and Yang (2010), which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; 1Accompanying-Aideo 1 Under review as a conference paper at ICLR 2018 there can be negative transfer wherein a previously-trained model can take longer to learn a new task via fine-tuning than would a randomly-initialized model Rajendran et al. (2015). Additionally, while learning a new skill, the control policy should not forget how to perform old skills. ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 4}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': 'Transfer Learning Transfer learning exploits the structure learned from a previous task in learning a new 3 FRAMEWORK ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '2 RELATED WORK ', 'modified_lines': 'Transfer learning and distillation are of broad interest in machine learning and RL ???. Here we outline some of the most relevant work in the area of Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the rein- forcement learning setting. A popular approach uses supervised learning to combine each policy by regres- sion over the action distribution. This approach yields model compression ? as well as a viable method for multi-task policy transfer ? on discrete action domains including the Arcade Learning Environment ?. We adopt these techniques and extend them for the case of complex continuous action space tasks and make use of them as building block. task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The concept of appending additional network structure while keeping the previous structure to reduce catas- trophic forgetting has worked well on Atari games ???? Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, ??. Recent work seeks to mitigate this issue using selective learning rates for specific network parameters ?. A different approach to combining policies is to use a hierarchical structure ?. In this setting, previously-learned policies are avail- able as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta-learning to achieve control policies that can quickly adapt their behavior according to current rewards ?. This work is demonstrated on parameterized task domains. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks ? This allows for the substitution of network modules for different robot types over a similar tasks ?. 
Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a complex motor control problem, defining a decomposition of the overall task into smaller tasks ??? While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions that can be shaped by prior mocap clips ?, and that these can then be integrated in a hierarchical controller. ', 'original_lines': 'Transfer learning and distillation are of broad interest in machine learning and RL Pan and Yang (2010); Taylor and Stone (2009); Teh et al. (2017). Here we outline some of the most relevant work in the area of Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the re- inforcement learning setting. A popular approach uses supervised learning to combine each policy by regression over the action distribution. This approach yields model compression Rusu et al. (2015) as well as a viable method for multi-task policy transfer Parisotto et al. (2015) on discrete action domains including the Arcade Learning Environment Bellemare et al. (2013). We adopt these techniques and extend them for the case of complex continuous action space tasks and make use of them as building block. task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The con- cept of appending additional network structure while keeping the previous structure to reduce catastrophic forgetting has worked well on Atari games Rusu et al. (2015); Parisotto et al. (2015); Rusu et al. (2016); Chen et al. (2015) Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, Shin et al. (2017); Li and Hoiem (2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters Kirkpatrick et al. (2017). A different approach to combining policies is to use a hierarchical structure Tessler et al. (2016). In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta- learning to achieve control policies that can quickly adapt their behavior according to current rewards Finn et al. (2017). This work is demonstrated on parameterized task domains. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks Tessler et al. (2016) This allows for the substitution of network modules for different robot types over a similar tasks Devin et al. (2017). Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a complex motor control problem, defining a decomposition of the overall task into smaller tasks Kulkarni et al. (2016); Heess et al. (2016); Peng et al. (2017) While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions can be shaped by prior mocap clips Merel et al. (2017), and that these can then be integrated in a hierarchical controller. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'according to transition probabilities T (st, at, st+1). Performing action at from state st produces a reward rt; the expected cumulative reward earned from following some policy π may then be written as: ', 'paragraph_idx': 13, 'before_section': None, 'context_before': '3.1 REINFORCEMENT LEARNING ', 'modified_lines': 'Leveraging the framework of reinforcement learning, we frame the problem as a Markov Decision Processes (MDP): at each time step t, the world (including the agent) is in a state st ∈ S, wherein the agent is able to perform actions at ∈ A, sampled from a policy π(st, at) = p(at|st) and resulting in state st+1 ∈ S ', 'original_lines': 'Leveraging the framework of reinforcement learning, we frame the problem as an Markov Decision Pro- cesses (MDP): at each time step t, the world (including the agent) is in a state st ∈ S, wherein the agent is able to perform actions at ∈ A, sampled from a policy π(st, at) = p(at|st) and resulting in state st+1 ∈ S ', 'after_paragraph_idx': 13, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(1) where T is the time horizon, and γ is the discount factor, defining the planning horizon length. The agent’s goal is to learn an optimal policy, π∗, maximizing J(π). If the policy has parameters θπ, then ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 't=0 ', 'modified_lines': '', 'original_lines': '2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 (2)', 'after_section': '2 (2)', 'context_after': '∇θπ J(π(·|θπ)) = ', 'paragraph_idx': 14, 'before_section': '2 (2)', 'context_before': 'where Σ is a diagonal covariance matrix with entries σ2 ', 'modified_lines': 'i on the diagonal, similar to ?. To optimize our policy, we use stochastic policy gradient methods, which are well-established family of techniques for reinforcement learning ?. The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'original_lines': 'i on the diagonal, similar to Peng et al. (2017). To optimize our policy, we use stochastic policy gradient methods, a well-established family of techniques for reinforcement learning Sutton et al. (2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 14}, {'section': '2 (2)', 'after_section': None, 'context_after': 'Aπ(st, at) = I [δt > 0] = ', 'paragraph_idx': 15, 'before_section': '2 (2)', 'context_before': 'where dθ = (cid:82) t=0 γtp0(s0)(s0 → s | t, π0) ds0 is the discounted state distribution, p0(s) represents the ', 'modified_lines': 'initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps ?. Aπ(s, a) represents an advantage function ?. In this work, we use the Positive Temporal Difference (PTD) update proposed by ? for Aπ(s, a): ', 'original_lines': 'initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps Silver et al. (2014). Aπ(s, a) represents an advantage function Schulman et al. (2016). 
In this work, we use the Positive Temporal Differnce (PTD) update proposed by Van Hasselt (2012) for Aπ(s, a): ', 'after_paragraph_idx': None, 'before_paragraph_idx': 15}, {'section': '2 (2)', 'after_section': '2 (2)', 'context_after': 'minimize ', 'paragraph_idx': 15, 'before_section': '2 (2)', 'context_before': 'where Vπ(s) = E is the value function, which gives the expected discounted cumu- lative reward from following policy π starting in state s. PTD has the benefit of being insensitive to the ', 'modified_lines': 'advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of actions that have a positive advantage has been found to increase the stability of learning ?. Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'original_lines': 'advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of ac- tions that have a positive advantage has been found to increase the stability of learning Van Hasselt (2012). Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '2 (2)', 'after_section': '2 (2)', 'context_after': 'these different experts into a single multi-skilled agent. This process is referred to as distillation. Distillation does not necessarily produce an optimal mix of the given experts but instead tries to produce an expert that 3.3 TRANSFER LEARNING Given an expert that has solved/mastered a task we want to reuse that expert knowledge in order to learn a 3 Under review as a conference paper at ICLR 2018 π0 ', 'paragraph_idx': 15, 'before_section': '2 (2)', 'context_before': '3.2 POLICY DISTILLATION ', 'modified_lines': 'Given a set of expert agents that have solved/mastered different tasks we may want to combine the skills of best matches the action distributions produced by all experts. This method functions independent of the reward functions used to train each expert. Distillation also scales well with respect to the number of tasks or experts that are being combined. new task efficiently. This problem falls in the area of Transfer Learning ?. Considering the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the Dωi and Dωi+1 state distributions. 4 PROGRESSIVE LEARNING Although we focus on the problem of being presented with tasks sequentially, there exist other methods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills. The first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately ?. The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned together. 
It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 1c. This method works well for both combining learned skills and learning new skills. ', 'original_lines': 'Given a set expert agents that have solved/mastered different tasks we may want to combine the skills of best matches the action distributions produced by all experts. This can be preferred because this method functions independent of the reward functions used to train each expert. Distillation also scales well with respect to the number of tasks or experts that are being combined. new task efficiently. This problem falls in the area of Transfer Learning Pan and Yang (2010). Considering the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the Dωi and Dωi+1 state distributions. 4 PROGRESSIVE LEARNING Although we focus on the problem of being presented with tasks sequentially there exist other methods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills. The first being a MultiTasker: controller that trys to learn multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately Parisotto et al. (2015). The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 1c. This method works well for both combining learned skills and learning new skills. ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '4.2 HIGH LEVEL EXPERIMENT DESIGN The results presented in this work cover a range of tasks that share a similar action space and state space. Our focus is to demonstrate continual learning between related tasks. However, the conceptual framework allows for extensions that would permit differing state spaces, described later in Section: 5.2. 
5 RESULTS ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, portions of the trajectories should be generated by the student, this allows the expert to suggest behaviour that will pull the state distribution of the student closer to the expert’s. This is a common problem in ', 'modified_lines': 'learning a model to reproduce a given distribution of trajectories ????. We use a method similar to the DAgger algorithm ? which has been found to be useful for distilling policies ?.As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. ', 'original_lines': 'learning a model to reproduce a given distribution of trajectories Ross et al. (2010); Bengio et al. (2015); Martinez et al. (2017); Lamb et al. (2016). We use a method similar to the DAgger algorithm Ross et al. (2010) which has been found to be useful for distilling policies Parisotto et al. (2015).As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. 4 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 6}, {'section': '4 PROGRESSIVE LEARNING', 'after_section': None, 'context_after': '5.1 TRANSFER LEARNING ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Figure 2: The environments used to evaluate PLAID. We evaluate our approach against two baselines. Firstly, we compare the above learning curriculum from ', 'modified_lines': 'learning new tasks in PLAID with learning new tasks from randomly initialized controller (Scratch). This will demonstrate that knowledge from previous tasks can be effectively transferred after distillation steps. Second, we compare to the MultiTasker, to demonstrate that iterated distillation is effective for retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. ', 'original_lines': 'learning new tasks in PLAID with learning new tasks from Scratch: randomly initilized controller (Scratch). This will demonstrate that knowledge from previous tasks can be effectively transferred after distillation steps. Second, we compare to the MultiTasker, to demonstrate that iterated distillation is effective for retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 RESULTS', 'after_section': None, 'context_after': '5 ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'terrain. These new terrain features can assist the agent in identifying which task domain it is operating in. We introduce the idea of feature injection for this purpose. We augment a policy with additional input ', 'modified_lines': 'features while allowing it to retain its original functional behaviour similar to ?. This is achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. 
', 'original_lines': 'features while allowing it to retain its original functional behaviour similar to Chen et al. (2015). This is ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '5.3 DISTILLING MULTIPLE POLICIES ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the most recent expert but alternates between each environment during training. The learning for PLAID is split into two steps, first the TL part in green followed by the distillation part in red. Using TL assists in the learning of new tasks. ', 'modified_lines': '', 'original_lines': ' achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 PROGRESSIVE LEARNING', 'after_section': None, 'context_after': 'There are some indications that distillation is hindering during the initial few training iterations. We are initializing the network used in distillation with the most recently learning policy after TL. The large ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'solution. This can be seen in Figure 4d, 4e and especially in 4f where PLAID combines the skills faster, and can find higher value policies in practice. PLAID also presents zero-shot training on tasks it has never trained on. In Figure 5 this generalization is shown as the agent navigate across the mixed environment. ', 'modified_lines': 'The results of the controller are also displayed in the accompanying Video 1 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '6 DISCUSSION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'gradients to appear, disrupting some of the structure learned during the TL step, shown in Figure 4d and 4e. There also might not exist a smooth transition in policy space between the newly learned policy and the previous policy distribution. ', 'modified_lines': '', 'original_lines': ' Forgetting To-do: Need that TL only, forget baseline ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'MultiTasker vs PLAID: The MultiTasker may be able to produce a policy that has higher overall average reward but in practice constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks the MultiTasker can favour the task with higher rewards, as this tasks ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. ', 'modified_lines': '', 'original_lines': 'To-do: I wonder if the distillation also provides some form of bias reduction that helps generalization and transfer learning... 
6 0100000200000300000400000500000600000Iterations05001000150020002500RewardGapsMultiTaskerScratchTransferDistill Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAID on 3 tasks (e) PLAID on 4 tasks (f) PLAID on 5 tasks Figure 4: These figures show the average reward a particular policy achieves over a number of tasks. After learning an expert for flat + incline a new steps task is trained. Figure (a) shows the performance for the MultiTasker and figure (c) for the distiller. The distiller learns the combined tasks fast, however the MultiTasker achieves marginally better average reward over the tasks Figures (b,e) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker and distiller. Last the MultiTasker (c) and PLAID (f) are trained on gaps. (a) (b) (c) (d) (e) (f) (g) Figure 5: Still frame shots of the pd-biped traversing the mixed environment. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 DISCUSSION', 'after_section': None, 'context_after': '7 CONCLUSION 8 APPENDIX ', 'paragraph_idx': 31, 'before_section': '6 DISCUSSION', 'context_before': '6.2 FUTURE WORK: ', 'modified_lines': 'It would be of interest to develop a method for task prioritization during the distillation step. This could assist the agent with forgetting issues or help with relearning tasks. While we currently use the Mean Squared Error (MSE) to pull the distributions of student policies in line with expert polices for distillation, better distance metrics would likely be helpful. Previous methods have used KL Divergence in the discrete action space domain where the state-action value function encodes the policy, e.g., as with Deep Q-Network (DQN). In this work we do not focus on producing the best policy from a mixture of experts, but instead we match the distributions from a number of experts. The difference is subtle but in practice it can be more difficult to balance many experts with respect to their reward functions. It could also be beneficial to use a KL penalty while performing distillation, i.e., something similar to the work in ? in order to keep the policy from changing too rapidly during training. We have proposed and evaluated a method for the progressive learning and integration (via distillation) of motion skills. The method exploits transfer learning to speed learning of new skills, along with input injec- tion where needed, as well as continuous-action distillation, using DAGGER-style learning. This compares favorably to baselines consisting of learning all skills together, or learning all the skills individually before integration. We believe that there remains much to learned about the best training and integration methods for movement skill repertoires, as is also reflected in the human motor learning literature. ', 'original_lines': 'It could be helpful to perform some kind of task prioritization during the distillation step. This could assist the agent with forgetting issues or help with relearning tasks. Here we use the Mean Squared Error (MSE) to pull the distributions of student policies in line with expert polices for distillation. It could be more advantageous to use a better metric for distance between policies. Previous methods have used KL Divergence in the discrete action space domain where the value function, encodes the policy, for example, with Deep Q-Netowrk (DQN). 
In this work we are not focusing on producing the best policy from a mixture of experts, but instead matching the distributions from a number of experts. The difference is subtle but in practice it can be more difficult to balance many experts with respect to their reward functions. It could also be beneficial to use some kind of KL penalty while performing distillation. Potentially something similar to the work in Teh et al. (2017), that will help keep the policy from shifting to fast during training. Using PLAID we gain the benefits of distillation as a supervised method to integrate policies together. This being more efficient then learning all skills together, and is a favourable method for integrating multiple skills together. Ideally, the more skilled an agent is the easier it should be to learn new skills. REFERENCES Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 47:253–279. Bengio, S., Vinyals, O., Jaitly, N., and Shazeer, N. (2015). Scheduled sampling for sequence prediction with recurrent neural networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 1171–1179. Curran Associates, Inc. Chen, T., Goodfellow, I., and Shlens, J. (2015). Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641. Devin, C., Gupta, A., Darrell, T., Abbeel, P., and Levine, S. (2017). Learning modular neural network policies for multi-task and multi-robot transfer. In Robotics and Automation (ICRA), 2017 IEEE Inter- national Conference on, pages 2169–2176. IEEE. Finn, C., Abbeel, P., and Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400. Heess, N., Wayne, G., Tassa, Y., Lillicrap, T. P., Riedmiller, M. A., and Silver, D. (2016). Learning and transfer of modulated locomotor controllers. CoRR, abs/1610.05182. Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. (2017). 8 Under review as a conference paper at ICLR 2018 Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sci- ences, 114(13):3521–3526. Kulkarni, T. D., Narasimhan, K., Saeedi, A., and Tenenbaum, J. (2016). Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in Neural Information Processing Systems 29, pages 3675–3683. Lamb, A. M., ALIAS PARTH GOYAL, A. G., Zhang, Y., Zhang, S., Courville, A. C., and Bengio, Y. (2016). Professor forcing: A new algorithm for training recurrent networks. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R., editors, Advances in Neural Information Processing Systems 29, pages 4601–4609. Curran Associates, Inc. Li, Z. and Hoiem, D. (2016). Learning without forgetting. CoRR, abs/1606.09282. Martinez, J., Black, M. J., and Romero, J. (2017). On human motion prediction using recurrent neural networks. CoRR, abs/1705.02445. Merel, J., Tassa, Y., Srinivasan, S., Lemmon, J., Wang, Z., Wayne, G., and Heess, N. (2017). Learning human behaviors from motion capture by adversarial imitation. arXiv preprint arXiv:1707.02201. Pan, S. J. and Yang, Q. (2010). A survey on transfer learning. 
IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359. Parisotto, E., Ba, J. L., and Salakhutdinov, R. (2015). Actor-mimic: Deep multitask and transfer reinforce- ment learning. arXiv preprint arXiv:1511.06342. Peng, X. B., Berseth, G., Yin, K., and Van De Panne, M. (2017). Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (TOG), 36(4):41. Peng, X. B. and van de Panne, M. (2016). Learning locomotion skills using deeprl: Does the choice of action space matter? CoRR, abs/1611.01055. Rajendran, J., Lakshminarayanan, A. S., Khapra, M. M., Prasanna, P., and Ravindran, B. (2015). Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain. arXiv preprint arXiv:1510.02879. Roemmich, R. T. and Bastian, A. J. (2015). Two ways to save a newly learned motor pattern. Journal of neurophysiology, 113(10):3519–3530. Ross, S., Gordon, G. J., and Bagnell, J. A. (2010). No-regret reductions for imitation learning and structured prediction. CoRR, abs/1011.0686. Rusu, A. A., Colmenarejo, S. G., Gulcehre, C., Desjardins, G., Kirkpatrick, J., Pascanu, R., Mnih, V., Kavukcuoglu, K., and Hadsell, R. (2015). Policy distillation. arXiv preprint arXiv:1511.06295. Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., and Hadsell, R. (2016). Progressive Neural Networks. arXiv. Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. (2016). High-dimensional continuous con- trol using generalized advantage estimation. In International Conference on Learning Representations (ICLR 2016). Shin, H., Lee, J. K., Kim, J., and Kim, J. (2017). Continual learning with deep generative replay. arXiv preprint arXiv:1705.08690. Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. (2014). Deterministic policy gradient algorithms. In ICML. Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y. (2000). Policy gradient methods for rein- forcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063. Taylor, M. E. and Stone, P. (2009). Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633–1685. Teh, Y. W., Bapst, V., Czarnecki, W. M., Quan, J., Kirkpatrick, J., Hadsell, R., Heess, N., and Pascanu, R. (2017). Distral: Robust multitask reinforcement learning. arXiv preprint arXiv:1707.04175. Tessler, C., Givony, S., Zahavy, T., Mankowitz, D. J., and Mannor, S. (2016). A Deep Hierarchical Approach to Lifelong Learning in Minecraft. arXiv, pages 1–6. 9 Under review as a conference paper at ICLR 2018 Van Hasselt, H. (2012). Reinforcement learning in continuous state and action spaces. In Reinforcement Learning, pages 207–251. Springer. Wolpert, D. M. and Flanagan, J. R. (2016). Computations underlying sensorimotor learning. Current opinion in neurobiology, 37:7–11. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 31}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Terrain Types All terrain types are randomly generated per episode, except for the flat terrain. The incline terrain is slanted and the slant of the terrain is randomly sampled between 20 and 25 degrees. 
The steps terrain consists of flat segments with widths randomly sampled from 1.0 m to 1.5 m followed by sharp steps ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'How does pd-biped work details on reward function ', 'modified_lines': '', 'original_lines': '10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 17:52:44
|
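The record above also describes "input injection": augmenting an existing policy with extra terrain features by zero-initialising the weights that connect the new inputs, so the augmented network initially reproduces the original behaviour. Below is a minimal sketch of that idea under assumptions of my own: a small stand-in network, the hypothetical helper name `inject_inputs`, and new features wired straight into the first layer rather than through extra randomly initialised sub-layers as the revision text describes.

```python
import torch
import torch.nn as nn

# Sketch only: new input features enter an existing network through
# zero-initialised weights, so the augmented policy computes the same
# function as before until training moves those weights.

def inject_inputs(first_layer: nn.Linear, n_new: int) -> nn.Linear:
    """Return a Linear layer accepting n_new extra inputs, with the new columns zeroed."""
    old_in, out = first_layer.in_features, first_layer.out_features
    new_layer = nn.Linear(old_in + n_new, out)
    with torch.no_grad():
        new_layer.weight.zero_()
        new_layer.weight[:, :old_in] = first_layer.weight   # copy old weights
        new_layer.bias.copy_(first_layer.bias)               # keep old biases
    return new_layer

# small stand-in policy network, not the paper's architecture
policy = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))

s = torch.randn(5, 10)          # batch of original state features
terrain = torch.randn(5, 4)     # hypothetical new terrain features
old_out = policy(s)

policy[0] = inject_inputs(policy[0], n_new=4)
new_out = policy(torch.cat([s, terrain], dim=1))

assert torch.allclose(old_out, new_out)   # behaviour preserved at initialisation
```

Because the injected columns start at exactly zero, the outputs match at initialisation, while those weights still receive gradients once training on the new task resumes.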
ICLR.cc/2018/Conference
|
HJ4uMeWRZ
|
HJCXTx-Ab
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'For long lived applications of complex control tasks a learning system may need to acquire and integrate additional skills. Accordingly, our problem is defined by the sequential acquisition and integration of new ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'As they gain experience, humans develop rich repertoires of motion skills that are useful in different contexts and environments. Recent advances in reinforcement learning provide an opportunity to understand how motion repertoires can best be learned, recalled, and augmented. Inspired by studies on the development ', 'modified_lines': 'and recall of movement patterns useful for different locomotion contexts Roemmich and Bastian (2015), we develop and evaluate an approach for learning multi-skilled movement repertoires. In what follows, we refer to the proposed method as PLAID: Progressive Learning and Integration via Distillation. ', 'original_lines': 'and recall of movement patterns useful for different locomotion contexts ?, we develop and evaluate an approach for learning multi-skilled movement repertoires. In what follows, we refer to the proposed method as PLAID: Progressive Learning and Integration via Distillation. ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Distillation refers to the problem of combining the policies of one or more experts in order to create one The core contribution of this paper is a method (PLAID) to repeatedly expand and integrate a motion repertoire. The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the context of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against two alternative baselines. We also introduce input injection, a convenient mechanism for adding inputs to control policies in support of new skills, while ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'injection. Understanding the time course of sensorimotor learning in human motor control is an open research prob- ', 'modified_lines': 'lem Wolpert and Flanagan (2016) that exists concurrently with recent advances in deep reinforcement learn- ing. Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in rein- forcement learning. While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. single controller that can perform the tasks of a set of experts. It can be cast as a supervised regression problem where the objective is to learn a model that matches the output distributions of all expert poli- cies Parisotto et al. (2015); Teh et al. (2017); Rusu et al. (2015). However, given a new task for which an expert is not given, it is less clear how to learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent. 
One well-known technique in machine learn- ing to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) Pan and Yang (2010), which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; there can be negative transfer wherein a previously-trained model can take longer to learn a new task via fine-tuning than would a randomly-initialized model Rajendran et al. (2015). Additionally, while learning a new skill, the control policy should not forget how to perform old skills. 1 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'lem ? that exists concurrently with recent advances in deep reinforcement learning. Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in reinforcement learning. While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. single controller that can perform the tasks of a set of experts. It can be cast as a supervised regression prob- lem where the objective is to learn a model that matches the output distributions of all expert policies ???. However, given a new task for which an expert is not given, it is less clear how to learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent. One well-known technique in machine learning to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) ?, which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; there can be negative transfer wherein a previously-trained model can take longer to learn a new task via fine-tuning than would a randomly-initialized model ?. Additionally, while learning a new skill, the control policy should not forget how to perform old skills. 1 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 4}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': 'Transfer Learning Transfer learning exploits the structure learned from a previous task in learning a new 3 FRAMEWORK ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '2 RELATED WORK ', 'modified_lines': 'Transfer learning and distillation are of broad interest in machine learning and RL Pan and Yang (2010); Taylor and Stone (2009); Teh et al. (2017). Here we outline some of the most relevant work in the area of Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the re- inforcement learning setting. A popular approach uses supervised learning to combine each policy by regression over the action distribution. This approach yields model compression Rusu et al. (2015) as well as a viable method for multi-task policy transfer Parisotto et al. (2015) on discrete action domains including the Arcade Learning Environment Bellemare et al. 
(2013). We adopt these techniques and extend them for the case of complex continuous action space tasks and make use of them as building block. task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The con- cept of appending additional network structure while keeping the previous structure to reduce catastrophic forgetting has worked well on Atari games Rusu et al. (2015); Parisotto et al. (2015); Rusu et al. (2016); Chen et al. (2015) Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, Shin et al. (2017); Li and Hoiem (2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters Kirkpatrick et al. (2017). A different approach to combining policies is to use a hierarchical structure Tessler et al. (2016). In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta- learning to achieve control policies that can quickly adapt their behavior according to current rewards Finn et al. (2017). This work is demonstrated on parameterized task domains. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks Tessler et al. (2016) This allows for the substitution of network modules for different robot types over a similar tasks Devin et al. (2017). Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a com- plex motor control problem, defining a decomposition of the overall task into smaller tasks Kulkarni et al. (2016); Heess et al. (2016); Peng et al. (2017) While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions that can be shaped by prior mocap clips Merel et al. (2017), and that these can then be integrated in a hierarchical controller. ', 'original_lines': 'Transfer learning and distillation are of broad interest in machine learning and RL ???. Here we outline some of the most relevant work in the area of Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the rein- forcement learning setting. A popular approach uses supervised learning to combine each policy by regres- sion over the action distribution. This approach yields model compression ? as well as a viable method for multi-task policy transfer ? on discrete action domains including the Arcade Learning Environment ?. We adopt these techniques and extend them for the case of complex continuous action space tasks and make use of them as building block. task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The concept of appending additional network structure while keeping the previous structure to reduce catas- trophic forgetting has worked well on Atari games ???? Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, ??. Recent work seeks to mitigate this issue using selective learning rates for specific network parameters ?. 
A different approach to combining policies is to use a hierarchical structure ?. In this setting, previously-learned policies are avail- able as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta-learning to achieve control policies that can quickly adapt their behavior according to current rewards ?. This work is demonstrated on parameterized task domains. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks ? This allows for the substitution of network modules for different robot types over a similar tasks ?. Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a complex motor control problem, defining a decomposition of the overall task into smaller tasks ??? While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions that can be shaped by prior mocap clips ?, and that these can then be integrated in a hierarchical controller. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(2) Our policy models a Gaussian distribution with a mean state dependent mean, µθt(st). Thus, our stochastic policy may be formulated as follows: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'J(π(·|θπ)) ', 'modified_lines': '', 'original_lines': '2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'To optimize our policy, we use stochastic policy gradient methods, which are well-established family of ∇θπ J(π(·|θπ)) = ', 'paragraph_idx': 14, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': 'where Σ is a diagonal covariance matrix with entries σ2 ', 'modified_lines': 'i on the diagonal, similar to Peng et al. (2017). techniques for reinforcement learning Sutton et al. (2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'original_lines': 'i on the diagonal, similar to ?. techniques for reinforcement learning ?. The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 14}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': None, 'context_after': 'Aπ(st, at) = I [δt > 0] = ', 'paragraph_idx': 15, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': 'where dθ = (cid:82) t=0 γtp0(s0)(s0 → s | t, π0) ds0 is the discounted state distribution, p0(s) represents the ', 'modified_lines': 'initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps Silver et al. (2014). Aπ(s, a) represents an advantage function Schulman et al. (2016). 
In this work, we use the Positive Temporal Difference (PTD) update proposed by Van Hasselt (2012) for Aπ(s, a): ', 'original_lines': 'initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps ?. Aπ(s, a) represents an advantage function ?. In this work, we use the Positive Temporal Difference (PTD) update proposed by ? for Aπ(s, a): ', 'after_paragraph_idx': None, 'before_paragraph_idx': 15}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'minimize ', 'paragraph_idx': 15, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': 'where Vπ(s) = E is the value function, which gives the expected discounted cumu- lative reward from following policy π starting in state s. PTD has the benefit of being insensitive to the ', 'modified_lines': 'advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of ac- tions that have a positive advantage has been found to increase the stability of learning Van Hasselt (2012). Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'original_lines': 'advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of actions that have a positive advantage has been found to increase the stability of learning ?. Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '3.3 TRANSFER LEARNING', 'after_section': None, 'context_after': '4 PROGRESSIVE LEARNING π0 ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '3.3 TRANSFER LEARNING Given an expert that has solved/mastered a task we want to reuse that expert knowledge in order to learn a ', 'modified_lines': 'new task efficiently. This problem falls in the area of Transfer Learning Pan and Yang (2010). Considering the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the Dωi and Dωi+1 state distributions. 3 Under review as a conference paper at ICLR 2018 Although we focus on the problem of being presented with tasks sequentially, there exist other methods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills. The first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately Parisotto et al. (2015). The curriculum for using this method is shown in Figure 7a were during a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 7b. 
We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 7c. This method works well for both combining learned skills and learning new skills. ', 'original_lines': 'new task efficiently. This problem falls in the area of Transfer Learning ?. Considering the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the Dωi and Dωi+1 state distributions. Although we focus on the problem of being presented with tasks sequentially, there exist other methods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills. The first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately ?. The curriculum for using this method is shown in Figure 1a were during 3 Under review as a conference paper at ICLR 2018 a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 1c. This method works well for both combining learned skills and learning new skills. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '4.2 HIGH LEVEL EXPERIMENT DESIGN The results presented in this work cover a range of tasks that share a similar action space and state space. Our focus is to demonstrate continual learning between related tasks. However, the conceptual framework allows for extensions that would permit differing state spaces, described later in Section: 5.2. 5 RESULTS ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, portions of the trajectories should be generated by the student, this allows the expert to suggest behaviour that will pull the state distribution of the student closer to the expert’s. This is a common problem in ', 'modified_lines': 'learning a model to reproduce a given distribution of trajectories Ross et al. (2010); Bengio et al. (2015); Martinez et al. (2017); Lamb et al. (2016). We use a method similar to the DAgger algorithm Ross et al. 
(2010) which has been found to be useful for distilling policies Parisotto et al. (2015).As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. 4 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'learning a model to reproduce a given distribution of trajectories ????. We use a method similar to the DAgger algorithm ? which has been found to be useful for distilling policies ?.As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 RESULTS', 'after_section': '5 RESULTS', 'context_after': 'traversing various terrains, while also matching a motion capture clip of a natural human walking gait on (a) flat ', 'paragraph_idx': 30, 'before_section': '5 RESULTS', 'context_before': 'traverse. A bipedal character is trained to navigate multiple types of terrain, including flat in (Figure 2a), incline (Figure 2b), steps (Figure 2c), slopes (Figure 2d), gaps (Figure 2e) and a combination of all terrains mixed (Figure 2f) that are not trained on. The goal in these tasks is to maintain a consistent forward velocity ', 'modified_lines': 'flat ground, similar to Peng and van de Panne (2016). The 2D humanoid receives as input both a character and eventually a terrain state representation. A detailed description of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. ', 'original_lines': ' 4 Under review as a conference paper at ICLR 2018 flat ground, similar to ?. The 2D humanoid receives as input both a character and eventually a terrain state representation. A detailed description of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 30}, {'section': '5.2 FEATURE INJECTION', 'after_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_after': '5 ', 'paragraph_idx': 35, 'before_section': '5.2 FEATURE INJECTION', 'context_before': 'terrain. These new terrain features can assist the agent in identifying which task domain it is operating in. We introduce the idea of feature injection for this purpose. We augment a policy with additional input ', 'modified_lines': 'features while allowing it to retain its original functional behaviour similar to Chen et al. (2015). This is ', 'original_lines': 'features while allowing it to retain its original functional behaviour similar to ?. This is achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 34}, {'section': '5.2 FEATURE INJECTION', 'after_section': None, 'context_after': '5.3 DISTILLING MULTIPLE POLICIES ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'the most recent expert but alternates between each environment during training. 
The learning for PLAID is split into two steps, first the TL part in green followed by the distillation part in red. Using TL assists in the learning of new tasks. ', 'modified_lines': ' achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1https : //www.dropbox.com/s/h2objbaz5tv7hre/ContinualLearning.mp4?dl = 0 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'MultiTasker vs PLAID: The MultiTasker may be able to produce a policy that has higher overall average reward but in practice constraints can keep the method from combining skills gracefully. If the reward ', 'modified_lines': '', 'original_lines': 'functions are different between tasks the MultiTasker can favour the task with higher rewards, as this tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. The MultiTasker my also favour tasks that are in general easier than other tasks. We have shown that the distiller can scale better to the number of tasks than the MultiTasker. The tasks used in this analysis could be considered too similar, with respect to state space and reward functions. We believe ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 POLICY DISTILLATION', 'after_section': None, 'context_after': 'PLAID would further outperform the MultiTasker if the tasks were more difficult and the reward functions dissimilar. ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Figure 5: Still frame shots of the pd-biped traversing the mixed environment. ', 'modified_lines': 'functions are different between tasks the MultiTasker can favour the task with higher rewards, as this tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. The MultiTasker my also favour tasks that are in general easier than other tasks. We have shown that the distiller can scale better to the number of tasks than the MultiTasker. The tasks used in this analysis could be considered too similar, with respect to state space and reward functions. We believe ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'For the evaluation of each model on a particular task we use the average reward achieved by the agent over at most 100 seconds of simulation time. We average this over running the agent over a number of randomly ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'that point on. Each training simulation takes approximately 5 hours across 8 threads. For network training we use Stochastic Gradient Decent (SGD) with momentum. During the distillation step we use gradually anneal the probability of selecting an expert action from 1 to 0 over 10, 000 iterations. ', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 18:38:29
|
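The revision records above repeatedly quote Section 3.1 of the paper, which pairs a Gaussian policy with a state-dependent mean against a policy-gradient update weighted by a positive temporal difference (PTD) advantage, A(s, a) = I[delta > 0], and fits the value function by regression on a bootstrapped target. The following is a minimal, illustrative NumPy sketch of that update rule; the linear policy/value parameterizations, learning rates, and helper names (policy_mean, value_fn, ptd_update) are assumptions made for this example, not the authors' implementation.

```python
import numpy as np

# Toy dimensions; the paper uses larger MLPs for the biped controller.
obs_dim, act_dim = 4, 2
rng = np.random.default_rng(0)

W_mu = rng.normal(scale=0.1, size=(act_dim, obs_dim))  # stand-in for the policy mean network
w_v = np.zeros(obs_dim)                                # stand-in for the value network
log_std = np.zeros(act_dim)                            # diagonal covariance, fixed here

def policy_mean(s):
    return W_mu @ s

def value_fn(s):
    return float(w_v @ s)

def ptd_update(s, a, r, s_next, gamma=0.99, lr_pi=1e-3, lr_v=1e-2):
    """One transition of the Sec. 3.1 update: PTD advantage + Gaussian policy gradient."""
    global W_mu, w_v
    delta = r + gamma * value_fn(s_next) - value_fn(s)   # TD error
    advantage = 1.0 if delta > 0 else 0.0                # A(s, a) = I[delta > 0]

    # grad of log N(a | mu(s), diag(exp(2 * log_std))) with respect to the linear mean
    grad_mu = (a - policy_mean(s)) / np.exp(2.0 * log_std)
    W_mu += lr_pi * advantage * np.outer(grad_mu, s)     # ascend the policy gradient

    # value function fitted by regression on the bootstrapped target
    target = r + gamma * value_fn(s_next)
    w_v += lr_v * (target - value_fn(s)) * s

# single toy transition
s, a = rng.normal(size=obs_dim), rng.normal(size=act_dim)
ptd_update(s, a, r=1.0, s_next=rng.normal(size=obs_dim))
```

Gating the policy step on delta > 0 is what the quoted text credits with making the update insensitive to the scale of the advantage function and with stabilizing learning.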
ICLR.cc/2018/Conference
|
HJCXTx-Ab
|
HJn57b-Ab
|
[{'section': '4 PROGRESSIVE LEARNING', 'after_section': '4 PROGRESSIVE LEARNING', 'context_after': 'evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns for both combining learned skills and learning new skills. π0 ', 'paragraph_idx': 18, 'before_section': '4 PROGRESSIVE LEARNING', 'context_before': 'skills. The first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately Parisotto et al. (2015). The curriculum for using this method is shown ', 'modified_lines': 'in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to each task sequentially using TL from the previous, most skilled policy Figure 1c. This method works well ', 'original_lines': 'in Figure 7a were during a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 7b. We attempted to each task sequentially using TL from the previous, most skilled policy Figure 7c. This method works well ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'after_section': None, 'context_after': '4.2 HIGH LEVEL EXPERIMENT DESIGN ', 'paragraph_idx': 27, 'before_section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'context_before': 'by collecting trajectories of the expert policy on a task. Training the student on these trajectories does not always result in robust behaviour. This poor behaviour is because the student experiences a different distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, ', 'modified_lines': 'portions of the trajectories should be generated by the student, this allows the expert to suggest behavior that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning a model to reproduce a given distribution of trajectories Ross et al. (2010); Bengio et al. (2015); Martinez et al. (2017); Lamb et al. (2016). We use a method similar to the DAGGER algorithm Ross et al. (2010) which has been found to be useful for distilling policies Parisotto et al. (2015).As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. ', 'original_lines': 'portions of the trajectories should be generated by the student, this allows the expert to suggest behaviour that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning a model to reproduce a given distribution of trajectories Ross et al. (2010); Bengio et al. (2015); Martinez et al. (2017); Lamb et al. (2016). We use a method similar to the DAgger algorithm Ross et al. (2010) which has been found to be useful for distilling policies Parisotto et al. (2015).As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': 27}, {'section': '5 RESULTS', 'after_section': '5 RESULTS', 'context_after': '(a) flat ', 'paragraph_idx': 30, 'before_section': '5 RESULTS', 'context_before': 'mixed (Figure 2f) that are not trained on. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to Peng and van de Panne (2016). The 2D humanoid receives as input both a character ', 'modified_lines': 'and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed description of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. ', 'original_lines': 'and eventually a terrain state representation. A detailed description of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 30}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '5.1 TRANSFER LEARNING ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'learning new tasks in PLAID with learning new tasks from randomly initialized controller (Scratch). This will demonstrate that knowledge from previous tasks can be effectively transferred after distillation steps. Second, we compare to the MultiTasker, to demonstrate that iterated distillation is effective for retention of ', 'modified_lines': 'learned skills. The MultiTasker is also used as a baseline for comparing learning speed. The results of the PLAID controller are displayed in the accompanying Video 1 ', 'original_lines': 'learned skills. The MultiTasker is also used as a baseline for comparing learning speed. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2', 'after_section': None, 'context_after': '5.3 DISTILLING MULTIPLE POLICIES ', 'paragraph_idx': 34, 'before_section': None, 'context_before': 'split into two steps, first the TL part in green followed by the distillation part in red. Using TL assists in the learning of new tasks. ', 'modified_lines': 'without a local map of the terrain can be combined into a single policy that has new state features for the terrain. These new terrain features can assist the agent in identifying which task domain it is operating in. We introduce the idea of input injection for this purpose. We augment a policy with additional input features while allowing it to retain its original functional behaviour similar to Chen et al. (2015). This is achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. 
', 'original_lines': 'achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'There are some indications that distillation is hindering during the initial few training iterations. We are initializing the network used in distillation with the most recently learning policy after TL. The large ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'solution. This can be seen in Figure 4d, 4e and especially in 4f where PLAID combines the skills faster, and can find higher value policies in practice. PLAID also presents zero-shot training on tasks it has never trained on. In Figure 5 this generalization is shown as the agent navigate across the mixed environment. ', 'modified_lines': '', 'original_lines': 'The results of the controller are also displayed in the accompanying Video 1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 PROGRESSIVE LEARNING', 'after_section': None, 'context_after': 'Transfer Learning: Because we are using an actor-critic learning method we also studied the possibility of using the value functions for TL. We did not discover any empirical evidence that this assisted the learning process. When transferring to a new task the state distribution has changed and the reward function maybe be completely different. This makes it unlikely the value function will be accurate on this new task. Also, value functions are in general easier and faster to learn than policies, implying that it is less important to transfer. We also find that helpfulness of TL depends on not only the task difficulty task but the reward ', 'paragraph_idx': 18, 'before_section': None, 'context_before': '6 DISCUSSION ', 'modified_lines': 'MultiTasker vs PLAID: The MultiTasker may be able to produce a policy that has higher overall average reward, but in practice constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks the MultiTasker can favor the task with higher rewards, as these tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. The MultiTasker may also favor tasks that are in general easier than other tasks. We have shown that the distiller scales better with respect to the number of tasks than the MultiTasker. We believe PLAID would further outperform the MultiTasker if the tasks were more difficult and the reward functions dissimilar. In our evaluation we compare the number of iterations PLAID uses to the number the MultiTasker uses on only the new task, which is not necessarily fair. The MultiTasker gains its benefits from training on the other tasks together. 
If the idea is to reduce the number of simulation samples that are needed to learn new tasks 6 0100000200000300000400000500000600000Iterations05001000150020002500RewardGapsMultiTaskerScratchTransferDistill Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAID on 3 tasks (e) PLAID on 4 tasks (f) PLAID on 5 tasks Figure 4: These figures show the average reward a particular policy achieves over a number of tasks. After learning an expert for flat + incline a new steps task is trained. Figure (a) shows the performance for the MultiTasker and figure (c) for the distiller. The distiller learns the combined tasks fast, however the MultiTasker achieves marginally better average reward over the tasks Figures (b,e) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker and distiller. Last the MultiTasker (c) and PLAID (f) are trained on gaps. (a) (b) (c) (d) (e) (f) (g) Figure 5: Still frame shots of the pd-biped traversing the mixed environment. then the MultiTasker would fall far behind. Distillation is also very efficient with respect to the number of simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioral cloning. We did not perform a study into how effective distillation could be by conservatively collecting data from the simulation in order to keep comparatively similar learning conditions between distillation and the MultiTasker. We believe the another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. 7 Under review as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.1 LIMITATIONS:', 'after_section': None, 'context_after': '6.1 LIMITATIONS: In our transfer learning results we could be over fitting the initial expert for the particular task it was learn- ing. Making it more challenging for the policy to learning a new task, resulting in negative transfer. After learning many new tasks the previous tasks may not receive a large enough potion of the distillation training ', 'paragraph_idx': 45, 'before_section': '6 DISCUSSION', 'context_before': 'is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. ', 'modified_lines': 'Once integrated, the skills for our locomotion tasks are self-selecting based on their context, i.e., the knowl- edge of the upcoming terrain. It may be that other augmentation and distillation strategies are better for situations where the current reward function or a one-hot vector is used to select the currently active expert. ', 'original_lines': 'MultiTasker vs PLAID: The MultiTasker may be able to produce a policy that has higher overall average reward but in practice constraints can keep the method from combining skills gracefully. 
If the reward 1https : //www.dropbox.com/s/h2objbaz5tv7hre/ContinualLearning.mp4?dl = 0 6 0100000200000300000400000500000600000Iterations05001000150020002500RewardGapsMultiTaskerScratchTransferDistill Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAID on 3 tasks (e) PLAID on 4 tasks (f) PLAID on 5 tasks Figure 4: These figures show the average reward a particular policy achieves over a number of tasks. After learning an expert for flat + incline a new steps task is trained. Figure (a) shows the performance for the MultiTasker and figure (c) for the distiller. The distiller learns the combined tasks fast, however the MultiTasker achieves marginally better average reward over the tasks Figures (b,e) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker and distiller. Last the MultiTasker (c) and PLAID (f) are trained on gaps. (a) (b) (c) (d) (e) (f) (g) Figure 5: Still frame shots of the pd-biped traversing the mixed environment. functions are different between tasks the MultiTasker can favour the task with higher rewards, as this tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. The MultiTasker my also favour tasks that are in general easier than other tasks. We have shown that the distiller can scale better to the number of tasks than the MultiTasker. The tasks used in this analysis could be considered too similar, with respect to state space and reward functions. We believe PLAID would further outperform the MultiTasker if the tasks were more difficult and the reward functions dissimilar. In our evaluation we compare the number of iterations PLAID uses to the number the MultiTasker uses on only the new task, this is not necessarily fair. The MultiTasker gains its benefits from training on the other tasks together. If the idea is to reduce the number of simulation samples that are needed to learn new tasks then the MultiTasker would fall far behind. Distillation is also very efficient with respect to the number of 7 Under review as a conference paper at ICLR 2018 simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioral cloning. We did not perform a study into how effective distillation could be by conservatively collecting data from the simulation in order to keep comparatively similar learning conditions between distillation and the MultiTasker. We believe the another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 44}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. (2017). ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Heess, N., Wayne, G., Tassa, Y., Lillicrap, T. P., Riedmiller, M. A., and Silver, D. (2016). Learning and transfer of modulated locomotor controllers. CoRR, abs/1610.05182. 
', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Tessler, C., Givony, S., Zahavy, T., Mankowitz, D. J., and Mannor, S. (2016). A Deep Hierarchical Approach ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Teh, Y. W., Bapst, V., Czarnecki, W. M., Quan, J., Kirkpatrick, J., Hadsell, R., Heess, N., and Pascanu, R. (2017). Distral: Robust multitask reinforcement learning. arXiv preprint arXiv:1707.04175. ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 19:05:55
|
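Several of the diffs above concern Section 4.1, where the integration step regresses a student policy onto the action distributions of one or more experts while the probability of executing the expert's action is annealed from 1 to 0 (a DAGGER-style scheme; the critic is fitted in the same step, which is omitted here for brevity). The sketch below is a hedged, self-contained toy of that loop: the linear "experts", the next_state placeholder, and the single-step least-squares update stand in for the paper's neural networks and physics simulation and are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, n_tasks = 4, 2, 3

# Hypothetical stand-ins: each expert is a fixed linear controller, the student starts at zero.
experts = [rng.normal(size=(act_dim, obs_dim)) for _ in range(n_tasks)]
W_student = np.zeros((act_dim, obs_dim))

def next_state(task, action):
    """Placeholder for the physics simulation; here the executed action is ignored,
    while in the real setup it determines which states the student actually visits."""
    return rng.normal(size=obs_dim)

def distill(iters=10_000, lr=1e-2):
    global W_student
    s = rng.normal(size=obs_dim)
    for it in range(iters):
        beta = 1.0 - it / iters                  # expert-control probability annealed 1 -> 0
        task = int(rng.integers(n_tasks))        # alternate between the task environments
        a_expert = experts[task] @ s             # the matching expert labels the visited state
        a_exec = a_expert if rng.random() < beta else W_student @ s
        # stochastic least-squares step: regress the student onto the expert's action
        W_student += lr * np.outer(a_expert - W_student @ s, s)
        s = next_state(task, a_exec)
    return W_student

distill(iters=1_000)
```

Letting the student generate part of each trajectory is the point of the annealing: it pulls the visited state distribution toward the student's own, which the quoted text identifies as the reason plain behavioral cloning of expert trajectories is not robust.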
ICLR.cc/2018/Conference
|
HJn57b-Ab
|
S1jtuW-C-
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'we develop and evaluate an approach for learning multi-skilled movement repertoires. In what follows, we refer to the proposed method as PLAID: Progressive Learning and Integration via Distillation. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'As they gain experience, humans develop rich repertoires of motion skills that are useful in different contexts and environments. Recent advances in reinforcement learning provide an opportunity to understand how motion repertoires can best be learned, recalled, and augmented. Inspired by studies on the development ', 'modified_lines': 'and recall of movement patterns useful for different locomotion contexts (Roemmich and Bastian, 2015), ', 'original_lines': 'and recall of movement patterns useful for different locomotion contexts Roemmich and Bastian (2015), ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Distillation refers to the problem of combining the policies of one or more experts in order to create one single controller that can perform the tasks of a set of experts. It can be cast as a supervised regression problem where the objective is to learn a model that matches the output distributions of all expert poli- expert is not given, it is less clear how to learn the new task while successfully integrating this new skill in the pre-existing repertoire of the control policy for an agent. One well-known technique in machine learn- there can be negative transfer wherein a previously-trained model can take longer to learn a new task via a new skill, the control policy should not forget how to perform old skills. 1 ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'injection. Understanding the time course of sensorimotor learning in human motor control is an open research prob- ', 'modified_lines': 'lem (Wolpert and Flanagan, 2016) that exists concurrently with recent advances in deep reinforcement learning. Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forget- ting, and scalability are all in play for both human motor control models and the learning curricula proposed in reinforcement learning. While the development of hierarchical models for skills offers one particular so- lution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. cies (Parisotto et al., 2015; Teh et al., 2017; Rusu et al., 2015). However, given a new task for which an ing to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) (Pan and Yang, 2010), which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; fine-tuning than would a randomly-initialized model (Rajendran et al., 2015). Additionally, while learning ', 'original_lines': 'lem Wolpert and Flanagan (2016) that exists concurrently with recent advances in deep reinforcement learn- ing. 
Issues of generalization, context-dependent recall, transfer or "savings" in fast learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in rein- forcement learning. While the development of hierarchical models for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. cies Parisotto et al. (2015); Teh et al. (2017); Rusu et al. (2015). However, given a new task for which an ing to significantly improve sample efficiency across similar tasks is to use Transfer Learning (TL) Pan and Yang (2010), which seeks to reuse knowledge learned from solving a previous task to efficiently learn a new task. However, transferring knowledge from previous tasks to new tasks may not be straightforward; fine-tuning than would a randomly-initialized model Rajendran et al. (2015). Additionally, while learning ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 4}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'Deep Reinforcement Learning (DRL) for continuous control environments. Distillation Recent works have explored the problem of combining multiple expert policies in the re- inforcement learning setting. A popular approach uses supervised learning to combine each policy by the case of complex continuous action space tasks and make use of them as building block. Transfer Learning Transfer learning exploits the structure learned from a previous task in learning a new setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta- 3 FRAMEWORK ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '2 RELATED WORK ', 'modified_lines': 'Transfer learning and distillation are of broad interest in machine learning and RL (Pan and Yang, 2010; Taylor and Stone, 2009; Teh et al., 2017). Here we outline some of the most relevant work in the area of regression over the action distribution. This approach yields model compression (Rusu et al., 2015) as well as a viable method for multi-task policy transfer (Parisotto et al., 2015) on discrete action domains including the Arcade Learning Environment (Bellemare et al., 2013). We adopt these techniques and extend them for task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The concept of appending additional network structure while keeping the previous structure to reduce catas- trophic forgetting has worked well on Atari games (Rusu et al., 2015; Parisotto et al., 2015; Rusu et al., 2016; Chen et al., 2015) Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, (Shin et al., 2017; Li and Hoiem, 2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters (Kirkpatrick et al., 2017). A different approach to combining policies is to use a hierarchical structure (Tessler et al., 2016). In this learning to achieve control policies that can quickly adapt their behavior according to current rewards (Finn et al., 2017). 
This work is demonstrated on parameterized task domains. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks (Tessler et al., 2016) This allows for the substitution of network modules for different robot types over a similar tasks (Devin et al., 2017). Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a complex motor control problem, defining a decomposition of the overall task into smaller tasks (Kulkarni et al., 2016; Heess et al., 2016; Peng et al., 2017) While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions that can be shaped by prior mocap clips (Merel et al., 2017), and that these can then be integrated in a hierarchical controller. ', 'original_lines': 'Transfer learning and distillation are of broad interest in machine learning and RL Pan and Yang (2010); Taylor and Stone (2009); Teh et al. (2017). Here we outline some of the most relevant work in the area of regression over the action distribution. This approach yields model compression Rusu et al. (2015) as well as a viable method for multi-task policy transfer Parisotto et al. (2015) on discrete action domains including the Arcade Learning Environment Bellemare et al. (2013). We adopt these techniques and extend them for task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The con- cept of appending additional network structure while keeping the previous structure to reduce catastrophic forgetting has worked well on Atari games Rusu et al. (2015); Parisotto et al. (2015); Rusu et al. (2016); Chen et al. (2015) Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, Shin et al. (2017); Li and Hoiem (2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters Kirkpatrick et al. (2017). A different approach to combining policies is to use a hierarchical structure Tessler et al. (2016). In this learning to achieve control policies that can quickly adapt their behavior according to current rewards Finn et al. (2017). This work is demonstrated on parameterized task domains. Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks Tessler et al. (2016) This allows for the substitution of network modules for different robot types over a similar tasks Devin et al. (2017). Other methods use Hierarchical Reinforcement Learning (HRL) as a method for simplifying a com- plex motor control problem, defining a decomposition of the overall task into smaller tasks Kulkarni et al. (2016); Heess et al. (2016); Peng et al. (2017) While these methods examine knowledge transfer, they do not examine the reintegration of policies for related tasks and the associated problems such as catastrophic forgetting. Recent work examines learned motions that can be shaped by prior mocap clips Merel et al. (2017), and that these can then be integrated in a hierarchical controller. 
', 'after_paragraph_idx': 8, 'before_paragraph_idx': None}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'To optimize our policy, we use stochastic policy gradient methods, which are well-established family of ∇θπ J(π(·|θπ)) = ', 'paragraph_idx': 14, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': 'where Σ is a diagonal covariance matrix with entries σ2 ', 'modified_lines': 'i on the diagonal, similar to (Peng et al., 2017). techniques for reinforcement learning (Sutton et al., 2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'original_lines': 'i on the diagonal, similar to Peng et al. (2017). techniques for reinforcement learning Sutton et al. (2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 14}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': None, 'context_after': 'Aπ(st, at) = I [δt > 0] = ', 'paragraph_idx': 15, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': 'where dθ = (cid:82) t=0 γtp0(s0)(s0 → s | t, π0) ds0 is the discounted state distribution, p0(s) represents the initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting ', 'modified_lines': 'at state s0 and following the policy π(a, s|θπ) for T steps (Silver et al., 2014). Aπ(s, a) represents an advantage function (Schulman et al., 2016). In this work, we use the Positive Temporal Difference (PTD) update proposed by (Van Hasselt, 2012) for Aπ(s, a): ', 'original_lines': 'at state s0 and following the policy π(a, s|θπ) for T steps Silver et al. (2014). Aπ(s, a) represents an advantage function Schulman et al. (2016). In this work, we use the Positive Temporal Difference (PTD) update proposed by Van Hasselt (2012) for Aπ(s, a): ', 'after_paragraph_idx': None, 'before_paragraph_idx': 15}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': 'Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'is the value function, which gives the expected discounted cumu- lative reward from following policy π starting in state s. PTD has the benefit of being insensitive to the advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of ac- ', 'modified_lines': 'tions that have a positive advantage has been found to increase the stability of learning (Van Hasselt, 2012). ', 'original_lines': 'tions that have a positive advantage has been found to increase the stability of learning Van Hasselt (2012). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 TRANSFER LEARNING', 'after_section': '3.3 TRANSFER LEARNING', 'context_after': 'the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. 
When the expert is ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '3.3 TRANSFER LEARNING Given an expert that has solved/mastered a task we want to reuse that expert knowledge in order to learn a ', 'modified_lines': 'new task efficiently. This problem falls in the area of Transfer Learning (Pan and Yang, 2010). Considering ', 'original_lines': 'new task efficiently. This problem falls in the area of Transfer Learning Pan and Yang (2010). Considering ', 'after_paragraph_idx': 17, 'before_paragraph_idx': None}, {'section': '4 PROGRESSIVE LEARNING', 'after_section': '4 PROGRESSIVE LEARNING', 'context_after': 'in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were ', 'paragraph_idx': 18, 'before_section': '4 PROGRESSIVE LEARNING', 'context_before': 'for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills. The first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster ', 'modified_lines': 'than learning each task separately (Parisotto et al., 2015). The curriculum for using this method is shown ', 'original_lines': 'than learning each task separately Parisotto et al. (2015). The curriculum for using this method is shown ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'after_section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'context_after': 'actor-critic method, we also perform regression on the critic by fitting both in the same step. 4.2 HIGH LEVEL EXPERIMENT DESIGN ', 'paragraph_idx': 27, 'before_section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'context_before': 'distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, portions of the trajectories should be generated by the student, this allows the expert to suggest behavior that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning ', 'modified_lines': 'a model to reproduce a given distribution of trajectories (Ross et al., 2010; Bengio et al., 2015; Martinez et al., 2017; Lamb et al., 2016). We use a method similar to the DAGGER algorithm (Ross et al., 2010) which has been found to be useful for distilling policies (Parisotto et al., 2015).As our RL algorithm is an ', 'original_lines': 'a model to reproduce a given distribution of trajectories Ross et al. (2010); Bengio et al. (2015); Martinez et al. (2017); Lamb et al. (2016). We use a method similar to the DAGGER algorithm Ross et al. (2010) which has been found to be useful for distilling policies Parisotto et al. (2015).As our RL algorithm is an ', 'after_paragraph_idx': 27, 'before_paragraph_idx': 27}, {'section': '5 RESULTS', 'after_section': '5 RESULTS', 'context_after': 'and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. 
Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. ', 'paragraph_idx': 30, 'before_section': '5 RESULTS', 'context_before': 'incline (Figure 2b), steps (Figure 2c), slopes (Figure 2d), gaps (Figure 2e) and a combination of all terrains mixed (Figure 2f) that are not trained on. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on ', 'modified_lines': 'flat ground, similar to (Peng and van de Panne, 2016). The 2D humanoid receives as input both a character ', 'original_lines': 'flat ground, similar to Peng and van de Panne (2016). The 2D humanoid receives as input both a character ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 30}, {'section': '5.2', 'after_section': '5.2', 'context_after': 'by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the ', 'paragraph_idx': 35, 'before_section': '5.2', 'context_before': 'terrain. These new terrain features can assist the agent in identifying which task domain it is operating in. We introduce the idea of input injection for this purpose. We augment a policy with additional input features ', 'modified_lines': 'while allowing it to retain its original functional behaviour similar to (Chen et al., 2015). This is achieved ', 'original_lines': 'while allowing it to retain its original functional behaviour similar to Chen et al. (2015). This is achieved ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 34}]
|
2017-10-27 19:26:59
|
ICLR.cc/2018/Conference
|
S1jtuW-C-
|
BklpZzb0b
|
[{'section': '4 PROGRESSIVE LEARNING', 'after_section': '4 PROGRESSIVE LEARNING', 'context_after': 'of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately (Parisotto et al., 2015). The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to learn ', 'paragraph_idx': 18, 'before_section': '4 PROGRESSIVE LEARNING', 'context_before': 'Although we focus on the problem of being presented with tasks sequentially, there exist other methods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple ', 'modified_lines': 'skills, the first being a controller that learns multiple tasks at the same time (MultiTasker), where a number ', 'original_lines': 'skills. The first being a controller that learns multiple tasks at the same time (MultiTasker), where a number ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': '3 FRAMEWORK', 'after_section': None, 'context_after': 'In the integration (distillation) step, we are interested in combining all past skills (π0, . . . , πi) with the distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, a model to reproduce a given distribution of trajectories (Ross et al., 2010; Bengio et al., 2015; Martinez et al., 2017; Lamb et al., 2016). We use a method similar to the DAGGER algorithm (Ross et al., 2010) 4.2 HIGH LEVEL EXPERIMENT DESIGN The results presented in this work cover a range of tasks that share a similar action space and state space. allows for extensions that would permit differing state spaces, described later in Section: 5.2. 4 ', 'paragraph_idx': 12, 'before_section': None, 'context_before': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION ', 'modified_lines': 'In this section, we detail our proposed learning framework for continual policy transfer and distillation (PLAID). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here transfer can be beneficial if the task structure is somewhat similar to previous tasks ωi. We adopt the TL strategy of using an existing policy network and fine-tuning it to a new task. Since we are not concerned with retaining previous skills in this step, we can update this policy without concern for forgetting. As the agent learns it will develop more skills and the addition of every new skill can increase the probability of transferring knowledge to assist the learning of the next skill. newly acquired skill πi+1. Traditional approaches have used policy regression where data is generated by collecting trajectories of the expert policy on a task. Training the student on these trajectories does not always result in robust behaviour. This poor behaviour is caused by the student experiences a different portions of the trajectories should be generated by the student. This allows the expert to suggest behavior that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning which is useful for distilling policies (Parisotto et al., 2015). As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. Our focus is to demonstrate continual learning between related tasks. 
In addition, the conceptual framework ', 'original_lines': 'In this section we detail our proposed learning framework for continual policy transfer and distillation (PLAID). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here transfer can be beneficial if the task structure is somewhat similar to previous tasks ωi. We adopt the TL strategy of using an existing policy network and fine-tuning it to a new task. Since we are not concerned with retaining previous skills in this step, we can update this policy without concern for forgetting. As the agent learns it will develop more skills, the addition of every new skill can increase the probability of transferring knowledge to assist the learning of the next skill. newly acquired skill πi+1. Traditional approaches have used policy regression where data is generated by collecting trajectories of the expert policy on a task. Training the student on these trajectories does not always result in robust behaviour. This poor behaviour is because the student experiences a different portions of the trajectories should be generated by the student, this allows the expert to suggest behavior that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning which has been found to be useful for distilling policies (Parisotto et al., 2015).As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. Our focus is to demonstrate continual learning between related tasks. However, the conceptual framework ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 RESULTS', 'after_section': '5 RESULTS', 'context_after': 'incline (Figure 2b), steps (Figure 2c), slopes (Figure 2d), gaps (Figure 2e) and a combination of all terrains (a) flat ', 'paragraph_idx': 30, 'before_section': None, 'context_before': '5 RESULTS In this experiment, our set of tasks consists of 5 different terrains that a 2D humanoid walker learns to ', 'modified_lines': 'traverse. A bipedal character is trained to navigate multiple types of terrain including flat in (Figure 2a), mixed (Figure 2f) on which agents are trained. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to (Peng and van de Panne, 2016). The 2D humanoid receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally- spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed description of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. ', 'original_lines': 'traverse. A bipedal character is trained to navigate multiple types of terrain, including flat in (Figure 2a), mixed (Figure 2f) that are not trained on. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to (Peng and van de Panne, 2016). 
The 2D humanoid receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed description of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. ', 'after_paragraph_idx': 30, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'learning new tasks in PLAID with learning new tasks from randomly initialized controller (Scratch). This will demonstrate that knowledge from previous tasks can be effectively transferred after distillation steps. 5.1 TRANSFER LEARNING ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'Figure 2: The environments used to evaluate PLAID. ', 'modified_lines': 'We evaluate our approach against two baselines. First, we compare the above learning curriculum from Second, we compare to the MultiTasker to demonstrate that iterated distillation is effective for the retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. The results of the PLAID controller are displayed in the accompanying Video 1 ', 'original_lines': 'We evaluate our approach against two baselines. Firstly, we compare the above learning curriculum from Second, we compare to the MultiTasker, to demonstrate that iterated distillation is effective for retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. The results of the PLAID controller are displayed in the accompanying Video 1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1https : //www.dropbox.com/s/kbb4145yd1s9s3p/P rogresiveLearning.mp4?dl = 0 5 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'number of tasks it must learn at the same time. While PLAID learns the new tasks faster and is able to integrate the new skill required to solve the task robustly. ', 'modified_lines': '', 'original_lines': '5.2 INPUT INJECTION An appealing property of using distillation in PLAID is that the combined policy model need not resemble that of the individual expert controllers. For example, two different experts lacking state features and trained ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2', 'after_section': None, 'context_after': 'without a local map of the terrain can be combined into a single policy that has new state features for the We introduce the idea of input injection for this purpose. We augment a policy with additional input features while allowing it to retain its original functional behaviour similar to (Chen et al., 2015). This is achieved ', 'paragraph_idx': 37, 'before_section': '5.2', 'context_before': 'is showing faster learning after adding an additional skill and MultiTasker failing to learn the new task. The distiller initializes its policy with the most recently learned expert. The MultiTasker is also initialized from the most recent expert but alternates between each environment during training. 
The learning for PLAID is ', 'modified_lines': 'split into two steps, with TL (in green) going first followed by the distillation part (in red). Using TL assists in the learning of new tasks. 5.2 INPUT INJECTION An appealing property of using distillation in PLAID is that the combined policy model need not resemble that of the individual expert controllers. For example, two different experts lacking state features and trained terrain. These new terrain features can assist the agent the task domain in which it operates. ', 'original_lines': 'split into two steps, first the TL part in green followed by the distillation part in red. Using TL assists in the learning of new tasks. terrain. These new terrain features can assist the agent in identifying which task domain it is operating in. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}, {'section': '5.3 DISTILLING MULTIPLE POLICIES', 'after_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_after': 'with respect to the number of tasks. When training the MultiTasker over two or even three tasks (Figure 4a) the method displays good results, however when learning a fourth or more tasks the method struggles, as shown in Figure 4b and 4b. Part of the reason for this struggle is when new tasks are added the MultiTasker increasingly complex resulting in the MultiTasker favouring easier tasks. Using PLAID to combine the skills of many policies appears to scale better with respect to the number of skills being integrated. This is likely because distillation is a semi-supervised method which is more stable than the un-supervised RL initializing the network used in distillation with the most recently learning policy after TL. The large change in the initial state distribution from the previous seen distribution during TL could be causing larger gradients to appear, disrupting some of the structure learned during the TL step, shown in Figure 4d and ', 'paragraph_idx': 38, 'before_section': None, 'context_before': '5.3 DISTILLING MULTIPLE POLICIES ', 'modified_lines': 'Training over multiple tasks at the same time my help the agent learn skills quicker, but this may not scale has to make trade-offs between more tasks to maximizes. As more tasks are added, this trade-off becomes solution. This can be seen in Figure 4d, 4e and especially in 4f where PLAID combines the skills faster and can find higher value policies in practice. PLAID also presents zero-shot training on tasks which it has never trained. In Figure 5 this generalization is shown as the agent navigate across the mixed environment. There are some indications that distillation is hindering training during the initial few iterations. We are ', 'original_lines': 'Training over multiple tasks at the same time my help the agent learn skills quicker but this may not scale has to make trade-offs between more tasks to maximizes. As more tasks are added this trade-off becomes solution. This can be seen in Figure 4d, 4e and especially in 4f where PLAID combines the skills faster, and can find higher value policies in practice. PLAID also presents zero-shot training on tasks it has never trained on. In Figure 5 this generalization is shown as the agent navigate across the mixed environment. There are some indications that distillation is hindering during the initial few training iterations. We are ', 'after_paragraph_idx': 38, 'before_paragraph_idx': None}, {'section': '3.2 POLICY DISTILLATION', 'after_section': None, 'context_after': 'may receive higher advantage. 
It is also a non-trivial task to normalize the reward functions for each task We have shown that the distiller scales better with respect to the number of tasks than the MultiTasker. We functions dissimilar. 6 ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'MultiTasker vs PLAID: The MultiTasker may be able to produce a policy that has higher overall average reward, but in practice constraints can keep the method from combining skills gracefully. If the reward ', 'modified_lines': 'functions are different between tasks, the MultiTasker can favor this task with higher rewards, as these tasks in order to combine them. The MultiTasker may also favor tasks that are easier than other tasks in general. expect PLAID would further outperform the MultiTasker if the tasks were more difficult and the reward ', 'original_lines': 'functions are different between tasks the MultiTasker can favor the task with higher rewards, as these tasks in order to combine them. The MultiTasker may also favor tasks that are in general easier than other tasks. believe PLAID would further outperform the MultiTasker if the tasks were more difficult and the reward In our evaluation we compare the number of iterations PLAID uses to the number the MultiTasker uses on only the new task, which is not necessarily fair. The MultiTasker gains its benefits from training on the other tasks together. If the idea is to reduce the number of simulation samples that are needed to learn new tasks ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 DISCUSSION', 'after_section': '6 DISCUSSION', 'context_after': 'of using the value functions for TL. We did not discover any empirical evidence that this assisted the 7 Under review as a conference paper at ICLR 2018 6.1 LIMITATIONS: ', 'paragraph_idx': 40, 'before_section': None, 'context_before': 'Figure 5: Still frame shots of the pd-biped traversing the mixed environment. ', 'modified_lines': 'In our evaluation we compare the number of iterations PLAID uses to the number the MultiTasker uses on only the new task, which is not necessarily fair. The MultiTasker gains its benefits from training on the other tasks together. If the idea is to reduce the number of simulation samples that are needed to learn new tasks then the MultiTasker would fall far behind. Distillation is also very efficient with respect to the number of simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioral cloning. We expect another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. Transfer Learning: Because we are using an actor-critic learning method, we also studied the possibility learning process. When transferring to a new task, the state distribution has changed and the reward function maybe be completely different. This makes it unlikely that the value function will be accurate on this new task. In addition, value functions are in general easier and faster to learn than policies, implying that value function reuse is less important to transfer. We also find that helpfulness of TL depends on not only the task difficulty task but the reward function as well. Two tasks may overlap in state space but the area they overlap could be easily reachable. In this case TL may not give significant benefit because the overall RL problem is easy. 
The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. ', 'original_lines': 'then the MultiTasker would fall far behind. Distillation is also very efficient with respect to the number of simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioral cloning. We did not perform a study into how effective distillation could be by conservatively collecting data from the simulation in order to keep comparatively similar learning conditions between distillation and the MultiTasker. We believe the another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. Transfer Learning: Because we are using an actor-critic learning method we also studied the possibility learning process. When transferring to a new task the state distribution has changed and the reward function maybe be completely different. This makes it unlikely the value function will be accurate on this new task. Also, value functions are in general easier and faster to learn than policies, implying that it is less important to transfer. We also find that helpfulness of TL depends on not only the task difficulty task but the reward function as well. Two tasks may overlap in state space but the area they overlap could be easily reachable. In this case TL may not give significant benefit because the overall RL problem is easy. The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. ', 'after_paragraph_idx': 41, 'before_paragraph_idx': None}]
|
2017-10-27 20:06:15
|
ICLR.cc/2018/Conference
|
BklpZzb0b
|
BkKXCq_0b
|
[{'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'at ∼ π(at | st, θπ) = N (µ(st | θµ), Σ) Σ = diag{σ2 i } (3) ∇θπ J(π(·|θπ)) = ', 'paragraph_idx': 13, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': '(2) ', 'modified_lines': 'Our policy models a Gaussian distribution with a mean state dependent mean, µθt(st). Thus, our stochastic policy may be formulated as follows: where Σ is a diagonal covariance matrix with entries σ2 2017). i on the diagonal, similar to (Peng et al., To optimize our policy, we use stochastic policy gradient methods, which are well-established family of techniques for reinforcement learning (Sutton et al., 2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'original_lines': 'Our policy models a Gaussian distribution with a mean state dependent mean, µθt(st). Thus, our stochastic policy may be formulated as follows: where Σ is a diagonal covariance matrix with entries σ2 i on the diagonal, similar to (Peng et al., 2017). To optimize our policy, we use stochastic policy gradient methods, which are well-established family of techniques for reinforcement learning (Sutton et al., 2000). The gradient of the expected reward with respect to the policy parameters, ∇θπ J(π(·|θπ)), is given by: ', 'after_paragraph_idx': 13, 'before_paragraph_idx': 13}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': None, 'context_after': 'Aπ(st, at) = I [δt > 0] = ', 'paragraph_idx': 14, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': '(cid:80)T where dθ = (cid:82) ', 'modified_lines': 't=0 γtp0(s0)(s0 → s | t, π0) ds0 is the discounted state distribution, p0(s) rep- resents the initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps (Silver et al., 2014). Aπ(s, a) represents an advantage function (Schulman et al., 2016). In this work, we use the Positive Temporal Difference (PTD) update proposed by (Van Hasselt, 2012) for Aπ(s, a): ', 'original_lines': 't=0 γtp0(s0)(s0 → s | t, π0) ds0 is the discounted state distribution, p0(s) represents the initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps (Silver et al., 2014). Aπ(s, a) represents an advantage function (Schulman et al., 2016). In this work, we use the Positive Temporal Difference (PTD) update proposed by (Van Hasselt, 2012) for Aπ(s, a): ', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'minimize ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 't=0 γtrt | s0 = s where Vπ(s) = E ', 'modified_lines': 'is the value function, which gives the expected discounted cumulative reward from following policy π starting in state s. PTD has the benefit of being insen- sitive to the advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of actions that have a positive advantage has been found to increase the stability of learning (Van Hasselt, 2012). 
Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: 3 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'is the value function, which gives the expected discounted cumu- lative reward from following policy π starting in state s. PTD has the benefit of being insensitive to the advantage function scale. Furthermore, limiting policy updates in this way to be only in the direction of ac- tions that have a positive advantage has been found to increase the stability of learning (Van Hasselt, 2012). Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'after_paragraph_idx': 14, 'before_paragraph_idx': None}, {'section': '3.2 POLICY DISTILLATION', 'after_section': None, 'context_after': '3.3 TRANSFER LEARNING 4 PROGRESSIVE LEARNING π0 ', 'paragraph_idx': 15, 'before_section': None, 'context_before': '3.2 POLICY DISTILLATION ', 'modified_lines': 'Given a set of expert agents that have solved/mastered different tasks we may want to combine the skills of these different experts into a single multi-skilled agent. This process is referred to as distillation. Distillation does not necessarily produce an optimal mix of the given experts but instead tries to produce an expert that best matches the action distributions produced by all experts. This method functions independent of the reward functions used to train each expert. Distillation also scales well with respect to the number of tasks or experts that are being combined. Given an expert that has solved/mastered a task we want to reuse that expert knowledge in order to learn a new task efficiently. This problem falls in the area of Transfer Learning (Pan & Yang, 2010). Considering the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the Dωi and Dωi+1 state distributions. Although we focus on the problem of being presented with tasks sequentially, there exist other meth- ods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills, the first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately (Parisotto et al., 2015). The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned to- gether. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 1c. 
This method works well for both combining learned skills and learning new skills. 4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION In this section, we detail our proposed learning framework for continual policy transfer and distil- lation (PLAID). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here transfer can be beneficial if the task structure is somewhat similar to previous tasks ωi. We adopt the TL strategy of using an existing policy network and fine-tuning it to a new task. Since we are not concerned with retaining previous skills in this step, we can update this policy without concern for forgetting. As the agent learns it will develop more skills and the addition of every new skill can increase the probability of transferring knowledge to assist the learning of the next skill. In the integration (distillation) step, we are interested in combining all past skills (π0, . . . , πi) with the newly acquired skill πi+1. Traditional approaches have used policy regression where data is generated by collecting trajectories of the expert policy on a task. Training the student on these trajectories does not always result in robust behaviour. This poor behaviour is caused by the student experiences a different distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, portions of the trajectories should be generated by the student. This allows the expert to suggest behavior that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning a model to reproduce a given distribution of 4 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'Given a set of expert agents that have solved/mastered different tasks we may want to combine the skills of these different experts into a single multi-skilled agent. This process is referred to as distillation. Distillation does not necessarily produce an optimal mix of the given experts but instead tries to produce an expert that best matches the action distributions produced by all experts. This method functions independent of the reward functions used to train each expert. Distillation also scales well with respect to the number of tasks or experts that are being combined. Given an expert that has solved/mastered a task we want to reuse that expert knowledge in order to learn a new task efficiently. This problem falls in the area of Transfer Learning (Pan and Yang, 2010). Considering the state distribution expert is skilled at solving, (Dωi the source distribution) it can be advantageous to start learning a new, target task ωi+1 with target distribution Dωi+1 using assistance from the expert. The agent learning how to solve the target task with domain Dωi+1 is referred to as the student. When the expert is used to assist the student in learning the target task it can be referred to as the teacher. The success of these methods are dependent on overlap between the Dωi and Dωi+1 state distributions. 3 Under review as a conference paper at ICLR 2018 Although we focus on the problem of being presented with tasks sequentially, there exist other methods for learning a multi-skilled character. We considered 3 overall integration methods for learning multiple skills, the first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. 
It has been shown that learning many tasks together can be faster than learning each task separately (Parisotto et al., 2015). The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 1c. This method works well for both combining learned skills and learning new skills. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 RESULTS', 'after_section': None, 'context_after': '4.2 HIGH LEVEL EXPERIMENT DESIGN 5 RESULTS (a) flat ', 'paragraph_idx': 29, 'before_section': None, 'context_before': '(c) PLAID ', 'modified_lines': 'Figure 1: Different curriculum learning process. The red box with a D in it denotes a distillation step that combines policies. Each gray box denotes one iteration of learning a new policy. The larger red boxes with an Lterrain−type denotes a learning step where a new skill is learned. trajectories (Ross et al., 2010; Bengio et al., 2015; Martinez et al., 2017; Lamb et al., 2016). We use a method similar to the DAGGER algorithm (Ross et al., 2010) which is useful for distilling policies (Parisotto et al., 2015). As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. The results presented in this work cover a range of tasks that share a similar action space and state space. Our focus is to demonstrate continual learning between related tasks. In addition, the con- ceptual framework allows for extensions that would permit differing state spaces, described later in Section: 5.2. In this experiment, our set of tasks consists of 5 different terrains that a 2D humanoid walker learns to traverse. A bipedal character is trained to navigate multiple types of terrain including flat in (Figure 4a), incline (Figure 4b), steps (Figure 4c), slopes (Figure 4d), gaps (Figure 4e) and a com- bination of all terrains mixed (Figure 4f) on which agents are trained. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to (Peng & van de Panne, 2016). The 2D humanoid receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed de- scription of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. We evaluate our approach against two baselines. First, we compare the above learning curricu- lum from learning new tasks in PLAID with learning new tasks from randomly initialized con- troller (Scratch). 
This will demonstrate that knowledge from previous tasks can be effectively trans- ferred after distillation steps. Second, we compare to the MultiTasker to demonstrate that iterated distillation is effective for the retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. The results of the PLAID controller are displayed in the accompany- ing Video 1 1https : //www.dropbox.com/s/kbb4145yd1s9s3p/P rogresiveLearning.mp4?dl = 0 5 Under review as a conference paper at ICLR 2018 5.1 TRANSFER LEARNING First, the pd-biped is trained to produce a walking motion on flat ground (flat). In Figure 2a PLAID is compared to the two baselines for training on incline. The Scratch method learns the slowest as it is given no information about how to perform similar skills. The first MultiTasker for the incline task is initialized from a terrain injected controller that was trained to walk on flat ground. Any subsequent MultiTasker is initialized from the final MultiTasker model of the preceding task. This controller has to learn multiple tasks together, which can complicate the learning process, as simulation for each task is split across the training and the overall RL task can be challenging. This is in contrast to using PLAID, that is also initialized with the same policy trained on flat, that will integrate skills together after each new skill is learned. In Figure 2b the MultiTasker is learning the new task (steps) with similar speed to PLAID. However, after adding more tasks the MultiTasker is beginning to struggle in Figure 2c and fails in in Figure 2d, with the number of tasks it must learn at the same time. While PLAID learns the new tasks faster and is able to integrate the new skill required to solve the task robustly. (a) incline (b) steps (c) slopes (d) gaps Figure 2: TL comparison over each of the environments. The learning for PLAID is split into two steps, with TL (in green) going first followed by the distillation part (in red). Using TL assists in the learning of new tasks. 5.2 INPUT INJECTION An appealing property of using distillation in PLAID is that the combined policy model need not resemble that of the individual expert controllers. For example, two different experts lacking state features and trained without a local map of the terrain can be combined into a single policy that has new state features for the terrain. These new terrain features can assist the agent the task domain in which it operates. We introduce the idea of input injection for this purpose. We augment a policy with additional input features while allowing it to retain its original functional behaviour similar to (Chen et al., 2015). This is achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. 5.3 DISTILLING MULTIPLE POLICIES Training over multiple tasks at the same time my help the agent learn skills quicker, but this may not scale with respect to the number of tasks. When training the MultiTasker over two or even three tasks (Figure 3a) the method displays good results, however when learning a fourth or more tasks the method struggles, as shown in Figure 3b and 3b. 
Part of the reason for this struggle is when new tasks are added the MultiTasker has to make trade-offs between more tasks to maximizes. As more tasks are added, this trade-off becomes increasingly complex resulting in the MultiTasker favouring easier tasks. Using PLAID to combine the skills of many policies appears to scale better with respect to the number of skills being integrated. This is likely because distillation is a semi-supervised method which is more stable than the un-supervised RL solution. This can be seen in Figure 3d, 3e and especially in 3f where PLAID combines the skills faster and can find higher value policies in practice. PLAID also presents zero-shot training on tasks which it has never trained. In Figure 5 this generalization is shown as the agent navigate across the mixed environment. 6 0100000200000300000400000500000600000Iterations05001000150020002500RewardGapsMultiTaskerScratchTransferDistill Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAID on 3 tasks (e) PLAID on 4 tasks (f) PLAID on 5 tasks Figure 3: These figures show the average reward a particular policy achieves over a number of tasks. After learning an expert for flat + incline a new steps task is trained. Figure (a) shows the performance for the MultiTasker and figure (c) for the distiller. The distiller learns the combined tasks fast, however the MultiTasker achieves marginally better average reward over the tasks Figures (b,e) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker and distiller. Last the MultiTasker (c) and PLAID (f) are trained on gaps. There are some indications that distillation is hindering training during the initial few iterations. We are initializing the network used in distillation with the most recently learning policy after TL. The large change in the initial state distribution from the previous seen distribution during TL could be causing larger gradients to appear, disrupting some of the structure learned during the TL step, shown in Figure 3d and 3e. There also might not exist a smooth transition in policy space between the newly learned policy and the previous policy distribution. 6 DISCUSSION MultiTasker vs PLAID: The MultiTasker may be able to produce a policy that has higher overall average reward, but in practice constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks, the MultiTasker can favor this task with higher rewards, as these tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. The MultiTasker may also favor tasks that are easier than other tasks in general. We have shown that the distiller scales better with respect to the number of tasks than the MultiTasker. We expect PLAID would further outperform the MultiTasker if the tasks were more difficult and the reward functions dissimilar. In our evaluation we compare the number of iterations PLAID uses to the number the MultiTasker uses on only the new task, which is not necessarily fair. The MultiTasker gains its benefits from training on the other tasks together. If the idea is to reduce the number of simulation samples that 7 Under review as a conference paper at ICLR 2018 are needed to learn new tasks then the MultiTasker would fall far behind. 
Distillation is also very efficient with respect to the number of simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioral cloning. We expect another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. Transfer Learning: Because we are using an actor-critic learning method, we also studied the possibility of using the value functions for TL. We did not discover any empirical evidence that this assisted the learning process. When transferring to a new task, the state distribution has changed and the reward function maybe be completely different. This makes it unlikely that the value function will be accurate on this new task. In addition, value functions are in general easier and faster to learn than policies, implying that value function reuse is less important to transfer. We also find that helpfulness of TL depends on not only the task difficulty task but the reward function as well. Two tasks may overlap in state space but the area they overlap could be easily reachable. In this case TL may not give significant benefit because the overall RL problem is easy. The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. 6.1 LIMITATIONS: Once integrated, the skills for our locomotion tasks are self-selecting based on their context, i.e., the knowledge of the upcoming terrain. It may be that other augmentation and distillation strategies are better for situations where the current reward function or a one-hot vector is used to select the currently active expert. In our transfer learning results we could be over fitting the initial expert for the particular task it was learning. Making it more challenging for the policy to learning a new task, resulting in negative transfer. After learning many new tasks the previous tasks may not receive a large enough potion of the distillation training process to preserve the experts skill well enough. How best to chose which data should be trained on next to best preserve the behaviour of experts is a general problem with multi-task learning. Distillation treats all tasks equally independent of their reward. This can result in very low value tasks, receiving potentially more distribution than desired and high value tasks receiving not enough. We have not needed the use a one-hot vector to indicate what task the agent is performing. We want the agent to be able to recognize which task it is being given but we do realize that some tasks could be too similar to differentiate, such as, walking vs jogging on flat ground. 6.2 FUTURE WORK: It would be interesting to develop a method to prioritize tasks during the distillation step. This could assist the agent with forgetting issues or help with relearning tasks. While we currently use the Mean Squared Error (MSE) to pull the distributions of student policies in line with expert polices for distillation, better distance metrics would likely be helpful. Previous methods have used KL Divergence in the discrete action space domain where the state-action value function encodes the policy, e.g., as with Deep Q-Network (DQN). In this work we do not focus on producing the best policy from a mixture of experts, but instead we match the distributions from a number of experts. 
The difference is subtle but in practice it can be more difficult to balance many experts with respect to their reward functions. It could also be beneficial to use a KL penalty while performing distillation, i.e., something similar to the work in (Teh et al., 2017) in order to keep the policy from changing too rapidly during training. 7 CONCLUSION We have proposed and evaluated a method for the progressive learning and integration (via distilla- tion) of motion skills. The method exploits transfer learning to speed learning of new skills, along with input injection where needed, as well as continuous-action distillation, using DAGGER-style learning. This compares favorably to baselines consisting of learning all skills together, or learning all the skills individually before integration. We believe that there remains much to learned about the best training and integration methods for movement skill repertoires, as is also reflected in the human motor learning literature. 8 Under review as a conference paper at ICLR 2018 REFERENCES Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning en- vironment: An evaluation platform for general agents. J. Artif. Intell. Res.(JAIR), 47:253–279, 2013. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for se- quence prediction with recurrent neural networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Sys- tems 28, pp. 1171–1179. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/ 5956-scheduled-sampling-for-sequence-prediction-with-recurrent-neural-networks.pdf. Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015. Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, and Sergey Levine. Learning mod- ular neural network policies for multi-task and multi-robot transfer. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pp. 2169–2176. IEEE, 2017. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017. Nicolas Heess, Gregory Wayne, Yuval Tassa, Timothy P. Lillicrap, Martin A. Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. CoRR, abs/1610.05182, 2016. URL http://arxiv.org/abs/1610.05182. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hass- abis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017. doi: 10.1073/pnas.1611835114. URL http://www.pnas.org/content/114/13/3521.abstract. Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in Neural Information Processing Systems 29, pp. 3675–3683. 2016. training recurrent networks. Alex M Lamb, Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, A new algorithm forcing: In D. D. Lee, M. Sugiyama, U. V. Luxburg, Information Processing Systems URL http://papers.nips.cc/paper/ Aaron C Courville, for I. Guyon, and R. Garnett 29, pp. 4601–4609. 
Curran Associates, 6099-professor-forcing-a-new-algorithm-for-training-recurrent-networks.pdf. (eds.), Advances in Neural and Yoshua Bengio. Inc., 2016. Professor Zhizhong Li and Derek Hoiem. Learning without forgetting. CoRR, abs/1606.09282, 2016. URL http://arxiv.org/abs/1606.09282. Julieta Martinez, Michael J. Black, and Javier Romero. On human motion prediction using recurrent neural networks. CoRR, abs/1705.02445, 2017. URL http://arxiv.org/abs/1705.02445. Josh Merel, Yuval Tassa, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial imitation. arXiv preprint arXiv:1707.02201, 2017. S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, Oct 2010. ISSN 1041-4347. doi: 10.1109/TKDE.2009.191. Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015. Xue Bin Peng and Michiel van de Panne. Learning locomotion skills using deeprl: Does the choice of action space matter? CoRR, abs/1611.01055, 2016. URL http://arxiv.org/abs/1611.01055. Xue Bin Peng, Glen Berseth, Kangkang Yin, and Michiel Van De Panne. Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (TOG), 36(4):41, 2017. 9 Under review as a conference paper at ICLR 2018 J. Rajendran, A. S. Lakshminarayanan, M. M. Khapra, P Prasanna, and B. Ravindran. Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain. arXiv preprint arXiv:1510.02879, October 2015. Ryan T Roemmich and Amy J Bastian. Two ways to save a newly learned motor pattern. Journal of neurophysiology, 113(10):3519–3530, 2015. St´ephane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. No-regret reductions for imitation learning and structured prediction. CoRR, abs/1011.0686, 2010. URL http://arxiv.org/abs/1011. 0686. Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirk- patrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distil- lation. arXiv preprint arXiv:1511.06295, 2015. Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive Neural Networks. arXiv, 2016. URL http://arxiv.org/abs/1606.04671. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High- dimensional continuous control using generalized advantage estimation. In International Con- ference on Learning Representations (ICLR 2016), 2016. Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. arXiv preprint arXiv:1705.08690, 2017. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural informa- tion processing systems, pp. 1057–1063, 2000. Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633–1685, 2009. Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. 
Distral: Robust multitask reinforcement learning. arXiv preprint arXiv:1707.04175, 2017. Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A Deep Hi- erarchical Approach to Lifelong Learning in Minecraft. arXiv, pp. 1–6, 2016. URL http: //arxiv.org/abs/1604.07255. Hado Van Hasselt. Reinforcement learning in continuous state and action spaces. In Reinforcement Learning, pp. 207–251. Springer, 2012. Daniel M Wolpert and J Randall Flanagan. Computations underlying sensorimotor learning. Current opinion in neurobiology, 37:7–11, 2016. 8 APPENDIX 8.1 NETWORK MODELS We used two different Network models for the experiments in this paper. The first model is a blind model that does not have any terrain features. The blind policy is a Neural Network with 2 hidden layers (512 × 256) with ReLU activations. The output layer of the policy network has linear activations. The network used for the value function has the same design except there is 1 output on the final layer. This design is used for the flat and incline tasks. We augment the blind network design by adding features for terrain to create an agent with sight. This network with terrain features has a single convolution layer with 8 filters of width 3. This constitutional layer is followed by a dense layer of 32 units. The dense layer is then concatenated twice, once along each of the original two hidden layers in the blind version of the policy. 10 Under review as a conference paper at ICLR 2018 8.2 HYPER PARAMETERS AND TRAINING The policy network models a Gaussian distribution by outputting a state dependant mean. We use a state independent standard deviation that normalized with respect to the action space and multiplied by 0.1. We also use a version of epsilon greedy exploration where with (cid:15) probability an exploration action is generated. For all of our experiments we linearly anneal (cid:15) from 0.2 to 0.1 in 100, 000 iterations and leave it from that point on. Each training simulation takes approximately 5 hours across 8 threads. For network training we use Stochastic Gradient Decent (SGD) with momentum. During the distillation step we use gradually anneal the probability of selecting an expert action from 1 to 0 over 10, 000 iterations. For the evaluation of each model on a particular task we use the average reward achieved by the agent over at most 100 seconds of simulation time. We average this over running the agent over a number of randomly generated simulation runs. 8.3 INPUT FEATURE INJECTION In order to add input features to network we construct a new network. This new network has a portion of it that is the same design as the previous network plus additional parameters. First initialize the new network with random parameters. Then we copy over the values from the previous network into the new one for the portion of the network design the matches the old. Then the weight for the layers that connect the old portion of the network to the new are set to 0. This will allow the network to preserve the previous distribution it modeled. Having the parameters from the old network will also help generate gradients to train the new 0 valued network parameters. 8.4 AGENT DESIGN The agent used in the simulation models the dimensions and masses of the average adult. The size of the character state is 50 parameters that include the relative position and velocity of the links in the agent. The action space consists of 11 parameters that indicate target joint positions for the agent. 
The target joint positions (pd-targets) are turned into joint torques via proportional derivative controllers at each joint. The reward function for the agent consists of 3 primary terms. The first is a velocity term the rewards the agent for going at velocity of 1 m/s The second term is the difference between the pose of the agent and the current pose of a kinematic character controlled via a motion capture clip. The difference between the agent and the clip consists of the rotational difference between each corresponding joint and the difference in angular velocity. The angular velocity for the clip is approximated via finite differences between the current pose of the clip and it’s last pose. The last term is an L2 penalty on the torques generated by the agent to help reduce spastic motions. We also impose torque limits on the joints to reduce unrealistic behaviour, limits: Hips 150, knees 125, ankles 100, shoulders 100, elbows 75 and neck 50 N/m. Terrain Types All terrain types are randomly generated per episode, except for the flat terrain. The incline terrain is slanted and the slant of the terrain is randomly sampled between 20 and 25 degrees. The steps terrain consists of flat segments with widths randomly sampled from 1.0 m to 1.5 m followed by sharp steps that have randomly generated heights between 5 cm and 15 cm. The slopes terrain is randomly generated by updating the slope of the previous point in the ground with a value sampled from −20 and 20 degrees to generate a new portion of the ground every 10 cm. The gaps terrain generate gaps of width 25 - 30 cm separated by flat segments of widths sampled from 2.0 m to 2.5 m. The mixed terrain is a combination of the above terrains where a portion is randomly chosen from the above terrain types. 8.5 MULTITASKER In certain cases the MultiTasker can learn new task faster than PLAID. In Figure 6a we present the MultiTasker and compare it to PLAID. In this case the MultiTasker splits its training time across multiple tasks, here we compare the two methods with respect to the time spent learning on the single new task. This is a good baseline to compare our method against but in some way this is not 11 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'Figure 1: Different curriculum learning process. The red box with a D in it denotes a distillation step that combines policies. Each gray box denotes one iteration of learning a new policy. The larger red boxes with an Lterrain−type denotes a learning step where a new skill is learned. 4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION In this section, we detail our proposed learning framework for continual policy transfer and distillation (PLAID). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here transfer can be beneficial if the task structure is somewhat similar to previous tasks ωi. We adopt the TL strategy of using an existing policy network and fine-tuning it to a new task. Since we are not concerned with retaining previous skills in this step, we can update this policy without concern for forgetting. As the agent learns it will develop more skills and the addition of every new skill can increase the probability of transferring knowledge to assist the learning of the next skill. In the integration (distillation) step, we are interested in combining all past skills (π0, . . . , πi) with the newly acquired skill πi+1. 
Traditional approaches have used policy regression where data is generated by collecting trajectories of the expert policy on a task. Training the student on these trajectories does not always result in robust behaviour. This poor behaviour is caused by the student experiences a different distribution of trajectories than the expert during evaluation. To compensate for this distribution difference, portions of the trajectories should be generated by the student. This allows the expert to suggest behavior that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning a model to reproduce a given distribution of trajectories (Ross et al., 2010; Bengio et al., 2015; Martinez et al., 2017; Lamb et al., 2016). We use a method similar to the DAGGER algorithm (Ross et al., 2010) which is useful for distilling policies (Parisotto et al., 2015). As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. The results presented in this work cover a range of tasks that share a similar action space and state space. Our focus is to demonstrate continual learning between related tasks. In addition, the conceptual framework allows for extensions that would permit differing state spaces, described later in Section: 5.2. 4 Under review as a conference paper at ICLR 2018 In this experiment, our set of tasks consists of 5 different terrains that a 2D humanoid walker learns to traverse. A bipedal character is trained to navigate multiple types of terrain including flat in (Figure 2a), incline (Figure 2b), steps (Figure 2c), slopes (Figure 2d), gaps (Figure 2e) and a combination of all terrains mixed (Figure 2f) on which agents are trained. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to (Peng and van de Panne, 2016). The 2D humanoid receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally- spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed description of the experimental setup is included in Section: 8.4. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
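The appendix text recorded in the row above (Sections 8.1 and 8.4) specifies a blind policy with two ReLU hidden layers of 512 and 256 units and a linear output over the 11 joint targets, a value network of the same shape with one output, and a sighted variant in which the 50 terrain heights pass through a convolution layer with 8 filters of width 3 and a 32-unit dense layer that is concatenated onto each of the two hidden layers. The sketch below is one plausible PyTorch reading of that description; it is not the authors' code, and the exact wiring of the terrain features (here: concatenated before the second hidden layer and again before the output layer) is an assumption.

import torch
import torch.nn as nn

class BlindPolicy(nn.Module):
    """Blind policy from Sec. 8.1: 512 and 256 ReLU units, linear output (the action mean)."""
    def __init__(self, state_dim=50, action_dim=11):
        super().__init__()
        self.h1 = nn.Linear(state_dim, 512)
        self.h2 = nn.Linear(512, 256)
        self.out = nn.Linear(256, action_dim)

    def forward(self, state):
        x = torch.relu(self.h1(state))
        x = torch.relu(self.h2(x))
        return self.out(x)  # linear activation on the output layer

class TerrainPolicy(nn.Module):
    """Sighted variant: terrain heights -> conv (8 filters, width 3) -> 32-unit dense layer,
    whose output is concatenated onto each of the two hidden layers (assumed wiring)."""
    def __init__(self, state_dim=50, terrain_dim=50, action_dim=11):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=3)
        self.terrain_dense = nn.Linear(8 * (terrain_dim - 2), 32)
        self.h1 = nn.Linear(state_dim, 512)
        self.h2 = nn.Linear(512 + 32, 256)
        self.out = nn.Linear(256 + 32, action_dim)

    def forward(self, state, terrain):
        t = torch.relu(self.conv(terrain.unsqueeze(1))).flatten(1)
        t = torch.relu(self.terrain_dense(t))
        x = torch.relu(self.h1(state))
        x = torch.relu(self.h2(torch.cat([x, t], dim=-1)))
        return self.out(torch.cat([x, t], dim=-1))

A critic for the sighted agent would reuse the same module with action_dim=1, since the appendix states that the value function shares the policy's design.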
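Section 8.4 above also notes that the 11 policy outputs are PD targets converted to joint torques by proportional-derivative controllers, with torque limits of 150 (hips), 125 (knees), 100 (ankles), 100 (shoulders), 75 (elbows) and 50 (neck) N·m. A minimal sketch of that conversion follows; the kp/kd gains and the joint ordering in the limit vector are assumptions, since the text does not give them.

import numpy as np

# Only the limit values come from the text; joint ordering and kp/kd gains are assumed.
TORQUE_LIMITS = np.array([150, 150, 125, 125, 100, 100, 100, 100, 75, 75, 50], dtype=float)

def pd_torques(q_target, q, q_dot, kp, kd):
    """Turn the policy's 11 PD targets into clipped joint torques."""
    tau = kp * (q_target - q) - kd * q_dot
    return np.clip(tau, -TORQUE_LIMITS, TORQUE_LIMITS)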
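The terrain descriptions above (flat segments of 1.0–1.5 m with 5–15 cm steps; slopes re-sampled between −20 and 20 degrees every 10 cm; 25–30 cm gaps between 2.0–2.5 m flat stretches) can be reproduced with simple random generators. The sketch below covers the steps and slopes variants under one reading of the text, treating the sampled angle as the new local slope rather than an increment, and leaves out the gaps and mixed terrains.

import numpy as np

def steps_terrain(length_m=50.0, dx=0.1, rng=np.random):
    """Flat segments 1.0-1.5 m wide separated by sharp 5-15 cm steps."""
    heights, h = [], 0.0
    next_step = rng.uniform(1.0, 1.5)
    for i in range(int(length_m / dx)):
        x = i * dx
        if x >= next_step:
            h += rng.uniform(0.05, 0.15)
            next_step = x + rng.uniform(1.0, 1.5)
        heights.append(h)
    return np.array(heights)

def slopes_terrain(length_m=50.0, dx=0.1, rng=np.random):
    """A new slope between -20 and 20 degrees is drawn every 10 cm (one reading of the text)."""
    heights, h = [], 0.0
    for _ in range(int(length_m / dx)):
        slope = np.radians(rng.uniform(-20.0, 20.0))
        h += np.tan(slope) * dx
        heights.append(h)
    return np.array(heights)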
|
2017-11-02 13:31:13
|
ICLR.cc/2018/Conference
|
BkKXCq_0b
|
rJ1M_DaXG
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Understanding the time course of sensorimotor learning in human motor control is an open research problem (Wolpert & Flanagan, 2016) that exists concurrently with recent advances in deep rein- ', 'paragraph_idx': 3, 'before_section': 'Abstract', 'context_before': 'integration of new skills. Given an existing controller that is capable of one-or-more skills, we wish to: (a) efficiently learn a new skill or movement pattern in a way that is informed by the existing control policy, and (b) to reintegrate that into a single controller that is capable of the full motion ', 'modified_lines': 'repertoire. This process can then be repeated as necessary. We view PLAID as a continual learning method, in that we consider a context where all tasks are not known in advance and we wish to learn any new task in an efficient manner. However, it is also proves surprisingly effective as a multitask solution, given the three specific benchmarks that we compare against. In the process of acquiring a new skill, we also allow for a control policy to be augmented with additional inputs, without adversely impacting its performance. This is a process we refer to as input injection. ', 'original_lines': 'repertoire. This process can then be repeated as necessary. In the process of acquiring a new skill, we also allow for a control policy to be augmented with additional inputs, without adversely impacting its performance. This is a process we refer to as input injection. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 RELATED WORK ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'initialized model (Rajendran et al., 2015). Additionally, while learning a new skill, the control policy should not forget how to perform old skills. ', 'modified_lines': 'The core contribution of this paper is a method Progressive Learning and Integration via Distilla- tion (PLAiD) to repeatedly expand and integrate a motion repertoire. The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the con- text of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against three alternative baselines. We also introduce input injection, a con- venient mechanism for adding inputs to control policies in support of new skills, while preserving existing capabilities. ', 'original_lines': 'The core contribution of this paper is a method (PLAID) to repeatedly expand and integrate a motion repertoire. The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the context of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against two alternative baselines. We also introduce input injection, a convenient mechanism for adding inputs to control policies in support of new skills, while preserving existing capabilities. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': 6}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks (Tessler et al., 2016) This allows for the substitution of network modules for different robot types over a ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'make use of them as building block. Transfer Learning Transfer learning exploits the structure learned from a previous task in learn- ', 'modified_lines': 'ing a new task. Our focus here is on transfer learning in environments consisting of continuous con- trol tasks. The concept of appending additional network structure while keeping the previous struc- ture to reduce catastrophic forgetting has worked well on Atari games (Rusu et al., 2015; Parisotto et al., 2015; Rusu et al., 2016; Chen et al., 2015) Other methods reproduce data from all tasks to re- duce the possibility of forgetting how to perform previously learned skills e.g, (Shin et al., 2017; Li & Hoiem, 2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters (Kirkpatrick et al., 2017). A different approach to combining policies is to use a hierarchical structure (Tessler et al., 2016). In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta-learning to achieve con- trol policies that can quickly adapt their behaviour according to current rewards (Finn et al., 2017). This work is demonstrated on parameterized task domains. The Powerplay method provides a gen- eral framework for training an increasingly general problem solver Schmidhuber (2011); Srivastava et al. (2012). It is based on iteratively: inventing a new task using play or invention; solving this task; and, lastly, demonstrating the ability to solve all the previous tasks. The last two stages are broadly similar to our PLAID approach, although to the best of our knowledge, there are no experiments on motor control tasks of comparable complexity to the ones we tackle. In our work, we develop a specific progressive learning-and-distillation methodology for motor skills, and provide a detailed evaluation as compared to three other plausible baselines. We are specifically interested in under- 2 Under review as a conference paper at ICLR 2018 standing issues that arise from the interplay between transfer from related tasks and the forgetting that may occur. ', 'original_lines': 'ing a new task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The concept of appending additional network structure while keeping the previous structure to reduce catastrophic forgetting has worked well on Atari games (Rusu et al., 2015; Parisotto et al., 2015; Rusu et al., 2016; Chen et al., 2015) Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, (Shin et al., 2017; Li & Hoiem, 2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters (Kirkpatrick et al., 2017). A different approach to combin- ing policies is to use a hierarchical structure (Tessler et al., 2016). 
In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta-learning to achieve control policies that can quickly adapt their behavior according to current rewards (Finn et al., 2017). This work is demonstrated on parameterized task domains. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '3 FRAMEWORK In this section we outline the details of the Reinforcement Learning (RL) framework. We also give ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'learned motions that can be shaped by prior mocap clips (Merel et al., 2017), and that these can then be integrated in a hierarchical controller. ', 'modified_lines': '', 'original_lines': '2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'minimize ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'in the direction of actions that have a positive advantage has been found to increase the stability of learning (Van Hasselt, 2012). Because the true value function is unknown, an approximation Vπ(· | θv) with parameters θv is learned, which is formulated as the regression problem: ', 'modified_lines': '', 'original_lines': ' 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 PROGRESSIVE LEARNING', 'after_section': None, 'context_after': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION In this section, we detail our proposed learning framework for continual policy transfer and distil- transfer can be beneficial if the task structure is somewhat similar to previous tasks ωi. We adopt the TL strategy of using an existing policy network and fine-tuning it to a new task. Since we are not concerned with retaining previous skills in this step, we can update this policy without concern ', 'paragraph_idx': 17, 'before_section': '4 PROGRESSIVE LEARNING', 'context_before': 'policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, ', 'modified_lines': 'leading to a more sequential method. The last version (PLAiD) learns each task sequentially using TL from the previous, most skilled policy, in the end resulting in a policy capable of solving all tasks Figure 1c. This method works well for both combining learned skills and learning new skills. lation (PLAiD). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here 4 Under review as a conference paper at ICLR 2018 π0 L0 L1 . . . Lω−1 Lω π1 π0 L0 L1 . . . Lω−1 Lω π1 π2 . . . πω πω+1 D πω+1 (a) MultiTasker (b) Parallel Learning and Distillation π0 L0 π1 L1 D π2 ˆπ2 . . . D . . . ˆ. . . Lω−1 D πω ˆπω Lω D πω+1 ˆπω+1 (c) PLAiD Figure 1: Different curriculum learning process. The red box with a D in it denotes a distillation step that combines policies. Each gray box denotes one iteration of learning a new policy. 
The larger red boxes with an Lterrain−type denotes a learning step where a new skill is learned. ', 'original_lines': 'leading to a more sequential method. The last version learns each task sequentially using TL from the previous, most skilled policy Figure 1c. This method works well for both combining learned skills and learning new skills. lation (PLAID). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here ', 'after_paragraph_idx': None, 'before_paragraph_idx': 17}, {'section': 'Abstract', 'after_section': None, 'context_after': 'trajectories (Ross et al., 2010; Bengio et al., 2015; Martinez et al., 2017; Lamb et al., 2016). We use a method similar to the DAGGER algorithm (Ross et al., 2010) which is useful for distilling policies (Parisotto et al., 2015). As our RL algorithm is an actor-critic method, we also perform ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'for this distribution difference, portions of the trajectories should be generated by the student. This allows the expert to suggest behavior that will pull the state distribution of the student closer to the expert’s. This is a common problem in learning a model to reproduce a given distribution of ', 'modified_lines': '', 'original_lines': ' 4 Under review as a conference paper at ICLR 2018 π0 L0 L1 . . . Lω−1 Lω π1 π0 L0 L1 . . . Lω−1 Lω π1 π2 . . . πω πω+1 D πω+1 (a) MultiTasker (b) Parallel Learning and Distillation π0 L0 π1 L1 D π2 ˆπ2 . . . D . . . ˆ. . . Lω−1 D πω ˆπω Lω D πω+1 ˆπω+1 (c) PLAID Figure 1: Different curriculum learning process. The red box with a D in it denotes a distillation step that combines policies. Each gray box denotes one iteration of learning a new policy. The larger red boxes with an Lterrain−type denotes a learning step where a new skill is learned. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 RESULTS', 'after_section': '5 RESULTS', 'context_after': 'to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to (Peng & van de Panne, 2016). The 2D humanoid receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed de- sequentially and the goal is to progressively learn to traverse all terrain types. We evaluate our approach against two baselines. First, we compare the above learning curricu- troller (Scratch). This will demonstrate that knowledge from previous tasks can be effectively trans- ferred after distillation steps. Second, we compare to the MultiTasker to demonstrate that iterated distillation is effective for the retention of learned skills. The MultiTasker is also used as a baseline ing Video 1 5.1 TRANSFER LEARNING is compared to the two baselines for training on incline. The Scratch method learns the slowest as it is given no information about how to perform similar skills. The first MultiTasker for the incline task is initialized from a terrain injected controller that was trained to walk on flat ground. Any subsequent MultiTasker is initialized from the final MultiTasker model of the preceding task. 
This controller has to learn multiple tasks together, which can complicate the learning process, as simulation for each task is split across the training and the overall RL task can be challenging. This integrate skills together after each new skill is learned. after adding more tasks the MultiTasker is beginning to struggle in Figure 2c and fails in in Figure 2d, and is able to integrate the new skill required to solve the task robustly. 5.2 resemble that of the individual expert controllers. For example, two different experts lacking state features and trained without a local map of the terrain can be combined into a single policy that has new state features for the terrain. These new terrain features can assist the agent the task domain in ', 'paragraph_idx': 28, 'before_section': '5 RESULTS', 'context_before': 'In this experiment, our set of tasks consists of 5 different terrains that a 2D humanoid walker learns to traverse. A bipedal character is trained to navigate multiple types of terrain including flat in ', 'modified_lines': '(Figure 6a), incline (Figure 6b), steps (Figure 6c), slopes (Figure 6d), gaps (Figure 6e) and a com- bination of all terrains mixed (Figure 6f) on which agents are trained. The goal in these tasks is 5 Under review as a conference paper at ICLR 2018 scription of the experimental setup is included in Section: 8.5. The tasks are presented to the agent lum from learning new tasks in PLAiD with learning new tasks from randomly initialized con- for comparing learning speed. The results of the PLAiD controller are displayed in the accompany- First, the pd-biped is trained to produce a walking motion on flat ground (flat). In Figure 2a PLAiD is in contrast to using PLAiD, that is also initialized with the same policy trained on flat, that will In Figure 2b the MultiTasker is learning the new task (steps) with similar speed to PLAiD. However, with the number of tasks it must learn at the same time. While PLAiD learns the new tasks faster INPUT FEATURE INJECTION An appealing property of using distillation in PLAiD is that the combined policy model need not ', 'original_lines': '(Figure 4a), incline (Figure 4b), steps (Figure 4c), slopes (Figure 4d), gaps (Figure 4e) and a com- bination of all terrains mixed (Figure 4f) on which agents are trained. The goal in these tasks is scription of the experimental setup is included in Section: 8.4. The tasks are presented to the agent lum from learning new tasks in PLAID with learning new tasks from randomly initialized con- for comparing learning speed. The results of the PLAID controller are displayed in the accompany- 1https : //www.dropbox.com/s/kbb4145yd1s9s3p/P rogresiveLearning.mp4?dl = 0 5 Under review as a conference paper at ICLR 2018 First, the pd-biped is trained to produce a walking motion on flat ground (flat). In Figure 2a PLAID is in contrast to using PLAID, that is also initialized with the same policy trained on flat, that will In Figure 2b the MultiTasker is learning the new task (steps) with similar speed to PLAID. However, with the number of tasks it must learn at the same time. While PLAID learns the new tasks faster (a) incline (b) steps (c) slopes (d) gaps Figure 2: TL comparison over each of the environments. The learning for PLAID is split into two steps, with TL (in green) going first followed by the distillation part (in red). Using TL assists in the learning of new tasks. 
INPUT INJECTION An appealing property of using distillation in PLAID is that the combined policy model need not ', 'after_paragraph_idx': 28, 'before_paragraph_idx': 28}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '5.3 DISTILLING MULTIPLE POLICIES not scale with respect to the number of tasks. When training the MultiTasker over two or even three tasks (Figure 3a) the method displays good results, however when learning a fourth or more tasks the method struggles, as shown in Figure 3b and 3b. Part of the reason for this struggle is when new tasks are added the MultiTasker has to make trade-offs between more tasks to maximizes. As more tasks are added, this trade-off becomes increasingly complex resulting in the MultiTasker favouring to the number of skills being integrated. This is likely because distillation is a semi-supervised 6 There are some indications that distillation is hindering training during the initial few iterations. We are initializing the network used in distillation with the most recently learning policy after TL. ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'This is achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are ', 'modified_lines': 'initialized random without changing the functional behavior. This is performed when distilling the flat and incline experts. Further details can be found in the appendix Training over multiple tasks at the same time ma y help the agent learn skills quicker, but this may easier tasks. Using PLAiD to combine the skills of many policies appears to scale better with respect method which is more stable than the un-supervised RL solution. This can be seen in Figure 3d, 3e and especially in 3f where PLAiD combines the skills faster and can find higher value policies 1https : //www.dropbox.com/s/kbb4145yd1s9s3p/P rogresiveLearning.mp4?dl = 0 Under review as a conference paper at ICLR 2018 (a) incline (b) steps (c) slopes (d) gaps Figure 2: TL comparison over each of the environments. The learning for PLAiD is split into two steps, with TL (in green) going first followed by the distillation part (in red). Using TL assists in the learning of new tasks. in practice. PLAiD also presents zero-shot training on tasks which it has never been trained on . In Figure 7 this generalization is shown as the agent navigates across the mixed environment. ', 'original_lines': 'initialized random without changing the functional behavior. This performed when distilling the flat and incline experts. Training over multiple tasks at the same time my help the agent learn skills quicker, but this may easier tasks. Using PLAID to combine the skills of many policies appears to scale better with respect method which is more stable than the un-supervised RL solution. This can be seen in Figure 3d, 3e and especially in 3f where PLAID combines the skills faster and can find higher value policies in practice. PLAID also presents zero-shot training on tasks which it has never trained. In Figure 5 this generalization is shown as the agent navigate across the mixed environment. 
0100000200000300000400000500000600000Iterations05001000150020002500RewardGapsMultiTaskerScratchTransferDistill Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAID on 3 tasks (e) PLAID on 4 tasks (f) PLAID on 5 tasks Figure 3: These figures show the average reward a particular policy achieves over a number of tasks. After learning an expert for flat + incline a new steps task is trained. Figure (a) shows the performance for the MultiTasker and figure (c) for the distiller. The distiller learns the combined tasks fast, however the MultiTasker achieves marginally better average reward over the tasks Figures (b,e) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker and distiller. Last the MultiTasker (c) and PLAID (f) are trained on gaps. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 PROGRESSIVE LEARNING', 'after_section': None, 'context_after': 'average reward, but in practice constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks, the MultiTasker can favor this task with higher rewards, as these tasks may receive higher advantage. It is also a non-trivial task to normalize the reward functions for each task in order to combine them. The MultiTasker may also favor tasks that are easier than other tasks in general. We have shown that the distiller scales better with respect to the if the tasks were more difficult and the reward functions dissimilar. uses on only the new task, which is not necessarily fair. The MultiTasker gains its benefits from training on the other tasks together. If the idea is to reduce the number of simulation samples that are needed to learn new tasks then the MultiTasker would fall far behind. Distillation is also very efficient with respect to the number of simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for behavioral cloning. We expect another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. Transfer Learning: Because we are using an actor-critic learning method, we also studied the possibility of using the value functions for TL. We did not discover any empirical evidence that this will be accurate on this new task. In addition, value functions are in general easier and faster to learn than policies, implying that value function reuse is less important to transfer. We also find that helpfulness of TL depends on not only the task difficulty task but the reward function as well. Two ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '6 DISCUSSION ', 'modified_lines': 'MultiTasker vs PLAiD: The MultiTasker may be able to produce a policy that has higher overall number of tasks than the MultiTasker. 
We expect PLAiD would further outperform the MultiTasker In our evaluation we compare the number of iterations PLAiD uses to the number the MultiTasker 7 0100000200000300000400000500000600000Iterations05001000150020002500RewardGapsMultiTaskerScratchTransferDistill Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAiD on 3 tasks (e) PLAiD on 4 tasks (f) PLAiD on 5 tasks Figure 3: These figures show the average reward a particular policy achieves over a number of tasks. After learning an expert for flat + incline a new steps task is trained. Figure (a) shows the performance for the MultiTasker and figure (c) for the distiller. The distiller learns the combined tasks fast, however the MultiTasker achieves marginally better average reward over the tasks Figures (b,e) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker and distiller. Last the MultiTasker (c) and PLAiD (f) are trained on gaps. assisted the learning process. When transferring to a new task, the state distribution has changed and the reward function may be completely different. This makes it unlikely that the value function ', 'original_lines': 'MultiTasker vs PLAID: The MultiTasker may be able to produce a policy that has higher overall number of tasks than the MultiTasker. We expect PLAID would further outperform the MultiTasker In our evaluation we compare the number of iterations PLAID uses to the number the MultiTasker 7 Under review as a conference paper at ICLR 2018 assisted the learning process. When transferring to a new task, the state distribution has changed and the reward function maybe be completely different. This makes it unlikely that the value function ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'REFERENCES Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning en- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the best training and integration methods for movement skill repertoires, as is also reflected in the human motor learning literature. ', 'modified_lines': '', 'original_lines': '8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'J. Rajendran, A. S. Lakshminarayanan, M. M. Khapra, P Prasanna, and B. Ravindran. Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain. arXiv preprint arXiv:1510.02879, October 2015. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (TOG), 36(4):41, 2017. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '8.2 HYPER PARAMETERS AND TRAINING ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'This network with terrain features has a single convolution layer with 8 filters of width 3. This constitutional layer is followed by a dense layer of 32 units. The dense layer is then concatenated twice, once along each of the original two hidden layers in the blind version of the policy. 
', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
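The input-injection idea described in the revision above — add new input features, zero the weights and biases of the layer that connects them to the existing network, and let gradients gradually make use of them without changing the current behaviour — can be illustrated with a small helper. The version below widens a single linear layer rather than attaching a whole terrain branch as the paper does, but it preserves the original function in the same way; inject_inputs is an illustrative name, not something from the paper.

import torch
import torch.nn as nn

def inject_inputs(old_layer: nn.Linear, extra_inputs: int) -> nn.Linear:
    """Widen a linear layer to accept extra input features without changing its output.

    The columns for the new features start at zero, so the layer computes exactly the same
    function as before; gradients still reach those columns, so training can start using them.
    """
    new_layer = nn.Linear(old_layer.in_features + extra_inputs, old_layer.out_features)
    with torch.no_grad():
        new_layer.weight.zero_()
        new_layer.weight[:, :old_layer.in_features].copy_(old_layer.weight)
        new_layer.bias.copy_(old_layer.bias)
    return new_layer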
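The same revision describes the integration step as DAGGER-style distillation: trajectories are partly generated by the student so the experts can correct its state distribution, the probability of taking an expert action is annealed from 1 to 0 (the appendix quotes 10,000 iterations), and the actor and critic are regressed in the same step. The loop below is a schematic Python rendering of that procedure; rollout and train_step are placeholders for the simulator and the supervised update, not functions from the paper.

import random

def distill(experts, tasks, student, rollout, train_step, iters=10_000):
    """DAGGER-style distillation of several task experts into one student policy.

    rollout(task, student, expert, beta) should run an episode in which the expert acts with
    probability beta at each step and return the visited states; train_step regresses the
    student's action mean and value estimate toward the expert's on those states.
    """
    for it in range(iters):
        beta = 1.0 - it / iters                    # expert-action probability, annealed 1 -> 0
        i = random.randrange(len(tasks))           # pick which skill to rehearse this iteration
        states = rollout(tasks[i], student, experts[i], beta)
        action_targets = [experts[i].action(s) for s in states]
        value_targets = [experts[i].value(s) for s in states]
        train_step(student, states, action_targets, value_targets)  # fit actor and critic together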
|
2018-01-05 21:03:03
|
ICLR.cc/2018/Conference
|
rJ1M_DaXG
|
r1kGBfeHM
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'Deep reinforcement learning has demonstrated increasing capabilities for con- tinuous control problems, including agents that can move with skill and agility ', 'modified_lines': 'through their environment. An open problem in this setting is that of develop- ing good strategies for integrating or merging policies for multiple skills, where each individual skill is a specialist in a specific skill and its associated state dis- tribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the do- main of simulated bipedal locomotion across different classes of terrain. We also introduce an input injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods al- lows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against three alternative baselines. ', 'original_lines': 'through their environment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each individual skill is a specialist in a specific skill and its associated state distribution. We extend policy distillation methods to the continuous action setting and leverage this technique to combine expert policies, as evaluated in the domain of simulated bipedal locomotion across different classes of terrain. We also introduce an in- put injection method for augmenting an existing policy network to exploit new input features. Lastly, our method uses transfer learning to assist in the efficient acquisition of new skills. The combination of these methods allows a policy to be incrementally augmented with new skills. We compare our progressive learning and integration via distillation (PLAID) method against two alternative baselines. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'existing capabilities. 2 RELATED WORK ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'initialized model (Rajendran et al., 2015). Additionally, while learning a new skill, the control policy should not forget how to perform old skills. ', 'modified_lines': 'The core contribution of this paper is a method Progressive Learning and Integration via Distillation (PLAiD) to repeatedly expand and integrate a motion control repertoire. The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the context of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against three alternative baselines. We also introduce input injection, a convenient mechanism for adding inputs to control policies in support of new skills, while preserving ', 'original_lines': 'The core contribution of this paper is a method Progressive Learning and Integration via Distilla- tion (PLAiD) to repeatedly expand and integrate a motion repertoire. 
The main building blocks consist of policy transfer and multi-task policy distillation, and the method is evaluated in the con- text of a continuous motor control problem, that of robust locomotion over distinct classes of terrain. We evaluate the method against three alternative baselines. We also introduce input injection, a con- venient mechanism for adding inputs to control policies in support of new skills, while preserving ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 6}, {'section': '4 PROGRESSIVE LEARNING', 'after_section': '4 PROGRESSIVE LEARNING', 'context_after': 'multiple skills, the first being a controller that learns multiple tasks at the same time (MultiTasker), where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately (Parisotto et al., 2015). The curriculum for TL from the previous, most skilled policy, in the end resulting in a policy capable of solving all 4 ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '4 PROGRESSIVE LEARNING Although we focus on the problem of being presented with tasks sequentially, there exist other meth- ', 'modified_lines': 'ods for learning a multi-skilled character. We considered 4 overall integration methods for learning using this method is shown in Figure 1a were during a single RL simulation all tasks are learned together. It is also possible to randomly initialize controllers and train in parallel (Parallel) and then combine the resulting policies Figure 1b. We found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with the Parallel model it would occur outside of the original parallel learning, leading to a more sequential method. A TL-Only method that uses TL while learning tasks in a sequence Fig- ure 1c, possibly ending with a distillation step to combine the learned policies to decrease forgetting. For more details see Appendix: 8.4. The last version (PLAiD) learns each task sequentially using tasks Figure 1d. This method works well for both combining learned skills and learning new skills. ', 'original_lines': 'ods for learning a multi-skilled character. We considered 3 overall integration methods for learning using this method is shown in Figure 1a were during a single RL simulation all tasks are learned to- gether. It is also possible to learn each task separately but in parallel and then combine the resulting policies Figure 1b. We attempted to evaluate this method as well but we found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be learned with this model it would occur outside of the original parallel learning, leading to a more sequential method. The last version (PLAiD) learns each task sequentially using tasks Figure 1c. This method works well for both combining learned skills and learning new skills. 4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION In this section, we detail our proposed learning framework for continual policy transfer and distil- lation (PLAiD). In the acquisition (TL) step, we are interested in learning a new task ωi+1. Here ', 'after_paragraph_idx': 17, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '. . . 
Lω−1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'L0 L1 ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'after_section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'context_after': 'for this distribution difference, portions of the trajectories should be generated by the student. This the expert’s. This is a common problem in learning a model to reproduce a given distribution of trajectories (Ross et al., 2010; Bengio et al., 2015; Martinez et al., 2017; Lamb et al., 2016). We use a method similar to the DAGGER algorithm (Ross et al., 2010) which is useful for distilling 4.2 HIGH LEVEL EXPERIMENT DESIGN ', 'paragraph_idx': 28, 'before_section': '4.1 PROGRESSIVE LEARNING AND INTEGRATION VIA DISTILLATION', 'context_before': 'the newly acquired skill πi+1. Traditional approaches have used policy regression where data is generated by collecting trajectories of the expert policy on a task. Training the student on these trajectories does not always result in robust behaviour. This poor behaviour is caused by the student ', 'modified_lines': 'experiencing a different distribution of trajectories than the expert during evaluation. To compensate allows the expert to suggest behaviour that will pull the state distribution of the student closer to policies (Parisotto et al., 2015). See Appendix: 8.2.1 for more details. As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. ', 'original_lines': 'experiences a different distribution of trajectories than the expert during evaluation. To compensate allows the expert to suggest behavior that will pull the state distribution of the student closer to policies (Parisotto et al., 2015). As our RL algorithm is an actor-critic method, we also perform regression on the critic by fitting both in the same step. ', 'after_paragraph_idx': 28, 'before_paragraph_idx': 28}, {'section': '5 RESULTS', 'after_section': None, 'context_after': '5 Under review as a conference paper at ICLR 2018 5.1 TRANSFER LEARNING First, the pd-biped is trained to produce a walking motion on flat ground (flat). In Figure 2a PLAiD In Figure 2b the MultiTasker is learning the new task (steps) with similar speed to PLAiD. However, 5.2 ', 'paragraph_idx': 31, 'before_section': None, 'context_before': '5 RESULTS ', 'modified_lines': 'In this experiment, our set of tasks consists of 5 different terrains that a 2D humanoid walker (pd- biped) learns to traverse. The humanoid walker is trained to navigate multiple types of terrain including flat in (Figure 6a), incline (Figure 6b), steps (Figure 6c), slopes (Figure 6d), gaps (Fig- ure 6e) and a combination of all terrains mixed (Figure 6f) on which agents are trained. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to (Peng & van de Panne, 2016). The pd-biped receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the charac- ter. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. 
A detailed description of the experimental setup is included in Section: 8.5. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. We evaluate our approach against three baselines. First, we compare the above learning curriculum from learning new tasks in PLAiD with learning new tasks in Parallel. This will demonstrate that knowledge from previous tasks can be effectively transferred after distillation steps. Second, we compare to the MultiTasker to demonstrate that iterated distillation is effective for the retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. Last, a method that performs TL between tasks and concludes with a distillation step is evaluated to illustrate the result of different TL and distillation schedules. The results of the PLAiD controller are displayed in the accompanying Video 1 is compared to the three baselines for training on incline. The TL-Only method learns fast as it is given significant information about how to perform similar skills. The Parallel method is given no prior information leading to a less skilled policy. The first MultiTasker for the incline task is initialized from a terrain injected controller that was trained to walk on flat ground. Any subsequent MultiTasker is initialized from the final MultiTasker model of the preceding task. This controller has to learn multiple tasks together, which can complicate the learning process, as simulation for each task is split across the training and the overall RL task can be challenging. This is in contrast to using PLAiD, that is also initialized with the same policy trained on flat, that will integrate skills together after each new skill is learned. after adding more tasks the MultiTasker is beginning to struggle in Figure 2c and starts to forget in Figure 2d, with the number of tasks it must learn at the same time. While PLAiD learns the new tasks faster and is able to integrate the new skill required to solve the task robustly. TL-Only is also able to learn the new tasks very efficiently. ', 'original_lines': 'In this experiment, our set of tasks consists of 5 different terrains that a 2D humanoid walker learns to traverse. A bipedal character is trained to navigate multiple types of terrain including flat in (Figure 6a), incline (Figure 6b), steps (Figure 6c), slopes (Figure 6d), gaps (Figure 6e) and a com- bination of all terrains mixed (Figure 6f) on which agents are trained. The goal in these tasks is to maintain a consistent forward velocity traversing various terrains, while also matching a motion capture clip of a natural human walking gait on flat ground, similar to (Peng & van de Panne, 2016). The 2D humanoid receives as input both a character and (eventually) a terrain state representation, consisting of the terrains heights of 50 equally-spaced points in front of the character. The action space is 11-dimensional, corresponding to the joints. Reasonable torque limits are applied, which helps produce more natural motions and makes the control problem more difficult. A detailed de- scription of the experimental setup is included in Section: 8.5. The tasks are presented to the agent sequentially and the goal is to progressively learn to traverse all terrain types. We evaluate our approach against two baselines. First, we compare the above learning curricu- lum from learning new tasks in PLAiD with learning new tasks from randomly initialized con- troller (Scratch). 
This will demonstrate that knowledge from previous tasks can be effectively trans- ferred after distillation steps. Second, we compare to the MultiTasker to demonstrate that iterated distillation is effective for the retention of learned skills. The MultiTasker is also used as a baseline for comparing learning speed. The results of the PLAiD controller are displayed in the accompany- ing Video 1 is compared to the two baselines for training on incline. The Scratch method learns the slowest as it is given no information about how to perform similar skills. The first MultiTasker for the incline task is initialized from a terrain injected controller that was trained to walk on flat ground. Any subsequent MultiTasker is initialized from the final MultiTasker model of the preceding task. This controller has to learn multiple tasks together, which can complicate the learning process, as simulation for each task is split across the training and the overall RL task can be challenging. This is in contrast to using PLAiD, that is also initialized with the same policy trained on flat, that will integrate skills together after each new skill is learned. after adding more tasks the MultiTasker is beginning to struggle in Figure 2c and fails in in Figure 2d, with the number of tasks it must learn at the same time. While PLAiD learns the new tasks faster and is able to integrate the new skill required to solve the task robustly. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2', 'after_section': '5.2', 'context_after': 'We introduce the idea of input injection for this purpose. We augment a policy with additional input features while allowing it to retain its original functional behaviour similar to (Chen et al., 2015). This is achieved by adding additional inputs to the neural network and initializing the connecting layer weights and biases to 0. By only setting the weights and biases in the layer connecting the new features to the original network to 0, the gradient can still propagate to any lower layers which are 5.3 DISTILLING MULTIPLE POLICIES ', 'paragraph_idx': 35, 'before_section': '5.2', 'context_before': 'An appealing property of using distillation in PLAiD is that the combined policy model need not resemble that of the individual expert controllers. For example, two different experts lacking state features and trained without a local map of the terrain can be combined into a single policy that has ', 'modified_lines': 'new state features for the terrain. These new terrain features can assist the agent in the task domain in which it operates. initialized random without changing the functional behaviour. This is performed when distilling the flat and incline experts. Further details can be found in Appendix: 8.3. ', 'original_lines': 'new state features for the terrain. These new terrain features can assist the agent the task domain in which it operates. initialized random without changing the functional behavior. This is performed when distilling the flat and incline experts. Further details can be found in the appendix ', 'after_paragraph_idx': 36, 'before_paragraph_idx': 35}, {'section': '5.3 DISTILLING MULTIPLE POLICIES', 'after_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_after': 'tasks are added the MultiTasker has to make trade-offs between more tasks to maximizes. As more tasks are added, this trade-off becomes increasingly complex resulting in the MultiTasker favouring easier tasks. 
Using PLAiD to combine the skills of many policies appears to scale better with respect to the number of skills being integrated. This is likely because distillation is a semi-supervised method which is more stable than the un-supervised RL solution. This can be seen in Figure 3d, 3e and especially in 3f where PLAiD combines the skills faster and can find higher value policies in practice. PLAiD also presents zero-shot training on tasks which it has never been trained on . In Figure 7 this generalization is shown as the agent navigates across the mixed environment. There are some indications that distillation is hindering training during the initial few iterations. We are initializing the network used in distillation with the most recently learning policy after TL. The large change in the initial state distribution from the previous seen distribution during TL could be causing larger gradients to appear, disrupting some of the structure learned during the TL step, shown in Figure 3d and 3e. There also might not exist a smooth transition in policy space between the newly learned policy and the previous policy distribution. ', 'paragraph_idx': 37, 'before_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_before': 'not scale with respect to the number of tasks. When training the MultiTasker over two or even three tasks (Figure 3a) the method displays good results, however when learning a fourth or more tasks the method struggles, as shown in Figure 3b and 3b. Part of the reason for this struggle is when new ', 'modified_lines': ' 1https : //www.dropbox.com/s/kbb4145yd1s9s3p/P rogresiveLearning.mp4?dl = 0 6 Under review as a conference paper at ICLR 2018 (a) incline (b) steps (c) slopes (d) gaps Figure 2: Learning comparison over each of the environments. These plots show the mean and std over 5 simulations, each initialized with different random seeds. The learning for PLAiD is split into two steps, with TL (in green) going first followed by the distillation part (in yellow). Tasks PLAiD TL-Only TL-Only (with Distill) MultiTasker incline 0.155 flat 0.054 −0.065 −0.044 0.068 −0.001 −0.053 0.039 steps 0.001 −0.235 −0.030 −0.030 slopes 0.043 −0.242 −0.062 0.119 gaps −0.083 0.000 −0.133 0.000 average 0.063 −0.147 −0.024 0.009 Table 1: These values are relative percentage changes in the average reward, where a value of 0 is no forgetting and a value of −1 corresponds to completely forgetting how to perform the task. A value > 0 corresponds to the agent learning how to better perform a task after training on other tasks. Here, the final policy after training on gaps compared to the original polices produced at the end of training for the task noted in the column heading. The TL-Only baseline forgets more than PLAiD. The MultiTasker forgets less than PLAiD but has a lower average reward over the tasks. This is also reflected in Table 1, that shows the final average reward when comparing methods before and after distillation. The TL-Only is able to achieve high performance but much is lost when learning new tasks. A final distillation step helps mitigate this issue but does not work as well as PLAiD. It is possible performing a large final distillation step can lead to over-fitting. 7 Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAiD on 3 tasks (e) PLAiD on 4 tasks (f) PLAiD on 5 tasks Figure 3: These figures show the average reward a particular policy achieves over a number of tasks. 
', 'original_lines': ' 1https : //www.dropbox.com/s/kbb4145yd1s9s3p/P rogresiveLearning.mp4?dl = 0 6 Under review as a conference paper at ICLR 2018 (a) incline (b) steps (c) slopes (d) gaps Figure 2: TL comparison over each of the environments. The learning for PLAiD is split into two steps, with TL (in green) going first followed by the distillation part (in red). Using TL assists in the learning of new tasks. ', 'after_paragraph_idx': 37, 'before_paragraph_idx': 37}, {'section': '5.3 DISTILLING MULTIPLE POLICIES', 'after_section': None, 'context_after': 'rewards, as these tasks may receive higher advantage. It is also a non-trivial task to normalize the number of tasks than the MultiTasker. We expect PLAiD would further outperform the MultiTasker if the tasks were more difficult and the reward functions dissimilar. ', 'paragraph_idx': 38, 'before_section': None, 'context_before': '6 DISCUSSION MultiTasker vs PLAiD: The MultiTasker may be able to produce a policy that has higher overall ', 'modified_lines': 'average reward, but in practise constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks, the MultiTasker can favour a task with higher reward functions for each task in order to combine them. The MultiTasker may also favour tasks that are easier than other tasks in general. We have shown that the PLAiD scales better with respect to the ', 'original_lines': 'average reward, but in practice constraints can keep the method from combining skills gracefully. If the reward functions are different between tasks, the MultiTasker can favor this task with higher reward functions for each task in order to combine them. The MultiTasker may also favor tasks that are easier than other tasks in general. We have shown that the distiller scales better with respect to the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 DISCUSSION', 'after_section': None, 'context_after': 'Transfer Learning: Because we are using an actor-critic learning method, we also studied the possibility of using the value functions for TL. We did not discover any empirical evidence that this assisted the learning process. When transferring to a new task, the state distribution has changed and the reward function may be completely different. This makes it unlikely that the value function will be accurate on this new task. In addition, value functions are in general easier and faster to learn than policies, implying that value function reuse is less important to transfer. We also find that 6.1 LIMITATIONS: ', 'paragraph_idx': 43, 'before_section': '6 DISCUSSION', 'context_before': 'are needed to learn new tasks then the MultiTasker would fall far behind. Distillation is also very efficient with respect to the number of simulation steps needed. Data could be collected from the simulator in groups and learned from in many batches before more data is needed as is common for ', 'modified_lines': 'behavioural cloning. We expect another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. 8 Under review as a conference paper at ICLR 2018 helpfulness of TL depends on not only the task difficulty but the reward function as well. Two tasks may overlap in state space but the area they overlap could be easily reachable. In this case TL may not give significant benefit because the overall RL problem is easy. 
The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. ', 'original_lines': 'behavioral cloning. We expect another reason distillation benefits learning multiple tasks is that the integration process assists in pulling policies out of the local minima RL is prone to. 7 0100000200000300000400000500000600000Iterations05001000150020002500RewardGapsMultiTaskerScratchTransferDistill Under review as a conference paper at ICLR 2018 (a) MultiTasker on 3 tasks (b) MultiTasker on 4 tasks (c) MultiTasker on 5 tasks (d) PLAiD on 3 tasks (e) PLAiD on 4 tasks (f) PLAiD on 5 tasks Figure 3: These figures show the average reward a particular policy achieves over a number of tasks. After learning an expert for flat + incline a new steps task is trained. Figure (a) shows the performance for the MultiTasker and figure (c) for the distiller. The distiller learns the combined tasks fast, however the MultiTasker achieves marginally better average reward over the tasks Figures (b,e) show the performance of an expert on flat + incline + steps trained to learn the new task slopes for the MultiTasker and distiller. Last the MultiTasker (c) and PLAiD (f) are trained on gaps. helpfulness of TL depends on not only the task difficulty task but the reward function as well. Two tasks may overlap in state space but the area they overlap could be easily reachable. In this case TL may not give significant benefit because the overall RL problem is easy. The greatest benefit is gained from TL when the state space that overlaps for two tasks is difficult to reach and in that difficult to reach area is where the highest rewards are achieved. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 43}, {'section': '6.1 LIMITATIONS:', 'after_section': '6.1 LIMITATIONS:', 'context_after': 'How best to chose which data should be trained on next to best preserve the behaviour of experts is a general problem with multi-task learning. Distillation treats all tasks equally independent of their reward. This can result in very low value tasks, receiving potentially more distribution than desired and high value tasks receiving not enough. We have not needed the use a one-hot vector to indicate 6.2 FUTURE WORK: ', 'paragraph_idx': 47, 'before_section': '6.1 LIMITATIONS:', 'context_before': 'the particular task it was learning. Making it more challenging for the policy to learning a new task, resulting in negative transfer. After learning many new tasks the previous tasks may not receive a large enough potion of the distillation training process to preserve the experts skill well enough. ', 'modified_lines': 'what task the agent is performing. We want the agent to be able to recognize which task it is given but we do realize that some tasks could be too similar to differentiate, such as, walking vs jogging on flat ground. ', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 what task the agent is performing. We want the agent to be able to recognize which task it is being given but we do realize that some tasks could be too similar to differentiate, such as, walking vs jogging on flat ground. ', 'after_paragraph_idx': 47, 'before_paragraph_idx': 47}, {'section': 'Abstract', 'after_section': None, 'context_after': 'James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. 
Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hass- abis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Silver. Learning and transfer of modulated locomotor controllers. CoRR, abs/1610.05182, 2016. URL http://arxiv.org/abs/1610.05182. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High- dimensional continuous control using generalized advantage estimation. In International Con- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'J¨urgen Schmidhuber. POWERPLAY: training an increasingly general problem solver by continually searching for the simplest still unsolvable problem. CoRR, abs/1112.5309, 2011. URL http: //arxiv.org/abs/1112.5309. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '8.2.1 DISTILLATION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'For the evaluation of each model on a particular task we use the average reward achieved by the agent over at most 100 seconds of simulation time. We average this over running the agent over a number of randomly generated simulation runs. ', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '8.5 AGENT DESIGN ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(c) gaps Figure 5: Transfer learning only baselines for each of the new tasks. ', 'modified_lines': '', 'original_lines': ' Tasks PLAiD TL-Only (before Distill) TL-Only (after Distill) flat −0.090 −0.065 0.070 incline −0.039 −0.044 −0.250 steps −0.181 −0.235 −0.359 slopes −0.065 −0.242 −0.304 average −0.094 −0.147 −0.211 Table 1: Final TL-Only baselines compared to PLAiD policies. These values are relative percentage changes in the average reward, where a value of 0 is no forgetting and a value of −1 corresponds to completely forgetting how to perform the task. Here, the final policy after training on gaps compared to the original polices produced at the end of training for the task noted in the column heading. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-20 01:01:58
|
ICLR.cc/2018/Conference
|
r1kGBfeHM
|
HkjnjjyR-
|
[]
|
2018-01-25 15:42:27
|
ICLR.cc/2018/Conference
|
HkjnjjyR-
|
BJ1svLFLf
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'learning, forgetting, and scalability are all in play for both human motor control models and the learning curricula proposed in reinforcement learning. While the development of hierarchical mod- els for skills offers one particular solution that supports scalability and that avoids problems related to forgetting, we eschew this approach in this work and instead investigate a progressive approach to integration into a control policy defined by a single deep network. Distillation refers to the problem of combining the policies of one or more experts in order to create one single controller that can perform the tasks of a set of experts. It can be cast as a supervised ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'Understanding the time course of sensorimotor learning in human motor control is an open research problem (Wolpert & Flanagan, 2016) that exists concurrently with recent advances in deep rein- forcement learning. Issues of generalization, context-dependent recall, transfer or ”savings” in fast ', 'modified_lines': ' * These authors contributed equally to this work. 1 Published as a conference paper at ICLR 2018 ', 'original_lines': ' 1 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 Hierarchical RL further uses modularity to achieve transfer learning for robotic tasks (Tessler et al., 2016) This allows for the substitution of network modules for different robot types over a ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'make use of them as building block. Transfer Learning Transfer learning exploits the structure learned from a previous task in learn- ', 'modified_lines': 'ing a new task. Our focus here is on transfer learning in environments consisting of continuous control tasks. The concept of appending additional network structure while keeping the previous structure to reduce catastrophic forgetting has worked well on Atari games (Rusu et al., 2015; Parisotto et al., 2015; Rusu et al., 2016; Chen et al., 2015) Other methods reproduce data from all tasks to reduce the possibility of forgetting how to perform previously learned skills e.g, (Shin et al., 2017; Li & Hoiem, 2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters (Kirkpatrick et al., 2017). A different approach to combin- ing policies is to use a hierarchical structure (Tessler et al., 2016). In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta-learning to achieve control policies that can quickly adapt their behaviour according to current rewards (Finn et al., 2017). This work is demonstrated on parameterized task domains. The Powerplay method provides a general framework for training an increasingly general problem solver (Schmidhuber, 2011; Srivastava et al., 2012). It is based on iteratively: inventing a new task using play or inven- tion; solving this task; and, lastly, demonstrating the ability to solve all the previous tasks. 
The last Published as a conference paper at ICLR 2018 two stages are broadly similar to our PLAID approach, although to the best of our knowledge, there are no experiments on motor control tasks of comparable complexity to the ones we tackle. In our work, we develop a specific progressive learning-and-distillation methodology for motor skills, and provide a detailed evaluation as compared to three other plausible baselines. We are specifically interested in understanding issues that arise from the interplay between transfer from related tasks and the forgetting that may occur. ', 'original_lines': 'ing a new task. Our focus here is on transfer learning in environments consisting of continuous con- trol tasks. The concept of appending additional network structure while keeping the previous struc- ture to reduce catastrophic forgetting has worked well on Atari games (Rusu et al., 2015; Parisotto et al., 2015; Rusu et al., 2016; Chen et al., 2015) Other methods reproduce data from all tasks to re- duce the possibility of forgetting how to perform previously learned skills e.g, (Shin et al., 2017; Li & Hoiem, 2016). Recent work seeks to mitigate this issue using selective learning rates for specific network parameters (Kirkpatrick et al., 2017). A different approach to combining policies is to use a hierarchical structure (Tessler et al., 2016). In this setting, previously-learned policies are available as options to execute for a policy trained on a new task. However, this approach assumes that the new tasks will be at least a partial composition of previous tasks, and there is no reintegration of newly learned tasks. A recent promising approach has been to apply meta-learning to achieve con- trol policies that can quickly adapt their behaviour according to current rewards (Finn et al., 2017). This work is demonstrated on parameterized task domains. The Powerplay method provides a gen- eral framework for training an increasingly general problem solver Schmidhuber (2011); Srivastava et al. (2012). It is based on iteratively: inventing a new task using play or invention; solving this task; and, lastly, demonstrating the ability to solve all the previous tasks. The last two stages are broadly similar to our PLAID approach, although to the best of our knowledge, there are no experiments on motor control tasks of comparable complexity to the ones we tackle. In our work, we develop a specific progressive learning-and-distillation methodology for motor skills, and provide a detailed evaluation as compared to three other plausible baselines. We are specifically interested in under- Under review as a conference paper at ICLR 2018 standing issues that arise from the interplay between transfer from related tasks and the forgetting that may occur. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 REINFORCEMENT LEARNING', 'after_section': '3.1 REINFORCEMENT LEARNING', 'context_after': 'where dθ = (cid:82) t=0 γtp0(s0)(s0 → s | t, π0) ds0 is the discounted state distribution, p0(s) rep- resents the initial state distribution, and p0(s0)(s0 → s | t, π0) models the likelihood of reaching state s by starting at state s0 and following the policy π(a, s|θπ) for T steps (Silver et al., 2014). Aπ(s, a) represents an advantage function (Schulman et al., 2016). 
In this work, we use the Positive Temporal Difference (PTD) update proposed by (Van Hasselt, 2012) for Aπ(s, a): Aπ(st, at) = I [δt > 0] = ', 'paragraph_idx': 15, 'before_section': '3.1 REINFORCEMENT LEARNING', 'context_before': '(4) ', 'modified_lines': ' (cid:80)T S 3 Published as a conference paper at ICLR 2018 ', 'original_lines': 'S (cid:80)T 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '6 DISCUSSION', 'after_section': None, 'context_after': 'then combine the resulting policies Figure 1b. We found that learning many skills from scratch was challenging, we were only able to get fair results for the flat task. Also, when a new task is to be ure 1c, possibly ending with a distillation step to combine the learned policies to decrease forgetting. For more details see Appendix: 8.4. The last version (PLAiD) learns each task sequentially using TL from the previous, most skilled policy, in the end resulting in a policy capable of solving all ', 'paragraph_idx': 44, 'before_section': None, 'context_before': 'where a number of skills are learned at the same time. It has been shown that learning many tasks together can be faster than learning each task separately (Parisotto et al., 2015). The curriculum for using this method is shown in Figure 1a were during a single RL simulation all tasks are learned ', 'modified_lines': 'together. It is also possible to randomly initialize controllers and train in parallel (Parallel) and learned with the Parallel model it would occur outside of the original parallel learning, leading to a more sequential method. A TL-Only method that uses TL while learning tasks in a sequence Fig- ', 'original_lines': 'together. It is also possible to randomly initialize controllers and train in parallel (Parallel) and learned with the Parallel model it would occur outside of the original parallel learning, leading to a more sequential method. A TL-Only method that uses TL while learning tasks in a sequence Fig- ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.3 DISTILLING MULTIPLE POLICIES', 'after_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_after': 'not scale with respect to the number of tasks. When training the MultiTasker over two or even three tasks (Figure 3a) the method displays good results, however when learning a fourth or more tasks the method struggles, as shown in Figure 3b and 3b. Part of the reason for this struggle is when new 6 (a) incline ', 'paragraph_idx': 38, 'before_section': None, 'context_before': '5.3 DISTILLING MULTIPLE POLICIES ', 'modified_lines': 'Training over multiple tasks at the same time may help the agent learn skills quicker, but this may 1https : //youtu.be/DjHbHCXGk0 Published as a conference paper at ICLR 2018 ', 'original_lines': 'Training over multiple tasks at the same time ma y help the agent learn skills quicker, but this may 1https : //www.dropbox.com/s/kbb4145yd1s9s3p/P rogresiveLearning.mp4?dl = 0 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 38, 'before_paragraph_idx': None}, {'section': '5.3 DISTILLING MULTIPLE POLICIES', 'after_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_after': 'In Figure 7 this generalization is shown as the agent navigates across the mixed environment. 
This is also reflected in Table 1, that shows the final average reward when comparing methods ', 'paragraph_idx': 38, 'before_section': '5.3 DISTILLING MULTIPLE POLICIES', 'context_before': 'to the number of skills being integrated. This is likely because distillation is a semi-supervised method which is more stable than the un-supervised RL solution. This can be seen in Figure 3d, 3e and especially in 3f where PLAiD combines the skills faster and can find higher value policies ', 'modified_lines': 'in practice. PLAiD also presents zero-shot training on tasks which it has never been trained on. ', 'original_lines': 'in practice. PLAiD also presents zero-shot training on tasks which it has never been trained on . ', 'after_paragraph_idx': 38, 'before_paragraph_idx': 38}, {'section': '6.1 LIMITATIONS:', 'after_section': '6.1 LIMITATIONS:', 'context_after': 'a large enough potion of the distillation training process to preserve the experts skill well enough. How best to chose which data should be trained on next to best preserve the behaviour of experts is a general problem with multi-task learning. Distillation treats all tasks equally independent of their ', 'paragraph_idx': 48, 'before_section': '6.1 LIMITATIONS:', 'context_before': 'Once integrated, the skills for our locomotion tasks are self-selecting based on their context, i.e., the knowledge of the upcoming terrain. It may be that other augmentation and distillation strategies ', 'modified_lines': 'are better for situations where either the reward functions are different or a one-hot vector is used to select the currently active expert. In our transfer learning results we could be over fitting the initial expert for the particular task it was learning. Making it more challenging for the policy to learn a new task, resulting in negative transfer. After learning many new tasks the previous tasks may not receive ', 'original_lines': 'are better for situations where the current reward function or a one-hot vector is used to select the currently active expert. In our transfer learning results we could be over fitting the initial expert for the particular task it was learning. Making it more challenging for the policy to learning a new task, resulting in negative transfer. After learning many new tasks the previous tasks may not receive ', 'after_paragraph_idx': 48, 'before_paragraph_idx': 48}, {'section': 'Abstract', 'after_section': None, 'context_after': '(a) flat (b) incline ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '2.0 m to 2.5 m. The mixed terrain is a combination of the above terrains where a portion is randomly chosen from the above terrain types. ', 'modified_lines': '', 'original_lines': '13 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-08 04:52:39
|
ICLR.cc/2018/Conference
|
HJKAUhATW
|
rk-KU2eC-
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Occlusion sensitivity proposed by Zeiler & Fergus (2013) is another method which covers parts of the image with a grey box, mapping the resultant change in prediction. This produces a heatmap where features important to the final prediction are highlighted as they are occluded. Another well- known method of generating visual heatmaps is global average pooling. Using fully convolutional A novel analysis method by Ribeiro et al. (2016) known as locally interpretable model-agnostic explanations (LIME) attempts to explain individual predictions by simulating model predictions in LIME’s solution to this is to use superpixel based algorithms to oversegment images, and to perturb the image by replacing each superpixel by its average value, or a fixed pre-determined value. While this produces more plausible looking images as opposed to occlusion or changing individual pixels, current research. 2 METHODS Our method comprises of three main steps — we first use generative adversarial networks to train a generator on an unlabelled dataset. Secondly, we use the trained generator as the decoder section producing high resolution images. Lastly, we train simple supervised classifiers on the encoded representations of a smaller, labelled dataset. We optimize over the latent space surrounding each encoded instance with the objective of changing the instance’s predicted class while penalizing ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'relies on the end user being able to understand and trust the algorithm, as incorrect implementation and errors may have significant consequences. Hence, there has recently been much interest in inter- pretability in machine learning as this is a key aspect of implementing machine learning algorithms ', 'modified_lines': 'in practice. We propose a novel method of creating visual rationales to help explain individual predictions and explore a specific application to classifying chest radiographs. There are several well-known techniques in the literature for generating visual heatmaps. Gradient based methods were first proposed in 2013 described as a saliency map in Simonyan et al. (2013), where the derivative of the final class predictions is computed with respect to the input pixels, gen- erating a map of which pixels are considered important. However, these saliency maps are often unintelligible as convolutional neural networks tend to be sensitive to almost imperceptible changes in pixel intensities, as demonstrated by recent work in adversarial examples. In fact, obtaining the saliency map is often the first step in generating adversarial examples as in Goodfellow et al. (2014). Other recent developments in gradient based methods such as Integrated Gradients from Sundarara- jan et al. (2017) have introduced fundamental axioms, including the idea of sensitivity which helps focus gradients on relevant features. neural networks with a global average pooling layer as described in Zhou et al. (2015), we can 1 Under review as a conference paper at ICLR 2018 examine the class activation map for the final convolutional output prior to pooling, providing a low resolution heatmap for activations pertinent to that class. the local neighbourhood around this example. Gradient based methods and occlusion sensitivity can also be viewed in this light — attempting to explain each classification by changing individual input pixels or occluding square areas. 
However, sampling the neighbourhood surrounding an example in raw feature space can often be tricky, especially for image data. Image data is extremely complex and high-dimensional — hence real examples are sparsely distributed in pixel space. Sampling randomly in all directions around pixel space is likely to produce non-realistic images. it is still sensitive to the parameters and the type of oversegmentation used — as features larger than a superpixel and differences in global statistics may not be represented in the set of perturbed images. This difficulty in producing high resolution visual rationales using existing techniques motivates our We introduce a novel method utilizing recent developments in generative adversarial networks (GANs) to generate high resolution visual rationales. We demonstrate the use of this method on a large dataset of frontal chest radiographs by training a classifier to recognize heart failure on chest radiographs, a common task for doctors. of an autoencoder. This enables us to encode and decode, to and from the latent space while still ', 'original_lines': 'in practice. We propose a novel model-specific method of creating visual rationales to help explain individual predictions and explore a specific application with chest radiographs. There are several well-known techniques in the literature for generating visual heatmaps.One method proposed in 2013 described as a saliency map in Simonyan et al. (2013), where the derivative of the input pixels is computed with respect to the final class prediction, generating a map of which pixels are considered important. However, these saliency maps are often noisy as convolutional neural networks tend to be sensitive to almost imperceptible changes in pixel intensities, as demonstrated by recent work in adversarial examples. In fact, obtaining the saliency map is often the first step in generating adversarial examples as in Goodfellow et al. (2014) neural networks with a global average pooling layer as the model as described in Zhou et al. (2015), we can examine the class activation map for the final convolutional output prior to pooling, providing a low resolution heatmap for activations pertinent to that class. the local neighbourhood around this example. Saliency maps and occlusion sensitivity can also be viewed in this light – attempting to explain each example by perturbing each image by changing individual pixels or occluding squares respectively. 1 Under review as a conference paper at ICLR 2018 However, sampling from feature space surrounding the example can often be tricky, especially for image data. Image data is extremely high-dimensional and hence real examples are sparsely dis- tributed in feature space. Sampling randomly in all directions around feature space is likely to produce non-realistic images. it is still sensitive to the parameters and the type of oversegmentation used – as features larger than each superpixel or even global statistics may not be represented in the set of perturbed images. This difficulty in producing high resolution visual rationales using existing techniques instigates our We introduce a novel method utilizing recent developments in generative adversarial networks to generate high resolution visual rationales. We demonstrate the use of this method on a large dataset of frontal chest radiographs by training a classifier to recognize heart failure on chest radiographs, a common task for doctors. of an autoencoder. 
This enables us to encode and decode to and from the latent space while still ', 'after_paragraph_idx': None, 'before_paragraph_idx': 3}, {'section': '2 METHODS', 'after_section': '2 METHODS', 'context_after': 'visually acceptable generated images were produced. ADAM was used as the optimizer with the generator and discriminator learning rates both set to 5 x 10-5. In the next step, we use the trained generator as the decoder for the autoencoder. We fix the weights of the decoder during training and train our autoencoder to reproduce each of the images from the unlabelled dataset. The unlabelled dataset was split by patient in a 15 to 1 ratio into a training Bojanowski et al. (2017). Minimal overfitting was observed during the training process even when We then train a classifier on a smaller labelled dataset consisting of 7,391 chest radiograph images paired with a B-type natriuretic peptide (BNP) blood test that is correlated with heart failure. This test is measured in nanograms per litre, and higher readings indicate heart failure. We perform a natural logarithm on the actual value and divide the resultant number by 10 to scale these readings to between 0 and 1. We augment each labelled image and encode it into the latent space using our previously trained autoencoder. To prevent contamination, we separate our images by patient into a ', 'paragraph_idx': 11, 'before_section': '2 METHODS', 'context_before': 'visualize what that instance would appear as if it belonged in a different class. Firstly, we use the Wasserstein GAN formulation by Arjovsky et al. (2017) and find that the addition ', 'modified_lines': 'of the gradient penalty term helps to stabilize training as introduced by Gulrajani et al. (2017). Our unlabelled dataset comprises of a set of 98,900 chest radiograph images, which are scaled to 128 by 128 pixels while maintaining their original aspect ratio through letterboxing, and then randomly translated by up to 8 pixels. We use a 100 dimensional latent space. Our discriminator and generator both use the DCGAN architecture while excluding the batch normalization layers and using Scaled Exponential Linear Units described in Klambauer et al. (2017) as activations except for the final layer of the generator which utilized a Tanh layer. We train the critic for 4 steps for each generator training step. The GAN training process was run for 200k generator iterations before and validation set. We minimize the Laplacian loss between the input and the output, inspired by the autoencoder was trained for over 1000 epochs, the reconstruction loss on the validation set was similar to the reconstruction loss on the validation set. 2 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'of the gradient penalty term helps to stabilize training as introduced by Gulrajani et al. (2017). Our unlabelled dataset comprises of a set of 98,900 chest radiograph images, which are scaled to 128 by 128 pixels while maintaining their original aspect ratio through letterboxing, and then randomly translated by up to 8 pixels. We use a 100 dimensional latent space. Our discriminator and generator both use the DCGAN architecture while excluding the batch normalization layers and using Scaled Exponential Linear Units described in Klambauer et al. (2017) as non-linear activations except for the final layer of the generator which utilized a Tanh layer. We train the critic for 4 steps for each generator training step. 
The GAN training process was run for 200k generator iterations before and testing set. We minimize the Laplacian loss between the input and the output, inspired by the autoencoder was trained for 1000 epochs. ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 10}, {'section': '2 METHODS', 'after_section': '2 METHODS', 'context_after': 'and a linearly weighted mean squared error term between the decoded latent representation and the decoded original representation. We cap the maximum number of iterations at 5000 and set our learning rate at 0.1. We stop the iteration process early if the cutoff value for that class is achieved. The full algorithm is described in Algorithm 1. This generates a latent representation with a different prediction from the initial representation. The difference between the decoded generated representa- tion and the decoded original representation is scaled and overlaid over the original image to create Algorithm 1 Visual rationale generation Require: α, learning rate ', 'paragraph_idx': 14, 'before_section': '2 METHODS', 'context_before': 'To obtain image specific rationales, we optimize over the latent space starting with the latent rep- resentation of the given example. We fix the weights of the entire model and apply the ADAM optimizer on a composite objective comprising of the output value of the original predicted class ', 'modified_lines': 'the visual rationale for that image. We use gradient descent to optimize the following objective: z target = arg min z L target (z) + α(cid:107)X − G(z)(cid:107)2 X target = G (z target) (1) (2) Where X is the reconstructed input image (having been passed through the autoencoder); X target and z target are the output image and its latent representation. G is our trained generator neural network. α is a coefficient that trades-off the classification and reconstruction objectives. L target is a target objective which can be a negative class probability or in the case of heart failure, predicted BNP level. The critical difference between our objective and the one used for adversarial example generation is that optimization is performed in the latent space, not the image space. ', 'original_lines': ' 2 Under review as a conference paper at ICLR 2018 the visual rationale for that image. ', 'after_paragraph_idx': 14, 'before_paragraph_idx': 14}, {'section': '2 METHODS', 'after_section': '2 METHODS', 'context_after': 'To evaluate the usefulness of the generated visual rationales, we conduct an experiment where we compare visual rationales generated by a classifier to one which is contaminated. We train the classifier directly on the testing examples and over train until almost perfect accuracy on this set is hence will not be able to produce useful rationales. 3 ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'y ← h(z) z ← z + α ∗ ADAM (z, y + γd) ', 'modified_lines': 'We also apply our method to external datasets and demostrate good cross-dataset generalization, in particular the National Institutes of Health (NIH) ChestX-ray8 dataset recently released by Wang et al. (2017) We downsize the provided images to work with our autoencoder and split this by patient into a training, validation and testing set in the 7:1:2 ratio used by the dataset’s authors. We encode these images into the latent space and apply a 6 layer fully connected neural network with 100 nodes in each layer utilizing residual connections. This architecture is fully described in figure 1. achieved. 
We reason that the contaminated classifier will simply memorize the testing examples and ', 'original_lines': 'We also apply our method to external datasets, in particular the National Institutes of Health (NIH) chest radiograph dataset recently released by Wang et al. (2017) We downsize the provided images to work with our autoencoder and split this by patient into a training, validation and testing set in the 7:1:2 ratio used by the dataset’s authors. We encode these images into the latent space and apply a 6 layer fully connected neural network with 100 nodes in each layer utilizing residual connections. This architecture is fully described in figure 1. Figure 1: Classifier used for chest radiograph 8 dataset achieved. We reason that the contaminated classifier will simply memorise the testing examples and ', 'after_paragraph_idx': 16, 'before_paragraph_idx': None}, {'section': '2 METHODS', 'after_section': None, 'context_after': 'We also apply our method to the well known MNIST dataset and apply a linear classifier with a 10 we have chosen to transform the digit 9 to the digit 4 as these bear physical resemblance. We alter our optimization objective by adding a negatively weighted term for the predicted probability of the target class as described in Algorithm 2. ', 'paragraph_idx': 18, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'Figure 1: Classifier used for ChestX-ray8 dataset way softmax. In order to generate our visual rationales we select an initial class and a target class — ', 'original_lines': 'way softmax. In order to generate our visual rationales we select an initial class and a target class - ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RESULTS', 'after_section': None, 'context_after': 'Figure 2: Left to right: Original images, reconstructed images Figure 3: ROC plot for BNP prediction ', 'paragraph_idx': 20, 'before_section': None, 'context_before': '3 RESULTS ', 'modified_lines': 'To illustrate the fidelity of our autoencoder we reconstruct each image in a smaller labelled set which has not been seen during training. The reconstructed images are show in Fig. 2. These images are obtained by simply encoding the input image into the latent representation and subsequently decoding this representation again. 4 Under review as a conference paper at ICLR 2018 In the heart failure classification task, we threshold the known BNP values at 100ng/L to get bi- nary labels as suggested by Lokuge et al. (2009). Our semi-supervised model achieves an AUC of 0.837 using a linear regressor as our final classifier with an AUC curve as shown in Fig 3. This is comparable to the AUC obtained by a multilayer perceptron. ', 'original_lines': 'To illustrate the fidelity of our autoencoder we demonstrate in Fig 2. the autoencoder on our smaller labelled set which it has not seen during training. These images are obtained by simply encoding the input image into the latent representation and subsequently decoding this representation again. With a cut-off of 100ng/L for heart failure, as suggested by Lokuge et al. (2009), we achieve an AUC of 0.837 using a linear regressor as our final classifier with an AUC curve as shown in Fig 3. This is comparable to the AUC obtained by a multilayer perceptron. 
4 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RESULTS', 'after_section': '3 RESULTS', 'context_after': 'We apply our classifier as described above to the chest radiograph dataset released by the NIH recently and achieve results similar to or exceeding that of the baseline results reported in the original ', 'paragraph_idx': 25, 'before_section': None, 'context_before': 'reconstructed image and superimpose this as a heatmap on the original image to demonstrate the visual rationale for this prediction. ', 'modified_lines': 'For the same image, we apply the saliency map method, integrated gradients, the occlusion sensitiv- ity method with a window size of 8, as well as LIME to obtain Fig. 5 for comparison. ', 'original_lines': 'For the same image, we apply the saliency map method, the occlusion sensitivity method with a window size of 8, as well as LIME to obtain Fig. 5 for comparison. ', 'after_paragraph_idx': 26, 'before_paragraph_idx': None}, {'section': '3 RESULTS', 'after_section': None, 'context_after': 'We apply our method to the MNIST dataset and demonstrate class switching between digits 9 and 4. Figure 7. demonstrates the visual rationales for why each digit has been classified as a 9 rather than a 4, as well as the transformed versions of each digit. As expected, the top horizontal line in the digit 9 is removed to make each digit appear as a 4. Interestingly, the algorithm failed to convert several digits into a 4 and instead converts them into other digits which are presumably more similar to that instance, despite the addition of the weighted term encouraging the latent representation to prefer the target class. This behaviour is not noted in our chest radiograph dataset as we are able to convert every image from the predicted class to the converse. ', 'paragraph_idx': 29, 'before_section': None, 'context_before': '0.6333 0.7891 ', 'modified_lines': 'Table 1: Comparison AUC results for ChestX-ray8 dataset 6 Under review as a conference paper at ICLR 2018 Figure 5: Top left to bottom right: Saliency map, saliency map overlaid on original image, heatmap generated via occlusion sensitivity method, Integrated gradients, integrated gradients overlaid on original image, LIME output Figure 6: ROC curves for Chest X-Ray8 dataset ', 'original_lines': 'Table 1: Comparison AUC results for Chest radiograph 8 dataset ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'We also demonstrate the generation of rationales with the MNIST dataset where the digit 9 is trans- formed into 4 while retaining the appearance of the original digit. We can see that the transformation ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'For chest radiographs, common signs of heart failure are an enlarged heart or congested lung fields, which appear as increased opacities in the parts of the image corresponding to the lungs. The rationales generated by the normally trained classifier in Fig 9 appear to be consistent with features ', 'modified_lines': 'described in the medical literature while the contaminated classifier is unable to generate these rationales. ', 'original_lines': 'described in the medical literature while the contaminated classifier is unable to generate any of these rationales. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RESULTS', 'after_section': None, 'context_after': 'There are obvious limitations in this paper in that we do not have a rigorous definition of what Future work could focus on the measurement of interpretability by judging how much data a second model requires when learning from the predictions and interpretations provided by another pre- ', 'paragraph_idx': 27, 'before_section': None, 'context_before': 'small gradients in the target class prediction, preventing gradient descent from achieving the target class. ', 'modified_lines': 'We compare the visual rationale generated by our method to various other methods including inte- grated gradients, saliency maps, occlusion sensitivity as well as LIME in Fig. 5. All of these methods share similarities in that they attempt to perturb the original image to examine the impact of changes in the image on the final prediction, thereby identifying the most salient elements. In the saliency map approach, each individual pixel is perturbed, while in the occlusion sensitivity method, squares of the image are perturbed. LIME changes individual superpixels in an image by changing all the pixels in a given superpixel to the average value. This approach fails on images where the superpixel classification is too coarse, or where the classification is not dependent on high resolution details within the superpixel. To paraphrase Sundararajan et al. (2017), attribution or explanation for humans relies upon counterfactual intuition — or altering the image to remove the cause of the predicted outcome. Model agnostic methods such as gradient based methods, while fulfilling the sensitivity and implementation invariance axioms, do not acknowledge the natural structure of the inputs. For instance, this often leads to noisy pixel-wise attribution as seen in Fig. 5. This does not fit well with our human intuition as for many images, large continuous 9 Under review as a conference paper at ICLR 2018 objects dominate our perception and we often do not expect attributions to differ drastically between neighbouring pixels. Fundamentally these other approaches suffer from their inability to perturb the image in a realistic fashion, whereas our approach perturbs the image’s latent representation, enabling each perturbed image to look realistic as enforced by the GAN’s constraints. Under the manifold hypothesis, natural images lie on a low dimensional manifold embedded in pixel space. Our learned latent space serves as a approximate but useful coordinate system for the manifold of natural images. More specifically the image (pardon the pun) of the generator G[Rd] is approximately the set of ‘natural images’ (in this case radiographs) and small displacements in latent space around a point z closely map into the tangent space of natural images around G(z). Performing optimization in latent space is implicitly constraining the solutions to lie on the manifold of natural images, which is why our output images remain realistic while being modified under almost the same objective used for adversarial image generation. Hence, our method differs from these previously described methods as it generates high resolution rationales by switching the predicted class of an input image while observing the constraints of the input structure. 
This can be targeted at particular classes, enabling us answer the question posed to our trained model — ’Why does this image represent Class A rather than Class B?’ interpretability entails, as pointed out by Sundararajan et al. (2017). An intuitive understanding of the meaning of interpretability can be obtained from its colloquial usage — as when a teacher attempts to teach by example, an interpretation or explanation for each image helps the student to learn faster and generalize broadly without needing specific examples. ', 'original_lines': '8 Under review as a conference paper at ICLR 2018 We compare the visual rationale generated by our method to various other methods including saliency maps, occlusion sensitivity as well as LIME in Fig. 5. Our method differs from these previously described methods as it generates high resolution rationales that can be targeted at partic- ular classes, meaning that our solution provides more visually meaningful results. This enables us answer the question posed to a trained model - ‘Why does this image represent Class A rather than Class B’. As pointed out in our introduction, all of these methods share similarities in that they attempt to perturb the original image to examine the impact of changes in the image on the final prediction, thereby picking the most salient features. In the saliency map approach, each individual pixel is per- turbed, while in the occlusion sensitivity method, squares of the image are perturbed. LIME changes individual superpixels in an image by changing all the pixels in a given superpixel to the average value. This approach fails on images where the superpixel classification is too coarse, or where the classification is not dependent on high resolution details within the superpixel. Fundamentally these other approaches suffer from their inability to perturb the image in a realistic fashion, whereas our approach perturbs the image’s latent representation, enabling each perturbed image to look realistic as enforced by the GAN’s constraints. interpretability entails. An intuitive understanding of the meaning of interpretability can be obtained from its colloquial usage - as when a teacher attempts to teach by example, an interpretation or explanation for each image helps the student to learn faster and generalize broadly without needing specific examples. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 METHODS', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 13, 'before_section': None, 'context_before': 'with GANs using manifold invariances. May 2017. ', 'modified_lines': '10 ', 'original_lines': '9 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 13:36:57
|
ICLR.cc/2018/Conference
|
rk-KU2eC-
|
ryLKU3l0Z
|
[]
|
2017-10-27 13:37:02
|
ICLR.cc/2018/Conference
|
ryLKU3l0Z
|
B159kp1Xz
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'the image with a grey box, mapping the resultant change in prediction. This produces a heatmap where features important to the final prediction are highlighted as they are occluded. Another well- known method of generating visual heatmaps is global average pooling. Using fully convolutional ', 'modified_lines': 'neural networks with a global average pooling layer as described in Zhou et al. (2016), we can ', 'original_lines': 'neural networks with a global average pooling layer as described in Zhou et al. (2015), we can ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 METHODS', 'after_section': '2 METHODS', 'context_after': 'We then train a classifier on a smaller labelled dataset consisting of 7,391 chest radiograph images paired with a B-type natriuretic peptide (BNP) blood test that is correlated with heart failure. This 2 Under review as a conference paper at ICLR 2018 To obtain image specific rationales, we optimize over the latent space starting with the latent rep- resentation of the given example. We fix the weights of the entire model and apply the ADAM ', 'paragraph_idx': 12, 'before_section': '2 METHODS', 'context_before': 'unlabelled dataset. The unlabelled dataset was split by patient in a 15 to 1 ratio into a training and validation set. We minimize the Laplacian loss between the input and the output, inspired by Bojanowski et al. (2017). Minimal overfitting was observed during the training process even when ', 'modified_lines': 'the autoencoder was trained for over 1000 epochs, as demonstrated in 2. test is measured in nanograms per litre, and higher readings indicate heart failure. This is a task of real-world medical interest as BNP test readings are not often available immediately and offered at all laboratories. Furthermore, the reading of chest radiograph images can be complex, as suggested by the widely varying levels of accuracy amongst doctors of different seniority levels reported by Kennedy et al. (2011). We perform a natural logarithm on the actual BNP value and divide the resultant number by 10 to scale these readings to between 0 and 1. This task can be viewed as either a regression or classification task, as a cut-off value is often chosen as a diagnostic threshold. In this paper, we train our network to predict the actual BNP value but evaluate its AUC at the threshold of 100ng/L. We choose AUC at this threshold as this is the cut-off suggested by Lokuge et al. (2009), and AUC is a widely used metric of comparison in the medical literature. We augment each labelled image and encode it into the latent space using our previously trained autoencoder. To prevent contamination, we separate our images by patient into a training and testing set with a ratio of 4 to 1 prior to augmentation and encoding. We demonstrate the success of simple classifiers upon this latent representation, including a 2 layer multilayer perceptron with 256 hidden units as well as a linear regressor. ', 'original_lines': 'the autoencoder was trained for over 1000 epochs, the reconstruction loss on the validation set was similar to the reconstruction loss on the validation set. test is measured in nanograms per litre, and higher readings indicate heart failure. We perform a natural logarithm on the actual value and divide the resultant number by 10 to scale these readings to between 0 and 1. 
We augment each labelled image and encode it into the latent space using our previously trained autoencoder. To prevent contamination, we separate our images by patient into a training and testing set with a ratio of 4 to 1 prior to augmentation and encoding. We demonstrate the success of simple classifiers upon this latent representation, including a 2 layer multilayer perceptron as well as a linear regressor. ', 'after_paragraph_idx': 13, 'before_paragraph_idx': 12}, {'section': '2 METHODS', 'after_section': '2 METHODS', 'context_after': 'X target = G (z target) ', 'paragraph_idx': 15, 'before_section': '2 METHODS', 'context_before': 'z ', 'modified_lines': 'L target (z) + γ(cid:107)X − G(z)(cid:107)2 ', 'original_lines': 'L target (z) + α(cid:107)X − G(z)(cid:107)2 ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '2 METHODS', 'after_section': '2 METHODS', 'context_after': 'Algorithm 1 Visual rationale generation Require: α, learning rate ', 'paragraph_idx': 15, 'before_section': '2 METHODS', 'context_before': '(2) ', 'modified_lines': 'Where X is the reconstructed input image (having been passed through the autoencoder); X target and z target are the output image and its latent representation. G is our trained generator neural network. γ is a coefficient that trades-off the classification and reconstruction objectives. L target is a target objective which can be a class probability or a regression target. The critical difference between our objective and the one used for adversarial example generation is that optimization is performed in the latent space, not the image space. ', 'original_lines': 'Where X is the reconstructed input image (having been passed through the autoencoder); X target and z target are the output image and its latent representation. G is our trained generator neural network. α is a coefficient that trades-off the classification and reconstruction objectives. L target is a target objective which can be a negative class probability or in the case of heart failure, predicted BNP level. The critical difference between our objective and the one used for adversarial example generation is that optimization is performed in the latent space, not the image space. ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '2 METHODS', 'after_section': '2 METHODS', 'context_after': 'To evaluate the usefulness of the generated visual rationales, we conduct an experiment where we compare visual rationales generated by a classifier to one which is contaminated. We train the classifier directly on the testing examples and over train until almost perfect accuracy on this set is achieved. We reason that the contaminated classifier will simply memorize the testing examples and hence will not be able to produce useful rationales. We also apply our method to the well known MNIST dataset and apply a linear classifier with a 10 way softmax. In order to generate our visual rationales we select an initial class and a target class — ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'y ← h(z) z ← z + α ∗ ADAM (z, y + γd) ', 'modified_lines': 'We also apply our method to external datasets and demostrate good cross-dataset generalization, in particular the National Institutes of Health (NIH) ChestX-ray8 dataset comprising of 108,948 frontal chest radiographs, recently released by Wang et al. (2017). 
We downsize the provided images to work with our autoencoder and split this by patient into a training, validation and testing set in the 7:1:2 ratio used by the dataset’s authors. We encode these images into the latent space and 3 Under review as a conference paper at ICLR 2018 apply a 6 layer fully connected neural network with 100 hidden units in each layer utilizing residual connections. We train this with a batch size of 2048. This architecture is fully described in figure 1. Figure 1: Classifier used for ChestX-ray8 dataset ', 'original_lines': 'We also apply our method to external datasets and demostrate good cross-dataset generalization, in particular the National Institutes of Health (NIH) ChestX-ray8 dataset recently released by Wang et al. (2017) We downsize the provided images to work with our autoencoder and split this by patient into a training, validation and testing set in the 7:1:2 ratio used by the dataset’s authors. We encode these images into the latent space and apply a 6 layer fully connected neural network with 100 nodes in each layer utilizing residual connections. This architecture is fully described in figure 1. 3 Under review as a conference paper at ICLR 2018 Figure 1: Classifier used for ChestX-ray8 dataset ', 'after_paragraph_idx': 17, 'before_paragraph_idx': None}, {'section': '3 RESULTS', 'after_section': None, 'context_after': '6 Under review as a conference paper at ICLR 2018 Lastly, we contaminate our heart failure classifier as described in the methods section and compare 7 Under review as a conference paper at ICLR 2018 4 DISCUSSION ', 'paragraph_idx': 26, 'before_section': None, 'context_before': 'Table 1: Comparison AUC results for ChestX-ray8 dataset ', 'modified_lines': 'We apply our method to the MNIST dataset and demonstrate class switching between digits from 9 to 4 and 3 to 2. Figure 8. demonstrates the visual rationales for why each digit has been classified as a 9 rather than a 4, as well as the transformed versions of each digit. As expected, the top hor- izontal line in the digit 9 is removed to make each digit appear as a 4. Interestingly, the algorithm failed to convert several digits into a 4 and instead converts them into other digits which are presum- ably more similar to that instance, despite the addition of the weighted term encouraging the latent representation to prefer the target class. This type of failure is observed more in digits that are less similar to each other, such as from converting from the digits 3 to 2, as simply removing the lower curve of the digit may not always result in a centered ”two” digit. This precludes the simple interpretation that we are able to attribute to the 9 to 4 task. This behaviour is not noted in our chest radiograph dataset as we are able to convert every image from the predicted class to the converse, which is presumably due to the smaller differences between chest X-rays with and without heart failure. Similarly, the time taken to generate a visual rationale depends on the confidence of the classifier in its prediction, as the algorithm runs until the input has been altered sufficiently or a maximum Figure 5: Top left: original image. Top right: reconstructed image. Bottom left: image visualized without heart failure. Bottom right: superimposed visual rationale on original image number of steps (in our case 500) have been completed. In the case of converting digit 9s to 4s - we were able to generate 1000 visual rationales in 1 minute and 58 seconds. 
We compare this with the occlusion sensitivity and saliency map method demonstrated in Fig. 9. visual rationales generated by the contaminated classifier with those generated previously. Fig 10. demonstrates images where both classifiers predict the presence of heart failure. The rationales from the contaminated classifier focus on small unique aspects of the image and largely do not correspond to our notion of what makes a chest radiograph more likely to represent heart failure, namely enlarged hearts and congested lung fields. To demonstrate this we present 100 images classified as having a BNP level of over 100ng/L to two expert reviewers, equally split between a contaminated or a normally trained classifier. Each image and the associated visual rationale was presented to the reviewers who were blinded to the origin of the classifier. Reviewers were tasked in selecting features from a provided list which they felt corresponded with the visual rationales. Each reviewer rated each image twice. Aggregated results from experts are presented in Table 2. This clearly shows that the contaminated classifier indeed produces less interpretable visual rationales. Figure 6: Comparison of other methods - top left to bottom right: Saliency map, saliency map over- laid on original image, heatmap generated via occlusion sensitivity method, Integrated gradients, integrated gradients overlaid on original image, LIME output Figure 7: ROC curves for Chest X-Ray8 dataset ', 'original_lines': 'We apply our method to the MNIST dataset and demonstrate class switching between digits 9 and 4. Figure 7. demonstrates the visual rationales for why each digit has been classified as a 9 rather than a 4, as well as the transformed versions of each digit. As expected, the top horizontal line in the digit 9 is removed to make each digit appear as a 4. Interestingly, the algorithm failed to convert several digits into a 4 and instead converts them into other digits which are presumably more similar Figure 5: Top left to bottom right: Saliency map, saliency map overlaid on original image, heatmap generated via occlusion sensitivity method, Integrated gradients, integrated gradients overlaid on original image, LIME output Figure 6: ROC curves for Chest X-Ray8 dataset to that instance, despite the addition of the weighted term encouraging the latent representation to prefer the target class. This behaviour is not noted in our chest radiograph dataset as we are able to convert every image from the predicted class to the converse. We compare this with the occlusion sensitivity and saliency map method demonstrated in Fig. 8. visual rationales generated by the contaminated classifier with those generated previously. Fig 9. demonstrates images where both classifiers predict the presence of heart failure. The rationales from the contaminated classifier focus on small unique aspects of the image and largely do not correspond to our notion of what makes a chest radiograph more likely to represent heart failure, namely enlarged hearts and congested lung fields. 
Figure 7: From left to right: original images with visual rationale overlaid, transformed digits Figure 8: From left to right: visual rationale generated by our method, saliency map, occlusion sensitivity ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 DISCUSSION', 'after_section': None, 'context_after': 'classifier examples of the rationales this method produces and inspect these manually to check if these are similar to our understanding of how to interpret these images. The ability to autoencode inputs is essential to our rationale generation although we have not explored in-depth in this paper the effect of different autoencoding algorithms (for instance variational autoencoders) upon the quality of the For chest radiographs, common signs of heart failure are an enlarged heart or congested lung fields, which appear as increased opacities in the parts of the image corresponding to the lungs. The described in the medical literature while the contaminated classifier is unable to generate these rationales. We also demonstrate the generation of rationales with the MNIST dataset where the digit 9 is trans- formed into 4 while retaining the appearance of the original digit. We can see that the transformation generally removes the upper horizontal line of the 9 to convert this into a 4. Interestingly, some dig- small gradients in the target class prediction, preventing gradient descent from achieving the target class. We compare the visual rationale generated by our method to various other methods including inte- All of these methods share similarities in that they attempt to perturb the original image to examine the impact of changes in the image on the final prediction, thereby identifying the most salient ', 'paragraph_idx': 42, 'before_section': '4 DISCUSSION', 'context_before': 'Our primary contribution in this paper however is not the inversion of the generator but rather the ability to generate useful visual rationales. For each prediction of the model we generate a corre- ', 'modified_lines': 'sponding visual rationale with a target class different to the original prediction. We display some 9 Under review as a conference paper at ICLR 2018 Figure 9: From left to right: visual rationale generated by our method, saliency map, occlusion sensitivity Figure 10: Left: Rationales from contaminated classifier. Right: rationales from normally trained generated rationales, as our initial experiments with variational and vanilla autoencoders were not able to reconstruct the level of detail required. rationales generated by the normally trained classifier in Fig 10 appear to be consistent with features its are not successfully converted. Even with different permutations of delta and gamma weights in Algorithm 2 some digits remain resistant to conversion. We hypothesize that this may be due to the relative difficulty of the chest radiograph dataset compared to MNIST — leading to the extreme confidence of the MNIST model that some digits are not the target class. This may cause vanishingly 10 Under review as a conference paper at ICLR 2018 grated gradients, saliency maps, occlusion sensitivity as well as LIME in Fig. 6. ', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 Figure 9: Left: Rationales from contaminated classifier. Right: rationales from normally trained sponding visual rationale with a target class different to the original prediction. We display some generated rationales. 
rationales generated by the normally trained classifier in Fig 9 appear to be consistent with features its are not successfully converted. Even with different permutations of delta and gamma weights in Algorithm 2 some digits remain resistant to conversion. We hypothesize that this may be due to the relative difficulty of the chest radiograph dataset compared to MNIST - leading to the extreme con- fidence of the MNIST model that some digits are not the target class. This may cause vanishingly grated gradients, saliency maps, occlusion sensitivity as well as LIME in Fig. 5. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 42}, {'section': '4 DISCUSSION', 'after_section': '4 DISCUSSION', 'context_after': 'objects dominate our perception and we often do not expect attributions to differ drastically between neighbouring pixels. ', 'paragraph_idx': 41, 'before_section': '4 DISCUSSION', 'context_before': 'to remove the cause of the predicted outcome. Model agnostic methods such as gradient based methods, while fulfilling the sensitivity and implementation invariance axioms, do not acknowledge the natural structure of the inputs. For instance, this often leads to noisy pixel-wise attribution as ', 'modified_lines': 'seen in Fig. 6. This does not fit well with our human intuition as for many images, large continuous ', 'original_lines': 'seen in Fig. 5. This does not fit well with our human intuition as for many images, large continuous 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 41, 'before_paragraph_idx': 41}, {'section': '2 METHODS', 'after_section': None, 'context_after': 'There are obvious limitations in this paper in that we do not have a rigorous definition of what interpretability entails, as pointed out by Sundararajan et al. (2017). An intuitive understanding ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'Hence, our method differs from these previously described methods as it generates high resolution rationales by switching the predicted class of an input image while observing the constraints of the input structure. This can be targeted at particular classes, enabling us answer the question posed to ', 'modified_lines': 'our trained model — ‘Why does this image represent Class A rather than Class B?’ ', 'original_lines': 'our trained model — ’Why does this image represent Class A rather than Class B?’ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 DISCUSSION', 'after_section': None, 'context_after': 'Other technical limitations include the difficulty of training a GAN capable of generating realistic images larger than 128 by 128 pixels. This limits the performance of subsequent classifiers in ', 'paragraph_idx': 44, 'before_section': '4 DISCUSSION', 'context_before': 'model requires when learning from the predictions and interpretations provided by another pre- trained model. Maximizing the interpretability of a model may be related to the ability of models to transfer information between each other, facilitating learning without resorting to the use of large ', 'modified_lines': 'scale datasets. Such an approach could help evaluate non-image based visual explanations such as sentences, as described in Hendricks et al. (2016). ', 'original_lines': 'scale datasets. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 44}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A. Lokuge, L. Lam, P. Cameron, H. Krum, de Villiers Smit, A. Bystrzycki, M. T. Naughton, D. Ec- cleston, G. 
Flannery, J. Federman, and et al. B-type natriuretic peptide testing and the accu- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Abhishek Kumar, Prasanna Sattigeri, and P Thomas Fletcher. Improved semi-supervised learning with GANs using manifold invariances. May 2017. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-12-26 12:24:50
|
ICLR.cc/2018/Conference
|
B159kp1Xz
|
Bys4hLsNM
|
[{'section': '3 RESULTS', 'after_section': None, 'context_after': 'number of steps (in our case 500) have been completed. In the case of converting digit 9s to 4s - we were able to generate 1000 visual rationales in 1 minute and 58 seconds. We compare this with the occlusion sensitivity and saliency map method demonstrated in Fig. 9. Lastly, we contaminate our heart failure classifier as described in the methods section and compare visual rationales generated by the contaminated classifier with those generated previously. Fig 10. ', 'paragraph_idx': 34, 'before_section': None, 'context_before': 'Similarly, the time taken to generate a visual rationale depends on the confidence of the classifier in its prediction, as the algorithm runs until the input has been altered sufficiently or a maximum ', 'modified_lines': 'Figure 8: From left to right: original images with visual rationale overlaid, transformed digits 8 Under review as a conference paper at ICLR 2018 Figure 9: From left to right: visual rationale generated by our method, saliency map, occlusion sensitivity Figure 10: Left: Rationales from contaminated classifier. Right: rationales from normally trained classifier ', 'original_lines': ' 6 Under review as a conference paper at ICLR 2018 Figure 5: Top left: original image. Top right: reconstructed image. Bottom left: image visualized without heart failure. Bottom right: superimposed visual rationale on original image ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'training. By training upon the loss between the real input and generated output images we overcome this. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'but rather the ability to recreate images from our dataset. It is suggested in previous work in Kumar et al. (2017) that directly training a encoder to reverse the mapping learnt by the generator in a decoupled fashion does not yield good results as the encoder never sees any real images during ', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 Figure 8: From left to right: original images with visual rationale overlaid, transformed digits Correctly Trained A1 A2 B1 B2 46 35 Cardiomegaly 17 13 Effusion 1 1 Pacemaker 9 9 Airspace opacity 34 22 1 6 44 14 1 3 Contaminated Cardiomegaly Effusion Pacemaker Airspace opacity A1 A2 B1 B2 22 22 8 3 3 3 4 4 18 6 3 3 19 6 4 3 Table 2: Expert evaluation of visual rationales from contaminated and normal classifiers ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'examples of the rationales this method produces and inspect these manually to check if these are similar to our understanding of how to interpret these images. The ability to autoencode inputs is essential to our rationale generation although we have not explored in-depth in this paper the effect ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Our primary contribution in this paper however is not the inversion of the generator but rather the ability to generate useful visual rationales. For each prediction of the model we generate a corre- sponding visual rationale with a target class different to the original prediction. 
We display some ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 Figure 9: From left to right: visual rationale generated by our method, saliency map, occlusion sensitivity Figure 10: Left: Rationales from contaminated classifier. Right: rationales from normally trained classifier ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-16 11:04:18
|
ICLR.cc/2018/Conference
|
Bys4hLsNM
|
HJh409uTW
|
[]
|
2018-01-25 15:42:52
|
ICLR.cc/2018/Conference
|
BJW5sOJRW
|
B1Mcj_kAb
|
[]
|
2017-10-26 15:13:13
|
ICLR.cc/2018/Conference
|
B1Mcj_kAb
|
Hkm9j_k0Z
|
[]
|
2017-10-26 15:13:15
|
ICLR.cc/2018/Conference
|
Hkm9j_k0Z
|
S16HndkRW
|
[]
|
2017-10-26 15:16:20
|
ICLR.cc/2018/Conference
|
S16HndkRW
|
BJ4E1K1Ab
|
[]
|
2017-10-26 15:28:43
|
ICLR.cc/2018/Conference
|
BJ4E1K1Ab
|
rkWTjDlRb
|
[{'section': '4.2 DEDUPLICATION', 'after_section': '4.2 DEDUPLICATION', 'context_after': 'is then the count. Notice how when through ', 'paragraph_idx': 28, 'before_section': None, 'context_before': '| E | ', 'modified_lines': '| ', 'original_lines': '', 'after_paragraph_idx': 28, 'before_paragraph_idx': None}, {'section': '4.2 DEDUPLICATION', 'after_section': None, 'context_after': '˜A(cid:48) ', 'paragraph_idx': 30, 'before_section': None, 'context_before': '= Figure 2: Removal of intra-object edges by masking the edges of the attention matrix A with the ', 'modified_lines': 'distance matrix D. The black vertices now form a graph without self-loops. The self-loops need to be added back in later. ', 'original_lines': 'distance matrix D. All black vertices now form a graph without self-loops. The self-loops need to be added back in later. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 (3)', 'after_section': '4 (3)', 'context_after': 'Keep in mind that ˜A has no self-loops nor edges between proposals of the same object. As a consequence, two nonzero rows in ˜A are the same if and only if the proposals are the same. If the two rows differ in at least one entry, then one proposal overlaps a proposal that the other proposal ', 'paragraph_idx': 39, 'before_section': '4 (3)', 'context_before': 'averages over the proposals within each underlying object because we only use the sum over the edge weights to compute the count at the end. Conceptually, this reduces multiple proposals of an object down to one as desired. Since we do not know how many proposals belong to an object, we have to ', 'modified_lines': 'estimate this. We do this by using the fact that proposals of the same object are similar. ', 'original_lines': 'estimate this. We do this by using the fact that proposals of the same object will be similar. ', 'after_paragraph_idx': 39, 'before_paragraph_idx': 39}, {'section': '4 (3)', 'after_section': None, 'context_after': 'C = ˜A ', 'paragraph_idx': 42, 'before_section': None, 'context_before': 'operations to compute the similarity of any pair of rows. Since these scaling factors apply to each vertex, we have to expand s into a matrix using the outer ', 'modified_lines': 'product in order to scale both incoming and outgoing edges of each vertex. We can also add self-loops back in, which need to be scaled by s as well. Then, the count matrix C is ', 'original_lines': 'product to scale both incoming and outgoing edges of each vertex. We can also add self-loops back in, which also need to be scaled by s. Then, the count matrix C is ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': 'In summary, we only used diffentiable operations to deduplicate object proposals and obtain a feature vector that represents the predicted count. This allows easy integration into any model with soft 5 EXPERIMENTS ', 'paragraph_idx': 50, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'Figure 4: Accuracies on the toy task as side length l and noise q are varied in 0.01 step sizes. attention, enabling a model to count from an attention map. ', 'original_lines': 'Figure 4: Accuracies on the toy task as noise q and side length l are varied in 0.01 step sizes. attention, enabling the model to count from an attention map. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '8 ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'We also evaluate our models on the validation set of VQA v2, shown in Table 2. This allows us to consider only the counting questions within number questions, since number questions include ', 'modified_lines': 'questions such as ”what time is it?” as well. We treat any question starting with the words ”how many” as a counting question. As we expect, the benefit of using the counting module on the counting question subset is higher than on number questions in general. ', 'original_lines': 'questions such as ”what time is it?”. We treat any question starting with the words ”how many” as a counting question. As we expect, the benefit of using the counting module on the counting question subset is higher than on number questions in general. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 COUNTING COMPONENT', 'after_section': None, 'context_after': 'greatly reduced compared to their respective VQA accuracy. More importantly, the absolute accuracy improvement of the counting module is still fully present with the more challenging metric, which is further evidence that the component can properly count rather than simply fitting better to dataset ', 'paragraph_idx': 18, 'before_section': None, 'context_before': 'Additionally, we can evaluate the accuracy over balanced pairs as proposed by Teney et al. (2017): the ratio of balanced pairs on which the VQA accuracy for both questions is 1.0. This is a much more difficult metric, since it requires the model to find the subtle details between images instead of ', 'modified_lines': 'being able to rely on question biases in the dataset. First, notice how all binary pair accuracies are ', 'original_lines': 'being able to rely on dataset biases in the questions. First, notice how all binary pair accuracies are ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': '6 CONCLUSION ', 'paragraph_idx': 48, 'before_section': None, 'context_before': 'characteristics of them are shared with high-noise parametrizations of the toy dataset. This suggests that the current attention mechanisms and object proposal network are still very inaccurate, which explains the perhaps small-seeming increase in counting performance. This provides further evidence ', 'modified_lines': 'that the balanced pair accuracy is maybe a more reflective measure of how well current VQA models perform than the overall VQA accuracies of over 70% of the current top models. ', 'original_lines': 'that the balanced pair accuracy is perhaps a more reflective measure of how well current VQA models perform than the overall accuracies of over 70% of the current top models. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 08:18:00
|
ICLR.cc/2018/Conference
|
rkWTjDlRb
|
BJYvewWMM
|
[{'section': '2 RELATED WORK', 'after_section': None, 'context_after': '3 PROBLEMS WITH SOFT ATTENTION ', 'paragraph_idx': 10, 'before_section': '2 RELATED WORK', 'context_before': '∼ ', 'modified_lines': 'More traditional approaches based on Lempitsky & Zisserman (2010) learn to produce a target density map, from which a count is computed by integrating over it. In this setting, Cohen et al. (2017) make use of overlaps of convolutional receptive fields to improve counting performance. Chattopadhyay et al. (2017) use an approach that divides the image into smaller non-overlapping chunks, each of which is counted individually and combined together at the end. In both of these contexts, the convolutional receptive fields or chunks can be seen as sets of bounding boxes with a fixed structure in their positioning. Note that while Chattopadhyay et al. (2017) evaluate their models on a small subset of counting questions in VQA, major differences in training setup make their results not comparable to our work. ', 'original_lines': 'More traditional approaches based on Lempitsky & Zisserman (2010) learn to produce a target density map, from which a count is computed by integrating over it. In this setting, Cohen et al. (2017) make use of overlaps of convolutional receptive fields to improve counting performance. Chattopadhyay et al. (2017) use an approach that divides the image into smaller non-overlapping chunks, each of which is counted individually and combined together at the end. In both of these contexts, the convolutional receptive fields or chunks can be seen as sets of bounding boxes with a fixed structure in their positioning. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 10}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': '2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the feature vector obtained after the weighted sum is exactly the same between the two images and we have lost all information about a possible count from the attention map. Any method that normalizes the weights to sum to 1 suffers from this issue. ', 'modified_lines': '', 'original_lines': ' Multiple glimpses (Larochelle & Hinton, 2010) – sets of attention weights that the attention mecha- nism outputs – or several steps of attention (Yang et al., 2016; Lu et al., 2016) do not circumvent this problem. Each glimpse or step can not separate out an object each, since the attention weight ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': '3 PROBLEMS WITH SOFT ATTENTION', 'after_section': '3 PROBLEMS WITH SOFT ATTENTION', 'context_after': 'given to one feature vector does not depend on the other feature vectors to be attended over. Hard attention (Ba et al., 2015; Mnih et al., 2014) and structured attention (Kim et al., 2017) may be possible solutions to this, though no significant improvement in counting ability has been found for ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'inter-object duplicate edges. In graph form, the object groups, coloring of edges, and shading of vertices serve illustration purposes only; the model does not have these access to these directly. ', 'modified_lines': 'Multiple glimpses (Larochelle & Hinton, 2010) – sets of attention weights that the attention mecha- nism outputs – or several steps of attention (Yang et al., 2016; Lu et al., 2016) do not circumvent this problem. 
Each glimpse or step can not separate out an object each, since the attention weight ', 'original_lines': '', 'after_paragraph_idx': 15, 'before_paragraph_idx': None}, {'section': '3 PROBLEMS WITH SOFT ATTENTION', 'after_section': None, 'context_after': '4 COUNTING COMPONENT ', 'paragraph_idx': 16, 'before_section': '3 PROBLEMS WITH SOFT ATTENTION', 'context_before': 'Without normalization of weights to sum to one, the scale of the output features depends on the number of objects detected. In an image with 10 cats, the output feature vector is scaled up by 10. Since deep neural networks are typically very scale-sensitive – the scale of weight initializations ', 'modified_lines': 'and activations is generally considered quite important (Mishkin & Matas, 2016) – and the classifier would have to learn that joint scaling of all features is somehow related to count, this approach is not reasonable for counting objects. This is evidenced in Teney et al. (2017) where they provide evidence that sigmoid normalization not only degrades accuracy on non-number questions slightly, but also does not help with counting. ', 'original_lines': 'and activations is generally considered quite important (Mishkin & Matas, 2016) – this approach is not scalable for counting an arbitrary number of objects. Teney et al. (2017) provide evidence that sigmoid normalization not only degrades accuracy on non-number questions slightly, but also does not help with counting. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 16}, {'section': '4 (2)', 'after_section': None, 'context_after': '˜A no longer has self-loops, so we need to add them back in at a later point to still satisfy 2. ', 'paragraph_idx': 31, 'before_section': None, 'context_before': 'vertex is computed by counting how many vertices have outgoing edges to the same set of vertices; all edges of the two proposals on the right are scaled by 0.5. This can be seen as averaging proposals within each object and is equivalent to removing duplicate proposals altogether under a sum. ', 'modified_lines': ' D can also be interpreted as an adjacency matrix. It represents a graph that has edges everywhere except when the two bounding boxes that an edge connects would overlap. Intra-object edges are removed by elementwise multiplying ( matrix (Figure 2). (cid:12) ) the distance matrix with the attention ˜A = f1(A) f2(D) (cid:12) (3) ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Now that we can check how similar two proposals are, we count the number of times any row is the same as any other row and compute a scaling factor si for each vertex i. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'aj − ', 'modified_lines': '', 'original_lines': ' 5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '− pD = ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.5 | ', 'modified_lines': '', 'original_lines': 'pa = (cid:88) (9) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 (2)', 'after_section': '4 (2)', 'context_after': '(10) (11) In summary, we only used diffentiable operations to deduplicate object proposals and obtain a feature vector that represents the predicted count. 
This allows easy integration into any model with soft ', 'paragraph_idx': 43, 'before_section': '4 (2)', 'context_before': '· ', 'modified_lines': '(9) ', 'original_lines': '6 Under review as a conference paper at ICLR 2018 Figure 4: Accuracies on the toy task as side length l and noise q are varied in 0.01 step sizes. ', 'after_paragraph_idx': 43, 'before_paragraph_idx': 43}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '(0, 1] are placed in a square image with unit side length. The x and y coordinates of their top left corners are uniformly drawn from U (0, 1 extend beyond the image border. l is used to control the overlapping of bounding boxes: a larger l leads to the fixed number of objects to be more tightly packed, increasing the chance of overlaps. ˆc number of these boxes are randomly chosen to be true bounding boxes. The score of a bounding box ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'The classification task is to predict an integer count ˆc of true objects, uniformly drawn from 0 to 10 inclusive, from a set of bounding boxes and the associated attention weights. 10 square bounding ', 'modified_lines': 'boxes with side length l l) so that the boxes do not ', 'original_lines': 'boxes with side length l l) so that the boxes do not ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'over requiring a high pairwise distance when l is low – when partial overlaps are most likely spurious – and considering small distances enough for proposals to be considered different when l is high. At the highest values for l, there is little signal in the overlaps left since everything overlaps with ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'When increasing the side length, the height of the “step” in f1 decreases to compensate for the generally greater degree of overlapping bounding boxes. A similar effect is seen with f2: it varies ', 'modified_lines': '', 'original_lines': ' 7 0.000.250.500.751.00l0.000.250.500.751.00q=0.00.000.250.500.751.00l0.000.250.500.751.00q=0.50.000.250.500.751.00q0.000.250.500.751.00l=10−60.000.250.500.751.00q0.000.250.500.751.00l=0.5CountingmoduleBaseline Under review as a conference paper at ICLR 2018 Figure 5: Shapes of trained activation functions f1 (attention weights) and f2 (bounding box distances) for varying bounding box side lengths (left) or the noise (right) in the dataset, varied in 0.01 step sizes. Best viewed in color. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '8 00.250.50.751x0.000.250.500.751.00f(x)f1,q=0.500.250.50.751xf2,q=0.50.000.250.500.751.00l00.250.50.751x0.000.250.500.751.00f(x)f1,l=0.500.250.50.751xf2,l=0.50.000.250.500.751.00q ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'component is linearly projected into the same space as the hidden layer of the classifier, followed by ReLU activation, batch normalization, and addition with the features in the hidden layer. ', 'modified_lines': '', 'original_lines': '5.2.1 RESULTS Table 1 shows the results on the official VQA v2 leaderboard. We find that the baseline with our component has a significantly higher accuracy on number questions without compromising accuracy on other categories compared to the baseline result. On the number category, our single-model even outperforms the ensembles of state-of-the-art methods. 
We expect further improvements in number accuracy when incorporating these techniques to improve the quality of attention weights. We also evaluate our models on the validation set of VQA v2, shown in Table 2. This allows us to consider only the counting questions within number questions, since number questions include questions such as ”what time is it?” as well. We treat any question starting with the words ”how many” as a counting question. As we expect, the benefit of using the counting module on the counting question subset is higher than on number questions in general. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '5 EXPERIMENTS', 'after_section': '5 EXPERIMENTS', 'context_after': 'Zhou et al. (2017) (Ens.) Baseline + counting module ', 'paragraph_idx': 56, 'before_section': None, 'context_before': 'Model Teney et al. (2017) Teney et al. (2017) (Ens.) ', 'modified_lines': 'Zhou et al. (2017) ', 'original_lines': '', 'after_paragraph_idx': 56, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'REFERENCES Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'models do not have. To make progress on this dataset, we advocate focusing on understanding of what the current shortcomings of models are and finding ways to mitigate them. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'boltzmann machine. In NIPS, 2010. Victor Lempitsky and Andrew Zisserman. Learning to count objects in images. In NIPS, 2010. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-12-15 14:46:56
|
ICLR.cc/2018/Conference
|
BJYvewWMM
|
ryiyj_yRW
|
[]
|
2018-01-25 15:42:33
|
ICLR.cc/2018/Conference
|
ryiyj_yRW
|
rkF3f5gLM
|
[{'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'differentiable variants such as by Azadi et al. (2017), Hosang et al. (2017), and Henderson & Ferrari (2017) exist. The main difference is that, since we are interested in counting, our component does not Methods such as in Santoro et al. (2017) and Perez et al. (2017) can count on the synthetic CLEVR VQA dataset (Johnson et al., 2017) successfully without bounding boxes and supervision of where the ', 'paragraph_idx': 7, 'before_section': '2 RELATED WORK', 'context_before': 'Usually, greedy non-maximum suppression (NMS) is used to eliminate duplicate bounding boxes. The main problem with using it as part of a model is that its gradient is piecewise constant. Various ', 'modified_lines': ' 1Our implementation is available at https://github.com/Cyanogenoid/vqa-counting. 1 Published as a conference paper at ICLR 2018 need to make discrete decisions about which bounding boxes to keep; it outputs counting features, not a smaller set of bounding boxes. Our component is also easily integrated into standard VQA models that utilize soft attention without any need for other network architecture changes and can be used without using true bounding boxes for supervision. On the VQA v2 dataset (Goyal et al., 2017) that we apply our method on, only few advances on counting questions have been made. The main improvement in accuracy is due to the use of object proposals in the visual processing pipeline, proposed by Anderson et al. (2017). Their object proposal network is trained with classes in singular and plural forms, for example “tree” versus “trees”, which only allows primitive counting information to be present in the object features after region-of-interest pooling. Our approach differs in the way that instead of relying on counting features being present in the input, we create counting features using information present in the attention map over object proposals. This has the benefit of being able to count anything that the attention mechanism can discriminate instead of only objects that belong to the predetermined set of classes that had plural forms. Using these object proposals, Trott et al. (2018) train a sequential counting mechanism with a reinforcement learning loss on the counting question subsets of VQA v2 and Visual Genome. They achieve a small increase in accuracy and an interpretable set of objects that their model counted, but it is unclear whether their method can be integrated into traditional VQA models due to their loss not applying to non-counting questions. Since they evaluate on their own dataset, their results can not be easily compared to traditional VQA models capable of answering non-counting questions. ', 'original_lines': ' 1To maintain anonymity in the double-blind review format, we will release our implementation for reprodu- cability only after the review period has ended. 1 Under review as a conference paper at ICLR 2018 need to make binary decisions about which bounding boxes to keep; it outputs counting features, not a smaller set of bounding boxes. Our component is also easily integrated into standard VQA models that utilize soft attention without any need for other network architecture changes and can be used without using true bounding boxes for supervision. On the VQA v2 dataset (Goyal et al., 2017) that we apply our method on, only few advances on counting questions have been made, none explicitly targeting counting questions. 
The main improvement in accuracy is due to the use of object proposals in the visual processing pipeline, proposed by Anderson et al. (2017). Their object proposal network is trained with classes in singular and plural forms, for example “tree” versus “trees”, which only allows primitive counting information to be present in the object features after region-of-interest pooling. Our approach differs in the way that instead of relying on counting features being present in the input, we create counting features using information present in the attention map over object proposals. This has the benefit of being able to count anything that the attention mechanism can discriminate instead of only objects that belong to the predetermined set of classes that had plural forms. ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 7}, {'section': '3 PROBLEMS WITH SOFT ATTENTION', 'after_section': '3 PROBLEMS WITH SOFT ATTENTION', 'context_after': 'Multiple glimpses (Larochelle & Hinton, 2010) – sets of attention weights that the attention mecha- nism outputs – or several steps of attention (Yang et al., 2016; Lu et al., 2016) do not circumvent ', 'paragraph_idx': 15, 'before_section': None, 'context_before': '0). Red edges mark intra-object edges between duplicate proposals and blue edges mark the main inter-object duplicate edges. In graph form, the object groups, coloring of edges, and shading of vertices serve illustration purposes only; the model does not have these access to these directly. ', 'modified_lines': ' we are effectively averaging the two cats in the second image back to a single cat. As a consequence, the feature vector obtained after the weighted sum is exactly the same between the two images and we have lost all information about a possible count from the attention map. Any method that normalizes the weights to sum to 1 suffers from this issue. ', 'original_lines': '', 'after_paragraph_idx': 16, 'before_paragraph_idx': None}, {'section': '4.2 DEDUPLICATION', 'after_section': '4.2 DEDUPLICATION', 'context_after': 'D can also be interpreted as an adjacency matrix. It represents a graph that has edges everywhere except when the two bounding boxes that an edge connects would overlap. Intra-object edges are removed by elementwise multiplying ( matrix (Figure 2). ', 'paragraph_idx': 31, 'before_section': None, 'context_before': 'all edges of the two proposals on the right are scaled by 0.5. This can be seen as averaging proposals within each object and is equivalent to removing duplicate proposals altogether under a sum. ', 'modified_lines': 'To compare two bounding boxes, we use the usual intersection-over-union (IoU) metric. We define the distance matrix D Rn×n to be ∈ − Dij = 1 IoU(bi, bj) (2) ', 'original_lines': '', 'after_paragraph_idx': 31, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '| − ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'aj ) ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where X = f4(A) compares the rows of proposals i and j. 
Using this term instead of f4(1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '− | (4) ', 'modified_lines': '', 'original_lines': ' 5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': ') term handles the edge case when there is only one proposal to | count. Since X does not have self-loops, X contains only zeros in that case, which causes the row corresponding to ai = 1 to be incorrectly similar to the rows where aj(cid:54)=i = 0. By comparing the ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'ai ', 'modified_lines': 'Note that the f3(1 ', 'original_lines': 'Note that the f3(1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 TOY TASK', 'after_section': '5.1 TOY TASK', 'context_after': 'boxes with side length l l) so that the boxes do not extend beyond the image border. l is used to control the overlapping of bounding boxes: a larger l leads to the fixed number of objects to be more tightly packed, increasing the chance of overlaps. ˆc number of these boxes are randomly chosen to be true bounding boxes. The score of a bounding box ', 'paragraph_idx': 47, 'before_section': '5.1 TOY TASK', 'context_before': 'The classification task is to predict an integer count ˆc of true objects, uniformly drawn from 0 to 10 inclusive, from a set of bounding boxes and the associated attention weights. 10 square bounding ', 'modified_lines': '(0, 1] are placed in a square image with unit side length. The x and y coordinates of their top left corners are uniformly drawn from U (0, 1 ', 'original_lines': '(0, 1] are placed in a square image with unit side length. The x and y coordinates of their top left corners are uniformly drawn from U (0, 1 ', 'after_paragraph_idx': 47, 'before_paragraph_idx': 47}, {'section': 'Abstract', 'after_section': None, 'context_after': 'significantly so. Particularly when the noise is low, the component can deal with high values for l very successfully, showing that it accomplishes the goal of increased robustness to overlapping proposals. The component also handles moderate noise levels decently as long as the overlaps are ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'The results of varying l while keeping q fixed at various values and vice versa are shown in Figure 4. Regardless of l and q, the counting component performs better than the baseline in most cases, often ', 'modified_lines': '', 'original_lines': ' 7 0.000.250.500.751.00l0.000.250.500.751.00q=0.00.000.250.500.751.00l0.000.250.500.751.00q=0.50.000.250.500.751.00q0.000.250.500.751.00l=10−60.000.250.500.751.00q0.000.250.500.751.00l=0.5CountingmoduleBaseline Under review as a conference paper at ICLR 2018 Figure 5: Shapes of trained activation functions f1 (attention weights) and f2 (bounding box distances) for varying bounding box side lengths (left) or the noise (right) in the dataset, varied in 0.01 step sizes. Best viewed in color. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1', 'after_section': None, 'context_after': '5.2.1 RESULTS ', 'paragraph_idx': 23, 'before_section': None, 'context_before': '± ± ± ', 'modified_lines': ' applying a logistic function. Since object proposal features from Anderson et al. (2017) vary from 10 to 100 per image, a natural choice for the number of top-n proposals to use is 10. 
The output of the component is linearly projected into the same space as the hidden layer of the classifier, followed by ReLU activation, batch normalization, and addition with the features in the hidden layer. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-01 13:14:25
|
ICLR.cc/2018/Conference
|
rkF3f5gLM
|
rybpaWF8z
|
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': '2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'and two cats in the second image, producing the same feature vector for all three detections. The attention mechanism then assigns all three instances of the same cat the same weight. ', 'modified_lines': '', 'original_lines': 'The usual normalization used for the attention weights is the softmax function, which normalizes the weights to sum to 1. Herein lies the problem: the cat in the first image receives a normalized weight of 1, but the two cats in the second image now each receive a weight of 0.5. After the weighted sum, ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': '3 PROBLEMS WITH SOFT ATTENTION', 'after_section': '3 PROBLEMS WITH SOFT ATTENTION', 'context_after': 'we are effectively averaging the two cats in the second image back to a single cat. As a consequence, the feature vector obtained after the weighted sum is exactly the same between the two images and we have lost all information about a possible count from the attention map. Any method that normalizes the weights to sum to 1 suffers from this issue. Multiple glimpses (Larochelle & Hinton, 2010) – sets of attention weights that the attention mecha- Without normalization of weights to sum to one, the scale of the output features depends on the number of objects detected. In an image with 10 cats, the output feature vector is scaled up by 10. ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'inter-object duplicate edges. In graph form, the object groups, coloring of edges, and shading of vertices serve illustration purposes only; the model does not have these access to these directly. ', 'modified_lines': 'The usual normalization used for the attention weights is the softmax function, which normalizes the weights to sum to 1. Herein lies the problem: the cat in the first image receives a normalized weight of 1, but the two cats in the second image now each receive a weight of 0.5. After the weighted sum, nism outputs – or several steps of attention (Yang et al., 2016; Lu et al., 2016) do not circumvent this problem. Each glimpse or step can not separate out an object each, since the attention weight given to one feature vector does not depend on the other feature vectors to be attended over. Hard attention (Ba et al., 2015; Mnih et al., 2014) and structured attention (Kim et al., 2017) may be possible solutions to this, though no significant improvement in counting ability has been found for the latter so far (Zhu et al., 2017). Ren & Zemel (2017) circumvent the problem by limiting attention to only work within one bounding box at a time, remotely similar to our approach of using object proposal features. ', 'original_lines': 'nism outputs – or several steps of attention (Yang et al., 2016; Lu et al., 2016) do not circumvent this problem. Each glimpse or step can not separate out an object each, since the attention weight given to one feature vector does not depend on the other feature vectors to be attended over. Hard attention (Ba et al., 2015; Mnih et al., 2014) and structured attention (Kim et al., 2017) may be possible solutions to this, though no significant improvement in counting ability has been found for the latter so far (Zhu et al., 2017). Ren & Zemel (2017) circumvent the problem by limiting attention to only work within one generated bounding box at a time, remotely similar to our approach of using object proposal features. 
', 'after_paragraph_idx': 16, 'before_paragraph_idx': None}, {'section': '4.1', 'after_section': '4.1', 'context_after': 'In the extreme cases that we explicitly handle, we assume that the attention mechanism assigns a value of 1 to ai whenever the ith proposal contains a relevant object and a value of 0 whenever it ', 'paragraph_idx': 25, 'before_section': '4.1', 'context_before': 'Given a set of features from object proposals, an attention mechanism produces a weight for each proposal based on the question. The counting component takes as input the n largest attention weights a = [a1, . . . , an]T and their corresponding bounding boxes b = [b1, . . . , bn]T. We assume that the ', 'modified_lines': 'weights lie in the interval [0, 1], which can easily be achieved by applying a logistic function. ', 'original_lines': 'weights lie in the interval [0, 1], which can easily be achieved by applying a logistic function to them. ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 25}, {'section': '5.2 VQA', 'after_section': '5.2 VQA', 'context_after': 'Additionally, we can evaluate the accuracy over balanced pairs as proposed by Teney et al. (2017): the ratio of balanced pairs on which the VQA accuracy for both questions is 1.0. This is a much ', 'paragraph_idx': 64, 'before_section': '5.2 VQA', 'context_before': 'many” as a counting question. As we expect, the benefit of using the counting module on the counting question subset is higher than on number questions in general. Additionally, we try an approach where we simply replace the counting module with NMS, using the average of the attention glimpses ', 'modified_lines': 'as scoring, and one-hot encoding the number of proposals left. The NMS-based approach, using an IoU threshold of 0.5 and no score thresholding based on validation set performance, does not improve on the baseline, which suggests that the piecewise gradient of NMS is a major problem for learning to count in VQA and that conversely, there is a substantial benefit to being able to differentiate through the counting module. ', 'original_lines': 'as scoring, and one-hot encoding the number of proposals left. Also, the NMS-based approach, using an IoU threshold of 0.5 and no score thresholding based on validation set performance, does not improve on the baseline, which suggests that the piecewise gradient of NMS is a major problem for learning to count in VQA and that conversely, there is a substantial benefit of being able to differentiate through the counting module. ', 'after_paragraph_idx': 64, 'before_paragraph_idx': 64}]
|
2018-02-07 23:37:29
|
ICLR.cc/2018/Conference
|
rybpaWF8z
|
SktCQ7ywM
|
[]
|
2018-02-12 14:25:21
|
ICLR.cc/2025/Conference
|
pbTVNlX8Ig
|
yfHQOp5zWc
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 RELATED WORK ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 tograd system in PyTorch, specifically tailored for our experimental setup, which is available at ', 'modified_lines': 'https://github.com/stephane-rivaud/PETRA. ', 'original_lines': 'https://github.com/streethagore/PETRA. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2025-03-14 15:35:37
|
ICLR.cc/2017/conference
|
ryuxYmvel
|
SJdZSdr7e
|
[{'context_before': '2010; Blanchette et al., 2015) and billions of individual proof steps. While the general direction of the proofs is specified by humans (by providing the goal to prove, specifying intermediate steps, or applying certain automated tactics), the majority of such proof steps are actually found by automated ', 'context_after': 'learning techniques so far. At the same time, fast progress has been unfolding in machine learning applied to tasks that involve ', 'original_lines': 'reasoning-based proof search (Kaliszyk & Urban, 2015), with very little application of machine ', 'modified_lines': 'reasoning-based proof search (Kaliszyk & Urban, 2015b), with very little application of machine ', 'section': '1 INTRODUCTION', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'before_paragraph_idx': 4, 'after_section': '1 INTRODUCTION', 'after_paragraph_idx': 4}, {'context_before': 'eliminated, bound variables are represented by their de Bruijn indices and free variables are renamed canonically. Since the Hindley-Milner type inference mechanisms will be sufficient to reconstruct the most-general types of the expressions well enough for automated-reasoning techniques Kaliszyk ', 'context_after': 'http://cl-informatik.uibk.ac.at/cek/holstep/ 3 MACHINE LEARNING TASKS ', 'original_lines': 'et al. (2015) we erase all type information. Table 1 presents some dataset statistics. The dataset together with the description of the used format is available from: ', 'modified_lines': 'et al. (2015) we erase all type information. Table 1 presents some dataset statistics. The dataset, the description of the used format, the scripts used to generate it and baseline models code are available: ', 'section': '2 DATASET EXTRACTION', 'paragraph_idx': 18, 'before_section': '2 DATASET EXTRACTION', 'before_paragraph_idx': 18, 'after_section': '2 DATASET EXTRACTION', 'after_paragraph_idx': 18}, {'context_before': 'Figure 1: Unconditioned classification model architectures. model, which makes running these experiments accessible to most people (they could even be run ', 'context_after': '4.1 UNCONDITIONED CLASSIFICATION MODELS ', 'original_lines': 'on a laptop CPU). We are releasing all of our benchmark code as open-source software1 so as to allow others to reproduce our results and improve upon our models. ', 'modified_lines': 'on a laptop CPU). We are releasing all of our benchmark code as open-source software so as to allow others to reproduce our results and improve upon our models. ', 'section': '4 BASELINE MODELS', 'paragraph_idx': 25, 'before_section': None, 'before_paragraph_idx': None, 'after_section': None, 'after_paragraph_idx': None}, {'context_before': 'with shared weights), with one branch processing the proof step statement being considered, and the other branch processing the conjecture. Each branch outputs an embedding; these two embeddings (step embedding and conjecture embedding) are then concatenated and the classified by a fully- ', 'context_after': '5 ', 'original_lines': 'connected network. See figure 2 for a layer-by-layer description of these models. 1A link to the code is upcoming. It is going through a separate Google release process. ', 'modified_lines': 'connected network. See figure 2 for a layer-by-layer description of these models. 
', 'section': '4.2 CONDITIONED CLASSIFICATION MODELS', 'paragraph_idx': 29, 'before_section': '4.2 CONDITIONED CLASSIFICATION MODELS', 'before_paragraph_idx': 29, 'after_section': None, 'after_paragraph_idx': None}, {'context_before': '6 CONCLUSIONS Our baseline deep learning models, albeit fairly weak, are still able to predict statement usefulness ', 'context_after': '7 ', 'original_lines': 'with a remarkably high accuracy, making them valuable for a number of practical theorem proving applications. This includes making tableaux-based (Paulson, 1999) and superposition-based (Hurd, 2003) internal ITP proof search significantly more efficient in turn making formalization easier. However, our models do not appear to be able to leverage order in the input sequences, nor do they ', 'modified_lines': 'with a remarkably high accuracy. Such methods already help first-order automated provers (Kaliszyk & Urban, 2015a) and as the branching factor is higher in HOL the predictions are valuable for a number of practical proving applications. This includes making tableaux-based (Paulson, 1999) and superposition-based (Hurd, 2003) internal ITP proof search significantly more efficient in turn ', 'section': '6 CONCLUSIONS', 'paragraph_idx': 41, 'before_section': None, 'before_paragraph_idx': None, 'after_section': None, 'after_paragraph_idx': None}, {'context_before': 'Figure 6: Training profile of the three condi- tioned baseline models with token input. ', 'context_after': 'This shows the need to focus future efforts on different models that can do reasoning, or alternatively, on systems that blend explicit reasoning with deep learning-based feature learning. ', 'original_lines': 'appear to be able to leverage conditioning on the conjectures. This is due to the fact that these models are not doing any form of logical reasoning on their input statements; rather they are doing simple pattern matching at the level of n-grams of characters or tokens. ', 'modified_lines': 'making formalization easier. However, our models do not appear to be able to leverage order in the input sequences, nor do they appear to be able to leverage conditioning on the conjectures. This is due to the fact that these models are not doing any form of logical reasoning on their input statements; rather they are doing simple pattern matching at the level of n-grams of characters or tokens. ', 'section': '6 CONCLUSIONS', 'paragraph_idx': 41, 'before_section': None, 'before_paragraph_idx': None, 'after_section': '6 CONCLUSIONS', 'after_paragraph_idx': 42}, {'context_before': 'The dataset focuses on one interactive theorem prover. It would be interesting if the proposed tech- niques generalize, primarily across ITPs that use the same foundational logic, for example using ', 'context_after': 'Finally, two of the proposed task for the dataset have been premise selection and intermediate sen- tence generation. It would be interesting to define more ATP-based ways to evaluate the selected ', 'original_lines': 'OpenTheory (Hurd, 2011), and secondarily across fundamentally different ITPs or even ATPs. Most of the unused steps originate from trying to fulfill the conditions for rewriting and from calls to in- tuitionistic tableaux. A further ITP analysis of these steps, as well as translating to HOL the steps performed by model elimination in FOL could give further insights and interesting data for training. ', 'modified_lines': 'OpenTheory (Hurd, 2011), and secondarily across fundamentally different ITPs or even ATPs. 
A significant part of the unused steps originates from trying to fulfill the conditions for rewriting and from calls to intuitionistic tableaux. The main focus is however on the human found proofs so the trained predictions may to an extent mimic the bias on the usefulness in the human proofs. As ATPs are at the moment very week in comparison with human intuition improving this even for the many proofs humans do not find difficult would be an important gain. ', 'section': '6.1 FUTURE WORK', 'paragraph_idx': 47, 'before_section': '6.1 FUTURE WORK', 'before_paragraph_idx': 47, 'after_section': '6.1 FUTURE WORK', 'after_paragraph_idx': 48}]
|
2016-12-07 05:39:44
|