| venue (stringclasses, 9 values) | original_openreview_id (stringlengths 8-17) | revision_openreview_id (stringlengths 8-11) | content (stringlengths 2-620k) | time (stringdate, 2016-11-04 05:38:56 to 2025-05-23 04:52:50) |
|---|---|---|---|---|
| ICLR.cc/2018/Conference | rkfIuG-CW | SkHjxwK7G |
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Quantization using low-precision numerics (Vanhoucke et al., 2011; Zhou et al., 2016; Lin et al., 1 Under review as a conference paper at ICLR 2018 ternary precision and 4-bit precision models. In the first scheme, a low-precision network and a full-precision network are jointly trained from ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'INTRODUCTION ', 'modified_lines': 'Background: Today’s high performing deep neural networks (DNNs) for computer vision applica- tions comprise of multiple layers and involve numerous parameters. These networks have O(Giga- FLOPS) compute requirements and generate models which are O(Mega-Bytes) in storage (Canziani et al., 2016). Further, the memory and compute requirements during training and inference are quite different (Mishra et al., 2017). Training is performed on big datasets with large batch-sizes where the memory footprint of activations dominates the model memory footprint. On the other hand, the batch-size during inference is typically small and the model’s memory footprint dominates the runtime memory requirements. Because of the complexity in compute, memory and storage requirements, training phase of the networks is performed on CPU and/or GPU clusters in a distributed computing environment. Once trained, a challenging aspect is deployment of the trained models on resource constrained inference systems such as portable devices or sensor networks, and for applications in which real-time predic- tions are required. Performing inference on edge-devices comes with severe constraints on memory, compute and power. Additionally, ensemble based methods, which one can potentially use to get improved accuracy predictions, become prohibitive in resource constrained systems. 2015; Miyashita et al., 2016; Gupta et al., 2015; Zhu et al., 2016; Rastegari et al., 2016; Courbariaux et al., 2015; Umuroglu et al., 2016; Mishra et al., 2017) and model compression (Buciluˇa et al., 2006; Hinton et al., 2015; Romero et al., 2014) have emerged as popular solutions for resource constrained deployment scenarios. With quantization, a low-precision version of the network model is gener- ated and deployed on the device. Operating in lower precision mode reduces compute as well as data movement and storage requirements. However, the majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. With model compression, a smaller low memory footprint network is trained to mimic the behaviour of the original complex network. During this training, a process called, knowledge distillation is used to “transfer knowl- edge” from the complex network to the smaller network. Work by Hinton et al. (2015) shows that the knowledge distillation scheme can yield networks at comparable or slightly better accuracy than the original complex model. However, to the best of our knowledge, all prior works using model compression techniques target compression at full-precision. Our proposal: In this paper, we study the combination of network quantization with model com- pression and show that the accuracies of low-precision networks can be significantly improved by using knowledge distillation techniques. Previous studies on model compression use a large network as the teacher network and a small network as the student network. The small student network learns from the teacher network using the distillation process. 
The network architecture of the student net- work is typically different from that of the teacher network – for e.g. Hinton et al. (2015) investigate a student network that has fewer number of neurons in the hidden layers compared to the teacher network. In our work, the student network has similar topology as that of the teacher network, ex- cept that the student network has low-precision neurons compared to the teacher network which has neurons operating at full-precision. We call our approach Apprentice1 and study three schemes which produce low-precision net- works using knowledge distillation techniques. Each of these three schemes produce state-of-the-art ', 'original_lines': 'Today’s high performing deep neural networks (DNNs) for computer vision applications comprise of multiple layers and involve numerous parameters. These networks have O(Giga-FLOPS) compute requirements and generate models which are O(Mega-Bytes) in storage (Canziani et al., 2016). Fur- ther, the memory and compute requirements during training and inference are quite different (Mishra et al., 2017). Training is often performed on big datasets with large batch-sizes where the memory footprint of activations dominates the model memory footprint. On the other hand, the batch-size during inference is typically small and the model’s memory footprint dominates the runtime memory requirements. Because of the complexity in compute, memory and storage requirements, the training phase of the networks is performed on CPU and/or GPU clusters in the cloud or in a distributed computing environment. Once trained, a challenging aspect is deployment of the trained models on resource constrained inference systems such as portable devices or sensor networks, and for applications in which real-time predictions are required. Performing inference on edge-devices comes with severe constraints on memory, compute and power. Additionally, ensemble based methods, which one can potentially use to get improved accuracy predictions, become prohibitive in resource constrained systems. 2015; Miyashita et al., 2016; Gupta et al., 2015b; Zhu et al., 2016; Rastegari et al., 2016; Cour- bariaux et al., 2015; Umuroglu et al., 2016; Mishra et al., 2017) and model compression (Buciluˇa et al., 2006; Hinton et al., 2015; Romero et al., 2014) have emerged as popular solutions for re- source constrained deployment scenarios. With quantization, a low-precision version of the network model is generated and deployed on the device. Operating in lower precision mode reduces com- pute as well as data movement and storage requirements. However, the majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. With model com- pression, a smaller low memory footprint network is trained to mimic the behaviour of the original complex network. During this training, a process called, knowledge distillation is used to “transfer knowledge” from the complex network to the smaller network. Work by Hinton et al. (2015) shows that the knowledge distillation scheme can yield networks at comparable or slightly better accuracy than the original complex model. However, to the best of our knowledge, all prior works using model compression techniques target compression at full-precision. In this paper, we study the combination of network quantization with model compression and show that the accuracies of low-precision networks can be significantly improved by using knowledge distillation techniques. 
Previous studies on model compression use a large network as the teacher network and a small network as the student network. The small student network learns from the teacher network using the distillation process. The network architecture of the student network is typically different from that of the teacher network – for e.g. Hinton et al. (2015) investigate a student network that has fewer number of neurons in the hidden layers compared to the teacher network. In our work, the student network has similar topology as that of the teacher network, except that the student network has low-precision neurons compared to the teacher network which has neurons operating at full-precision. We call our approach Apprentice1 and study three schemes which produce low-precision networks using knowledge distillation techniques. Each of these three schemes produce state-of-the-art ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 MOTIVATION FOR LOW-PRECISION MODEL PARAMETERS 1Dictionary defines apprentice as a person who is learning a trade from a skilled employer, having agreed to work for a fixed period at low wages. In our work, the apprentice is a low-precision network which is learning ', 'paragraph_idx': 11, 'before_section': '1 INTRODUCTION', 'context_before': 'passes the accuracy of the equivalent low-precision model published to date. One of our schemes also helps a low-precision model converge faster. We envision these accurate low-precision models to simplify the inference deployment process on resource constrained systems and even otherwise ', 'modified_lines': 'on cloud-based deployment systems. Lowering precision of model parameters: Resource constrained inference systems impose signif- icant restrictions on memory, compute and power budget. With regard to storage, model (or weight) parameters and activation maps occupy memory during the inference phase of DNNs. During this phase memory is allocated for input (IFM) and output feature maps (OFM) required by a single layer in the DNN, and these dynamic memory allocations are reused for other layers. The total memory allocation during inference is then the maximum of IFM and maximum of OFM memory required across all the layers plus the sum of all weight tensors (Mishra et al., 2017). When infer- ence phase for DNNs is performed with a small batch size, the memory footprint of the weights exceeds the footprint of the activation maps. This aspect is shown in Figure 1 for 4 different net- works (AlexNet (Krizhevsky et al., 2012), Inception-Resnet-v2 (Szegedy et al., 2016), ResNet-50 ', 'original_lines': 'on cloud based deployment systems. Resource constrained inference systems impose significant restrictions on memory, compute and power budget. With regard to storage, model (or weight) parameters and activation maps occupy memory during the inference phase of DNNs. During this phase memory is allocated for input (IFM) and output feature maps (OFM) required by a single layer in the DNN, and these dynamic memory allocations are reused for other layers. The total memory allocation during inference is then the maximum of IFM and maximum of OFM memory required across all the layers plus the sum of all weight tensors (Mishra et al., 2017). When inference phase for DNNs is performed with a small batch size, the memory footprint of the weights exceeds the footprint of the activation maps. 
This aspect is shown in Figure 1 for 4 different networks (AlexNet (Krizhevsky et al., 2012), ', 'after_paragraph_idx': None, 'before_paragraph_idx': 11}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Figure 1: Memory footprint of activations (ACTs) and weights (W) during inference for mini-batch sizes 1 and 8. 3 RELATED WORK ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'and ResNet-101 (He et al., 2015)) running 224x224 image patches. Thus lowering the precision of the weight tensors helps lower the memory requirements during deployment. Benefit of low-precision compute: Low-precision compute simplifies hardware implementation. For example, the compute unit to perform the convolu- tion operation (multiplication of two operands) in- volves a floating-point multiplier when using full- precision weights and activations. The floating- point multiplier can be replaced with a much sim- pler circuitry (xnor and popcount logic elements) when using binary precision for weights and activa- tions (Courbariaux & Bengio, 2016; Rastegari et al., 2016; Courbariaux et al., 2015). Similarly, when us- ing ternary precision for weights and full-precision for activations, the multiplier unit can be replaced with a sign comparator unit (Li & Liu, 2016; Zhu et al., 2016). Simpler hardware also helps lower the inference latency and energy budget. Thus, operat- ing in lower precision mode reduces compute as well as data movement and storage requirements. The drawback of low-precision models, however, is degraded accuracy. We discuss later in the paper the network accuracies obtained using methods proposed in literature. These accuracies serve as the starting point and baselines we compare to in our work. ', 'original_lines': 'Inception-Resnet-v22 (Szegedy et al., 2016), ResNet-50 and ResNet-101 (He et al., 2015)) running 224x224 image patches. Thus lowering the precision of the weight tensors helps lower the memory requirements during deployment. Low-precision compute simplifies hardware imple- mentation. For example, the compute unit to per- form the convolution operation (multiplication of two operands) involves a floating-point multiplier when using full-precision weights and activations. The floating-point multiplier can be replaced with a much simpler circuitry (xnor and popcount logic el- ements) when using binary precision for weights and activations (Courbariaux & Bengio, 2016; Rastegari et al., 2016; Courbariaux et al., 2015). Similarly, when using ternary precision for weights and full- precision for activations, the multiplier unit can be replaced with a sign comparator unit (Li & Liu, 2016; Zhu et al., 2016). Simpler hardware also helps lower the inference latency and energy budget. Thus, operating in lower precision mode reduces compute as well as data movement and storage requirements. The drawback of low-precision models, however, is degraded accuracy. We discuss later in the paper the network accuracies obtained using methods proposed in literature. These accuracies serve as the starting point and baselines we compare to in our work. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'training. They show 16-bits to be sufficient for training on CIFAR10 dataset. Work by Seide et al. (2014) quantizes gradients in a distributed computing system. 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'pipeline. While TTQ targets weight quantization, most works targeting activation quantization show that quantizing activations always hurt accuracy. XNOR-NET approach degrades Top-1 accuracy by 12% and DoReFa by 8% when quantizing both weights and activations to 1-bit (for AlexNet ', 'modified_lines': 'on ImageNet). Work by Gupta et al. (2015) advocates for low-precision fixed-point numbers for ', 'original_lines': 'on ImageNet). Work by Gupta et al. (2015a) advocates for low-precision fixed-point numbers for ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RELATED WORK', 'after_section': None, 'context_after': '3 97.7%91.3%79.6%32.8%94.1%66.5%96.5%77.6%2.3%8.7%20.4%67.2%5.9%33.5%3.5%22.4%18181818AlexnetIRv2ResNet-50ResNet-101% Memory footprint% Ws% ACTs Under review as a conference paper at ICLR 2018 then facilitates more information flowing to the model parameters during back-propagation opera- tion. FitNets (Romero et al., 2014) extend this work by using intermediate hidden layer outputs as target values for training a deeper, but thinner, student model. Net2Net (Chen et al., 2015a) also ', 'paragraph_idx': 20, 'before_section': '3 RELATED WORK', 'context_before': 'Hinton et al. (2015) propose a framework to transfer knowledge by introducing the concept of tem- perature. The key idea is to divide the logits by a temperature factor before performing a Softmax ', 'modified_lines': 'function. By using a higher temperature factor the activations of incorrect classes are boosted. This ', 'original_lines': 'function. By using a higher temperature factor the activations of incorrect classes are boosted. This ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': '3 RELATED WORK', 'after_section': '3 RELATED WORK', 'context_after': 'Sparsity and hashing: Few other popular techniques for model compression are pruning (LeCun et al., 1990; Han et al., 2015a; Wen et al., 2016; Han et al., 2015b), hashing (Weinberger et al., 2009) 4 KNOWLEDGE DISTILLATION ', 'paragraph_idx': 20, 'before_section': '3 RELATED WORK', 'context_before': 'pose an information metric using which a teacher DNN can transfer the distilled knowledge to other student DNNs. In N2N learning work, Ashok et al. (2017) propose a reinforcement learning based approach for compressing a teacher network into an equally capable student network. They achieve ', 'modified_lines': 'a compression factor of 10x for ResNet-34 on CIFAR datasets. and weight sharing (Chen et al., 2015b; Denil et al., 2013). Pruning leads to removing neurons entirely from the final trained model making the model a sparse structure. With hashing and weight sharing schemes a hash function is used to alias several weight parameters into few hash buckets, effectively lowering the parameter memory footprint. To realize benefits of sparsity and hashing schemes during runtime, efficient hardware support is required (e.g. support for irregular memory accesses (Han et al., 2016; Venkatesh et al., 2016; Parashar et al., 2017)). ', 'original_lines': 'a compression factor of 10x for ResNet-34 on Cifar datasets. and weight sharing (Chen et al., 2015b; Denil et al., 2013). Pruning leads to removing neurons en- tirely from the final trained model making the model a sparse structure. With hashing and weight sharing schemes a hash function is used to alias several weight parameters into few hash buckets, ef- fectively lowering the parameter memory footprint. 
To see benefits of sparsity and hashing schemes during runtime, efficient hardware support (e.g. support for irregular memory accesses (Han et al., 2016; Venkatesh et al., 2016; Parashar et al., 2017)) is required. ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 20}, {'section': '4 KNOWLEDGE DISTILLATION', 'after_section': None, 'context_after': 'j . The same image is fed to the student network and it predicts pA = ezA cost function, L, is given as: ', 'paragraph_idx': 24, 'before_section': '4 KNOWLEDGE DISTILLATION', 'context_before': 'Figure 2 shows the schematic of the knowledge distillation setup. Given an input image x, a teacher DNN maps this image to predictions pT . The C class predictions are obtained by applying Softmax ', 'modified_lines': 'function on the un-normalized log probability values z (the logits), i.e. pT = ezT ', 'original_lines': 'function on the un-normalized log probablity values z (the logits), i.e. pT = ezT ', 'after_paragraph_idx': None, 'before_paragraph_idx': 24}, {'section': '5.1 TOP-1 ERROR WITH PRIOR PROPOSALS FOR LOW-PRECISION NETWORKS', 'after_section': None, 'context_after': '5 ', 'paragraph_idx': 30, 'before_section': None, 'context_before': 'network would continuously guide the student network not only with the final trained logits, but also on what path the teacher takes towards generating those final higher accuracy logits. ', 'modified_lines': 'We implement pre-activation version of ResNet (He et al., 2016) in TensorFlow (Abadi et al., 2015). The training process closely follows the recipe mentioned in Torch implementation of ResNet - we use a batch size of 256 and no hyper-parameters are changed from what is mentioned in the recipe. For the teacher network, we experiment with ResNet-34, ResNet-50 and ResNet-101 as options. For the student network, we experiment with low-precision variants of ResNet-18, ResNet-34 and ResNet-50. ', 'original_lines': 'We implement pre-activation version of ResNet (He et al., 2016) in TensorFlow. The training process closely follows the recipe mentioned in Torch implementation of ResNet - we use a batch size of 256 and no hyper-parameters are changed from what is mentioned in the recipe. For the teacher network, we experiment with ResNet-34, ResNet-50 and ResNet-101 as options. For the student network, we experiment with low-precision variants of ResNet-18, ResNet-34 and ResNet-50. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'after_section': None, 'context_after': 'Figure 3: Difference in Top-1 error rate for low- precision variants of ResNet-18 with (blue bars) and ', 'paragraph_idx': 39, 'before_section': None, 'context_before': '29.9 32.4 ', 'modified_lines': 'Results with ResNet-18: Table 1 shows the ef- fect of lowering precision on the accuracy (Top- 1 error) of ResNet-18 with baseline (no teacher) and with ResNet-34, ResNet-50 and ResNet-101 as teachers. In the table, A denotes the precision of the activation maps (in bits) and W denotes the precision of the weights. The baseline Top-1 error for full-precision ResNet-18 is 30.4%. By low- ering the precision without using any help from a teacher network, the accuracy drops by 3.5% when using ternary and 4-bits precision (the col- umn corresponding to “Res-18 Baseline” in the table). With distillation based technique, the ac- curacy of low-precision configurations improves significantly. 
In fact, the accuracy of the full- precision ResNet-18 also improves when paired with a larger full-precision ResNet model (the row corresponding to “32A, 32W” in Table 1). The best full-precision accuracy was achieved with a student ResNet-18 and ResNet-101 as the teacher (improvement by 0.35% over the baseline). The gap between full-precision ResNet-18 and the best achieved ternary weight ResNet-18 is only 1% (improvement of 2% over previous best). With “8A, 4W”, we find the accuracy of the student ResNet-18 model to beat the baseline accuracy. We hy- pothesize regularization with low-precision (and distillation) to be the reason for this. “8A, 4W” improving the accuracy beyond baseline figure is only seen for ResNet-18. ', 'original_lines': 'Table 1 shows the effect of lowering precision on the accuracy of ResNet-18 with baseline (no teacher) and with ResNet-34, ResNet-50 and ResNet-101 as teachers. In the table, A denotes the precision of the activation maps (in bits) and W denotes the precision of the weights. The base- line Top-1 error for full-precision ResNet-18 is 30.4%. By lowering the precision without us- ing any help from a teacher network, the accu- racy drops by 3.5% when using ternary and 4-bits precision (the column corresponding to “Res-18 Baseline” in the table). With distillation based technique, the accuracy of low-precision config- urations improves significantly. In fact, the ac- curacy of the full-precision ResNet-18 also im- proves when paired with a larger full-precision ResNet model (the row corresponding to “32A, 32W” in Table 1). The best full-precision ac- curacy was achieved with a student ResNet-18 and ResNet-101 as the teacher (improvement by 0.35% over the baseline). The gap between full- precision ResNet-18 and the best achieved ternary weight ResNet-18 is only 1% (improvement of 2% over previous best). With “8A, 4W”, we find the accuracy of the student ResNet-18 model to beat the baseline accuracy. We hypothesize regularization with low-precision (and distillation) to be the reason for this. “8A, 4W” improving the accuracy beyond baseline figure is only seen for ResNet-18. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'after_section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'context_after': '(when trained under the guidance of a teacher network) versus not using any help from a teacher network. For this figure, the difference in Top-1 error of the best low-precision student network is calculated from the baseline full-precision network (i.e. ResNet-18 with 30.4% Top-1 error), i.e. we ', 'paragraph_idx': 42, 'before_section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'context_before': 'with full-precision numerics. Higher % difference denotes a better network configuration. ', 'modified_lines': 'Figure 3 shows the difference in Top-1 error rate achieved by our best low-precision student networks ', 'original_lines': 'Figure 3 shows the difference in Top-1 accuracy achieved by our best low-precision student networks ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 41}, {'section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 39, 'before_section': None, 'context_before': 'network (ResNet-34 for (a) and ResNet-50 for (b)) operating at full-precision. Higher % difference denotes a better network configuration. 
', 'modified_lines': 'Results with ResNet-34 and ResNet-50: Table 2 and Table 3 show the effect of lowering precision on the accuracy of ResNet-34 and ResNet-50, respectively, with distillation based technique. With a student ResNet-34 network, we use ResNet-34, ResNet-50 and ResNet-101 as teachers. With a student ResNet-50 network, we use ResNet-50 and ResNet-101 as teachers. The Top-1 error for full-precision ResNet-34 is 26.4%. Our best 4-bits weight and 8-bits activation ResNet-34 is within 0.5% of this number (26.9% error rate with ResNet-34 student and ResNet-50 teacher). This significantly improves upon the previously reported error rate of 29.7%. 4-bits weight and 8-bits activation for ResNet-50 gives us a model that is within 1.5% of full-precision model accuracy (25.3% vs. 23.8%). Figure 4a and Figure 4b show the difference in Top-1 error achieved by our best low-precision ResNet-34 and ResNet-50 student networks, respectively, and compares with results obtained using methods proposed in literature. Our Apprentice scheme significantly closes the gap between full-precision baseline networks and low-precision variants of the same networks. In most cases we see our scheme to better the previously reported accuracy numbers by 1.5%-3%. ', 'original_lines': 'Table 2 and Table 3 show the effect of lowering precision on the accuracy of ResNet-34 and ResNet- 50, respectively, with distillation based technique. With a student ResNet-34 network, we use ResNet-34, ResNet-50 and ResNet-101 as teachers. With a student ResNet-50 network, we use ResNet-50 and ResNet-101 as teachers. The Top-1 error for full-precision ResNet-34 is 26.4%. Our best 4-bits weight and 8-bits activation ResNet-34 is within 0.5% of this number (26.9% with ResNet-34 student and ResNet-50 teacher). This significantly improves upon the previously re- ported error rate of 29.7%. 4-bits weight and 8-bits activation for ResNet-50 gives us a model that is within 1.5% of full-precision model accuracy (25.3% vs. 23.8%). Figure 4a and Figure 4b show the difference in Top-1 accuracy achieved by our best low-precision ResNet-34 and ResNet-50 student networks, respectively, and compares with results obtained using methods proposed in literature. Our Apprentice scheme significantly closes the gap between full-precision baseline networks and low-precision variants of the same networks. In most cases we see our scheme to better the previously reported accuracy numbers by 1.5%-3%. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 KNOWLEDGE DISTILLATION', 'after_section': None, 'context_after': 'For the third term in equation 1, we experimented with a mean-squared error loss function and also a loss function with logits from both the student and the teacher network (i.e. H(zT , zA)). We did not find any improvement in accuracy compared to our original choice of the cost function formulation. Overall, we find the distillation process to be quite effective in getting us high accuracy low-precision 5.3 SCHEME-B: DISTILLING KNOWLEDGE FROM A TEACHER ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'jointly supervising a student network. This could be because of our choice of α, β and γ values. In Section 4, we mentioned about temperature, τ , for Softmax function and hyper-parameters α = 1, ', 'modified_lines': 'β = 0.5 and γ = 0.5. Since, we train directly on the logits of the teacher network, we did not have to experiment with the appropriate value of τ . 
τ is required when training on the soft targets produced by the teacher network. Although we did not do extensive studies experimenting with training on soft targets as opposed to logits, we did find that τ = 1 gives us best results when training on soft targets. Hinton et al. (2015) mention that when the student network is significantly smaller than the teacher network, small values of τ are more effective than large values. For few of the low-precision configurations, we experimented with α = β = γ = 1, and, α = 0.9, β = 1 and γ = 0.1 or 0.3. Each of these configurations, yielded a lower performance model compared to our original choice for these parameters. A thorough investigation of the behavior of the networks with other values of hyper-parameters and different loss functions is an agenda for our future work. models. All our low-precision models surpass previously reported low-precision accuracy figures. For example, TTQ scheme achieves 33.4% Top-1 error rate for ResNet-18 with 2-bits weight. Our best ResNet-18 model, using scheme-A, with 2-bits weight achieves ∼31.5% error rate, improving the model accuracy by ∼2% over TTQ. Similarly, the scheme in Mellempudi et al. (2017) achieves 29.2% Top-1 error with 2-bits weight and 8-bits activation. The best performing Apprentice net- work at this precision achieves 27.2% Top-1 error. For Scheme-B and Scheme-C, which we describe next, Scheme-A serves as the new baseline. ', 'original_lines': 'β = 0.5 and γ = 0.5. We found τ = 1 to give us best results (although we did not do extensive studies experimenting with this parameter). Hinton et al. (2015) mention that when the student network is significantly smaller than the teacher network, small values of τ are more effective than large values. For few of the low-precision configurations, we experimented with α = β = γ = 1, and, α = 0.9, β = 1 and γ = 0.1 or 0.3. Each of these configurations, yielded a lower performance model compared to our original choice for these parameters. models. All our low-precision models surpass previously reported accuracy figures. For example, TTQ scheme achieves 33.4% Top-1 error rate for ResNet-18 with 2-bits weight. Our best ResNet-18 model with 2-bits weight achieves ∼31.5% error rate, improving the model accuracy by ∼2% over TTQ. Similarly, the scheme in Mellempudi et al. (2017) achieves 29.2% Top-1 error with 2-bits weight and 8-bits activation. The best performing Apprentice network at this precision achieves 27.2% Top-1 error. For Scheme-B and Scheme-C, which we describe next, Scheme-A serves as the new baseline. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 RELATED WORK', 'after_section': None, 'context_after': '5.4 SCHEME-C: FINE-TUNING THE STUDENT MODEL ', 'paragraph_idx': 20, 'before_section': None, 'context_before': 'With this scheme, the training accuracies are similar to that reported in Table 1, 2 and 3. The low-precision student networks, however, learn in fewer number of epochs. Figure 5 plots the Top-1 error rates for few of the configurations from our experiment suite. In each of these plots, the student ', 'modified_lines': 'network in scheme-B converges around 80th-85th epoch compared to about 105 epochs in scheme- A. In general, we find the student networks with scheme-B to learn in about 10%-20% fewer epochs than the student networks trained using scheme-A. ', 'original_lines': 'network in scheme-B converges around 85th epoch compared to about 105 epochs in scheme-A. 
In general, we find the student networks with scheme-B to learn in about 10%-20% fewer epochs than the student networks trained using scheme-A. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'We found the final accuracy of the models obtained using this scheme to be (marginally) better than those obtained using scheme-A or scheme-B. Table 4 shows error rates of few configurations of low- precision student network obtained using scheme-A (or scheme-B) and scheme-C. For ResNet-50 student network, the accuracy with ternary weights is further improved by 0.6% compared to that obtained using scheme-A. Note that the performance of ternary networks obtained using scheme- A are already state-of-the-art. Hence, for ResNet-50 ternary networks, 24.7% Top-1 error rate is ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'epochs, followed by 1e-4 for another 5 to 10 epochs, followed by 1e-5 for another 5 epochs to give us the best accuracy. Some configurations run for about 40 to 50 epochs before stabilizing. For these configurations, we found training using scheme-B with warm startup (train the student network at ', 'modified_lines': 'full-precision for about 25-30 epochs before lowering the precision) to be equally good. 9 00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-34studentwithResNet-50teacher,2W32A00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-34studentwithResNet-50teacher,4W8A00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-50studentwithResNet-101teacher,2W32A00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-50studentwithResNet-101teacher,4W8A Under review as a conference paper at ICLR 2018 ', 'original_lines': 'full-precision for about 25 epochs before lowering the precision) to be equally good. 9 00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-34studentwithResNet-50teacher,2W32A00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-34studentwithResNet-50teacher,4W8A00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-50studentwithResNet-101teacher,2W32A00.10.20.30.40.50.60.70.80.9105101520253035404550556065707580859095100105Top-1errorEpochsScheme-AScheme-BResNet-50studentwithResNet-101teacher,4W8A Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.5 DISCUSSION - TERNARY PRECISION VERSUS SPARSITY', 'after_section': None, 'context_after': '6 CONCLUSIONS 10 ', 'paragraph_idx': 63, 'before_section': None, 'context_before': '5.5 DISCUSSION - TERNARY PRECISION VERSUS SPARSITY As mentioned earlier, low-precision is a form of model compression. There are many works which ', 'modified_lines': 'target network sparsification and pruning techniques to compress a model. With ternary preci- sion models, the model size reduces by a factor of 2/32 compared to full-precision models. With Apprentice, we show how one can get a performant model with ternary precision. Many works tar- geting network pruning and sparsification target a full-precision model to implement their scheme. 
To be comparable in model size to ternary networks, a full-precision model needs to be sparsified by 93.75%. Further, to be effective, a sparse model needs to store a key for every non-zero value de- noting the position of the value in the weight tensor. This adds storage overhead and a sparse model needs to be about 95% sparse to be at-par in memory size as a 2-bit model. Note that ternary preci- sion also has inherent sparsity (zero is a term in the ternary symbol dictionary) – we find our ternary models to be about 50% sparse. In work by Wen et al. (2016) and Han et al. (2015b), sparsification of full-precision networks is proposed but the sparsity achieved is less than 93.75%. Further, the network accuracy using techniques in both these works lead to larger degradation in accuracy com- pared to our ternary models. Overall, we believe, our ternary precision models to be state-of-the-art not only in accuracy (we better the accuracy compared to prior ternary precision models) but also when one considers the size of the model at the accuracy level achieved by low-precision or sparse networks. While low-precision networks have system-level benefits, the drawback of such models is degraded accuracy when compared to full-precision models. We present three schemes based on knowledge distillation concept to improve the accuracy of low-precision networks and close the gap between the accuracy of these models and full-precision models. Each of the three schemes improve the accuracy of the low-precision network configuration compared to prior proposals. We motivate the need for a smaller model size in low batch, real-time and resource constrained inference deployment systems. We envision the low-precision models produced by our schemes to simplify the inference deployment process on resource constrained systems and on cloud-based deployment systems where low latency is a critical requirement. ', 'original_lines': 'target network sparsification and pruning to compress a model. With ternary precision models, the model size reduces by a factor of 2/32 compared to full-precision models. With Apprentice, we show how one can get a performant model with ternary precision. Many works targeting network pruning and sparsification target a full-precision model to implement their scheme. To be compa- rable in model size to ternary networks, a full-precision model needs to be sparsified by 93.75%. Further, to be effective, a sparse model needs to store a key for every non-zero value denoting the position of the value in the weight tensor. This adds storage overhead and a sparse model needs to be about 95% sparse to be at-par in memory size as a 2-bit model. Note that ternary precision also has inherent sparsity (zero is a term in the ternary symbol dictionary) – we find our ternary models to be about 50% sparse. In work by Wen et al. (2016) and Han et al. (2015b), sparsification of full- precision networks is proposed but the sparsity achieved is less than 93.75%. Further, the network accuracy using techniques in both these works lead to larger degradation in accuracy compared to our ternary models. Overall, we believe, our ternary precision models to be state-of-the-art not only in accuracy (we better the accuracy compared to prior ternary precision models) but also when one considers the size of the model at the accuracy level achieved by low-precision or sparse networks. We present three schemes based on knowledge distillation concept to improve the accuracy of low- precision networks. 
Each of the three schemes improve the accuracy of the low-precision network configuration compared to prior proposals. We motivate the need for a smaller model size in low batch, real-time and resource constrained inference deployment systems. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'G. Urban, K. J. Geras, S. Ebrahimi Kahou, O. Aslan, S. Wang, R. Caruana, A. Mohamed, M. Phili- pose, and M. Richardson. Do Deep Convolutional Nets Really Need to be Deep and Convolu- tional? ArXiv e-prints, March 2016. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'network inference. CoRR, abs/1612.07119, 2016. URL http://arxiv.org/abs/1612. 07119. ', 'modified_lines': '', 'original_lines': '12 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2018-01-02 19:42:20 |
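
The diff in the record above describes the Apprentice setup: a teacher network produces p_T = softmax(z_T), a low-precision student produces p_A, and equation 1 of the paper combines three terms with the quoted weights α = 1, β = 0.5, γ = 0.5. As a reading aid, here is a minimal PyTorch-style sketch of that cost function, assuming hypothetical `teacher_logits`/`student_logits` tensors; the authors' TensorFlow implementation is not part of this record.

```python
import torch
import torch.nn.functional as F

def apprentice_loss(teacher_logits, student_logits, labels,
                    alpha=1.0, beta=0.5, gamma=0.5):
    # Term 1: teacher's cross-entropy against ground truth (trains the teacher).
    loss_teacher = F.cross_entropy(teacher_logits, labels)
    # Term 2: low-precision student's cross-entropy against ground truth.
    loss_student = F.cross_entropy(student_logits, labels)
    # Term 3: distillation term. The revision says the student trains directly
    # on the teacher's logits (tau = 1), so the soft targets are the teacher's
    # softmax. Detaching the teacher here is an assumption about gradient
    # routing in the joint (scheme-A) training, not stated in the text.
    soft_targets = F.softmax(teacher_logits.detach(), dim=1)
    loss_distill = -(soft_targets *
                     F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()
    return alpha * loss_teacher + beta * loss_student + gamma * loss_distill
```

Per the revision text, zeroing the first term recovers scheme-B (a pre-trained, frozen teacher), and scheme-C additionally starts the student from a trained full-precision checkpoint.
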
| ICLR.cc/2018/Conference | SkHjxwK7G | HJslJebAZ | [] | 2018-01-25 15:41:13 |
| ICLR.cc/2018/Conference | HJslJebAZ | BJrLrQcwf |
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed ', 'modified_lines': 'models. In this paper, we study combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline. ', 'original_lines': 'In this paper, we study the combination of these two techniques and models. show that the performance of low-precision networks can be significantly im- proved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'systems such as portable devices or sensor networks, and for applications in which real-time predic- tions are required. Performing inference on edge-devices comes with severe constraints on memory, compute and power. Additionally, ensemble based methods, which one can potentially use to get ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'FLOPS) compute requirements and generate models which are O(Mega-Bytes) in storage (Canziani et al., 2016). Further, the memory and compute requirements during training and inference are quite different (Mishra et al., 2017). Training is performed on big datasets with large batch-sizes where ', 'modified_lines': 'memory footprint of activations dominates the model memory footprint. On the other hand, batch- size during inference is typically small and the model’s memory footprint dominates the runtime memory requirements. Because of complexity in compute, memory and storage requirements, training phase of the net- works is performed on CPU and/or GPU clusters in a distributed computing environment. Once trained, a challenging aspect is deployment of trained models on resource constrained inference ', 'original_lines': 'the memory footprint of activations dominates the model memory footprint. On the other hand, the batch-size during inference is typically small and the model’s memory footprint dominates the runtime memory requirements. Because of the complexity in compute, memory and storage requirements, training phase of the networks is performed on CPU and/or GPU clusters in a distributed computing environment. 
Once trained, a challenging aspect is deployment of the trained models on resource constrained inference ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 Our proposal: In this paper, we study the combination of network quantization with model com- We call our approach Apprentice1 and study three schemes which produce low-precision net- works using knowledge distillation techniques. Each of these three schemes produce state-of-the-art ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'Quantization using low-precision numerics (Vanhoucke et al., 2011; Zhou et al., 2016; Lin et al., 2015; Miyashita et al., 2016; Gupta et al., 2015; Zhu et al., 2016; Rastegari et al., 2016; Courbariaux ', 'modified_lines': 'et al., 2015; Umuroglu et al., 2016; Mishra et al., 2017) and model compression (Buciluˇa et al., 2006; Hinton et al., 2015; Romero et al., 2014) have emerged as popular solutions for resource constrained deployment scenarios. With quantization, a low-precision version of network model is generated and deployed on the device. Operating in lower precision mode reduces compute as well as data movement and storage requirements. However, majority of existing works in low-precision DNNs sacrifice accuracy over baseline full-precision networks. With model compression, a smaller Published as a conference paper at ICLR 2018 low memory footprint network is trained to mimic the behaviour of the original complex network. During this training, a process called, knowledge distillation is used to “transfer knowledge” from the complex network to the smaller network. Work by Hinton et al. (2015) shows that the knowledge distillation scheme can yield networks at comparable or slightly better accuracy than the original complex model. However, to the best of our knowledge, all prior works using model compression techniques target compression at full-precision. pression and show that accuracies of low-precision networks can be significantly improved by using knowledge distillation techniques. Previous studies on model compression use a large network as the teacher network and a small network as the student network. The small student network learns from teacher network using distillation process. The network architecture of the student network is typically different from that of the teacher network – for e.g. Hinton et al. (2015) investigate a student network that has fewer number of neurons in the hidden layers compared to the teacher net- work. In our work, the student network has similar topology as that of teacher network, except that the student network has low-precision neurons compared to the teacher network which has neurons operating at full-precision. ', 'original_lines': 'et al., 2015; Umuroglu et al., 2016; Mishra et al., 2017) and model compression (Buciluˇa et al., 2006; Hinton et al., 2015; Romero et al., 2014) have emerged as popular solutions for resource constrained deployment scenarios. With quantization, a low-precision version of the network model is gener- ated and deployed on the device. Operating in lower precision mode reduces compute as well as data movement and storage requirements. However, the majority of existing works in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. With model compression, a smaller low memory footprint network is trained to mimic the behaviour of the original complex network. 
During this training, a process called, knowledge distillation is used to “transfer knowl- Under review as a conference paper at ICLR 2018 edge” from the complex network to the smaller network. Work by Hinton et al. (2015) shows that the knowledge distillation scheme can yield networks at comparable or slightly better accuracy than the original complex model. However, to the best of our knowledge, all prior works using model compression techniques target compression at full-precision. pression and show that the accuracies of low-precision networks can be significantly improved by using knowledge distillation techniques. Previous studies on model compression use a large network as the teacher network and a small network as the student network. The small student network learns from the teacher network using the distillation process. The network architecture of the student net- work is typically different from that of the teacher network – for e.g. Hinton et al. (2015) investigate a student network that has fewer number of neurons in the hidden layers compared to the teacher network. In our work, the student network has similar topology as that of the teacher network, ex- cept that the student network has low-precision neurons compared to the teacher network which has neurons operating at full-precision. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1Dictionary defines apprentice as a person who is learning a trade from a skilled employer, having agreed to work for a fixed period at low wages. In our work, the apprentice is a low-precision network which is learning ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'memory allocation during inference is then the maximum of IFM and maximum of OFM memory required across all the layers plus the sum of all weight tensors (Mishra et al., 2017). When infer- ence phase for DNNs is performed with a small batch size, the memory footprint of the weights ', 'modified_lines': '', 'original_lines': 'exceeds the footprint of the activation maps. This aspect is shown in Figure 1 for 4 different net- works (AlexNet (Krizhevsky et al., 2012), Inception-Resnet-v2 (Szegedy et al., 2016), ResNet-50 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 MOTIVATION FOR LOW-PRECISION MODEL PARAMETERS', 'after_section': None, 'context_after': 'Figure 1: Memory footprint of activations (ACTs) and weights (W) during inference for ', 'paragraph_idx': 14, 'before_section': '2 MOTIVATION FOR LOW-PRECISION MODEL PARAMETERS', 'context_before': '2016; Courbariaux et al., 2015). Similarly, when us- ing ternary precision for weights and full-precision for activations, the multiplier unit can be replaced ', 'modified_lines': 'with a sign comparator unit. Simpler hardware also helps lower the inference latency and energy bud- get. Thus, operating in lower precision mode re- duces compute as well as data movement and storage requirements. ', 'original_lines': 'with a sign comparator unit (Li & Liu, 2016; Zhu et al., 2016). Simpler hardware also helps lower the inference latency and energy budget. Thus, operat- ing in lower precision mode reduces compute as well as data movement and storage requirements. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': 'Abstract', 'after_section': None, 'context_after': 'In equation 1, lowering the first term of the cost function gives a better teacher network and lowering the second term gives a better student network. The third term is the knowledge distillation term whereby the student network attempts to mimic the knowledge in the teacher network. In Hinton et al. (2015), the logits of the teacher network are divided by a temperature factor τ . Using a higher value for τ produces a softer probability distribution when taking the Softmax of the logits. In our ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'tively, y is the ground truth, H(·) denotes a loss function and, α, β and γ are weighting factors to prioritize the output of a certain loss function over the other. ', 'modified_lines': '', 'original_lines': 'Figure 2: Schematic of the knowledge distillation setup. The teacher network is a high precision network and the apprentice network is a low-precision network. 4 Input imagexsoftmaxsoftmaxTeacher networkApprentice networkzTpTzApAHard label!Knowledge distillationWTWAFilter bankFilter bank Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 2: Top-1 validation set error rate (%) on ImageNet-1K for ResNet-34 stu- dent network as precision of activations (A) and weight (W) changes. The last ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'is paired with a teacher network. They mention improving the accuracy of a small model on MNIST dataset. We show the efficacy of distillation based techniques on a much bigger model (ResNet) with much larger dataset (ImageNet). ', 'modified_lines': '', 'original_lines': ' 6 0.3% -1.0% 0.8% -1.5% 0.0% -3.0% -3.2% -3.5% -6% -4% -2% 0% 2% 32A,32W32A,2W8A,4W8A,2WDifference(!)inTop-1errorforRes-18frombaseline!from32A,32WwithoutApprentice!from32A,32WwithApprentice Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Discussion: In scheme-A, we use a teacher network that is always as large or larger in number of parameters than the student network. We experimented with a ternary ResNet-34 student network ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'obtained using methods proposed in literature. Our Apprentice scheme significantly closes the gap between full-precision baseline networks and low-precision variants of the same networks. In most cases we see our scheme to better the previously reported accuracy numbers by 1.5%-3%. ', 'modified_lines': '', 'original_lines': ' 7 0.3% -0.8% -0.5% -2.1% 0.0% -1.9% -3.3% -4.4% -6% -4% -2% 0% 2% 32A,32W32A,2W8A,4W8A,2WDifference(!)inTop-1errorforRes-34frombaseline!from32A,32WwithoutApprentice!from32A,32WwithApprentice0.3% -1.5% -1.5% -3.4% 0.0% -2.3% -4.7% -5.5% -6% -4% -2% 0% 2% 32A,32W32A,2W8A,4W8A,2WDifference(!)inTop-1errorforRes-50frombaseline!from32A,32WwithoutApprentice!from32A,32WwithApprentice Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'after_section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'context_after': 'targets. Hinton et al. 
(2015) mention that when the student network is significantly smaller than the teacher network, small values of τ are more effective than large values. For few of the low-precision configurations, we experimented with α = β = γ = 1, and, α = 0.9, β = 1 and γ = 0.1 or 0.3. ', 'paragraph_idx': 45, 'before_section': '5.2 SCHEME-A: JOINT TRAINING OF TEACHER-STUDENT NETWORKS', 'context_before': 'β = 0.5 and γ = 0.5. Since, we train directly on the logits of the teacher network, we did not have to experiment with the appropriate value of τ . τ is required when training on the soft targets produced by the teacher network. Although we did not do extensive studies experimenting with training on ', 'modified_lines': 'soft targets as opposed to logits, we find that τ = 1 gives us best results when training on soft ', 'original_lines': 'soft targets as opposed to logits, we did find that τ = 1 gives us best results when training on soft ', 'after_paragraph_idx': 45, 'before_paragraph_idx': 45}, {'section': '5.3 SCHEME-B: DISTILLING KNOWLEDGE FROM A TEACHER', 'after_section': '5.3 SCHEME-B: DISTILLING KNOWLEDGE FROM A TEACHER', 'context_after': 'With scheme-B, one can pre-compute and store the logit values for the input images on disk and access them during training the student network. This saves the forward pass computations in the teacher network. Scheme-B might also help the scenario where a student network attempts to learn the “dark knowledge” from a teacher network that has already been trained on some private or sensitive data (in addition to the data the student network is interested in training on). ', 'paragraph_idx': 56, 'before_section': None, 'context_before': 'scheme, the first term in equation 1 zeroes out and only the last two terms in the equation contribute toward the loss function. ', 'modified_lines': 'Figure 5: Top-1 error rate versus epochs of four student networks using scheme-A and scheme-B. ', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 Figure 5: Top-1 error rate versus epochs of four student networks using scheme-A and scheme-B. ', 'after_paragraph_idx': 57, 'before_paragraph_idx': None}]
| 2018-02-20 22:46:37 |
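
The revision above also discusses the Softmax temperature τ: dividing the teacher's logits by a larger τ softens the distribution and boosts the activations of incorrect classes (the authors find τ = 1 best when training on soft targets). A toy NumPy illustration with made-up logits, not values from the paper:

```python
import numpy as np

def soft_targets(logits, tau=1.0):
    """Temperature-scaled softmax: divide the logits by tau, then normalize."""
    z = np.asarray(logits, dtype=np.float64) / tau
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

print(soft_targets([5.0, 2.0, 0.5], tau=1.0))  # ~[0.94, 0.05, 0.01], peaked
print(soft_targets([5.0, 2.0, 0.5], tau=4.0))  # ~[0.56, 0.26, 0.18], softer
```
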
| ICLR.cc/2018/Conference | BJrLrQcwf | BygHpNm_z |
[{'section': '5.4 SCHEME-C: FINE-TUNING THE STUDENT MODEL', 'after_section': '5.4 SCHEME-C: FINE-TUNING THE STUDENT MODEL', 'context_after': 'We find the final accuracy of the models obtained using scheme-C to be (marginally) better than those obtained using scheme-A or scheme-B. Table 4 shows error rates of few configurations of ', 'paragraph_idx': 60, 'before_section': '5.4 SCHEME-C: FINE-TUNING THE STUDENT MODEL', 'context_before': 'Similar to scheme-B, only the final two terms in equation 1 comprise the loss function and the low- precision student network is trained with back-propagation algorithm. Since, the network starts from ', 'modified_lines': 'a good initial point, comparatively low learning rate is used throughout the training process. There is no clear recipe for learning rates (and change of learning rate with epochs) which works across all the configurations. In general, we find training with a learning rate of 1e-3 for 10 to 15 epochs, followed by 1e-4 for another 5 to 10 epochs, followed by 1e-5 for another 5 epochs to give us the best accuracy. Some configurations run for about 40 to 50 epochs before stabilizing. For these configurations, we find training using scheme-B with warm startup (train the student network at full-precision for about 25-30 epochs before lowering the precision) to be equally good. Wu (2016) investigate a similar scheme for binary precision on AlexNet. Our experiments show that distillation is an overkill for AlexNet and one can get comparable accuracies using techniques proposed in (Tang et al., 2017; Mishra et al., 2017). Further, Wu (2016) hypothesize that distillation scheme will work on larger networks, we show in this paper how to make it work. Tann et al. (2017) use a similar scheme for AlexNet and mention starting from a non-global optimal checkpoint gives better accuracy, though we did not find this observation to hold in our experiments. ', 'original_lines': 'a good initial point, comparatively low learning rate is used throughout the training process. There is no clear recipe for learning rates (and change of learning rate with epochs) which works across all the configurations. In general, we find training with a learning rate of 1e-3 for 10 to 15 epochs, followed by 1e-4 for another 5 to 10 epochs, followed by 1e-5 for another 5 epochs to give us the best accuracy. Some configurations run for about 40 to 50 epochs before stabilizing. For these configurations, we find training using scheme-B with warm startup (train the student network at full-precision for about 25-30 epochs before lowering the precision) to be equally good. Tann et al. (2017) use a similar scheme for AlexNet and mention starting from a non-global optimal checkpoint gives better accuracy, though we did not find this observation to hold in our experiments. ', 'after_paragraph_idx': 61, 'before_paragraph_idx': 60}]
|
2018-02-27 20:19:04
|
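The scheme-C record above fine-tunes a low-precision student from a good initialization using a low step-wise learning-rate schedule (1e-3 for 10 to 15 epochs, then 1e-4 for another 5 to 10, then 1e-5 for about 5). A sketch of that schedule as a plain Python helper; the exact epoch boundaries are illustrative picks from the reported ranges.

```python
def scheme_c_lr(epoch: int) -> float:
    """Step schedule for fine-tuning a low-precision student from a
    full-precision checkpoint; the boundaries (12/20) are assumptions
    within the ranges the record reports."""
    if epoch < 12:       # ~10-15 epochs at 1e-3
        return 1e-3
    if epoch < 20:       # ~5-10 further epochs at 1e-4
        return 1e-4
    return 1e-5          # final ~5 epochs at 1e-5
```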
ICLR.cc/2018/Conference
|
Hkj5yAlCb
|
SJcZgClRb
|
[]
|
2017-10-27 15:25:54
|
ICLR.cc/2018/Conference
|
SJcZgClRb
|
BkgwgAlRZ
|
[]
|
2017-10-27 15:27:20
|
ICLR.cc/2018/Conference
|
BkgwgAlRZ
|
H1gDpalRZ
|
[]
|
2018-01-25 15:41:35
|
ICLR.cc/2018/Conference
|
H1gDpalRZ
|
SkldBQ9PM
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'in low-precision DNNs sacrifice accuracy over the baseline full-precision networks. Further, most prior works target reducing the precision of the model parameters (network weights). This primarily benefits the inference step only when batch sizes are small. To improve both execution efficiency and accuracy of low-precision networks, we reduce both the precision of activation maps and model parameters and increase the number of filter maps in a layer. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'such efficiency benefits, there are many existing works which propose low-precision deep neural net- works (DNNs) (Zhou et al., 2017; Lin et al., 2015; Miyashita et al., 2016; Gupta et al., 2015b; Van- houcke et al., 2011), even down to 2-bit ternary mode (Zhu et al., 2016; Li & Liu, 2016; Venkatesh ', 'modified_lines': 'et al., 2016) and 1-bit binary mode (Zhou et al., 2016; Courbariaux & Bengio, 2016; Rastegari et al., 2016; Courbariaux et al., 2015; Umuroglu et al., 2016). However, majority of existing works We observe that activation maps (neuron outputs) occupy more memory compared to the model parameters for batch sizes typical during training. This observation holds even during inference when batch size is around eight or more. Based on this observation, we study schemes for training and inference using low-precision DNNs where we reduce the precision of activation maps as well as the model parameters without sacrificing network accuracy. ', 'original_lines': 'et al., 2016) and 1-bit binary mode (Zhou et al., 2016; Courbariaux & Bengio, 2016; Rastegari et al., 2016; Courbariaux et al., 2015; Umuroglu et al., 2016). However, the majority of existing works ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': 'Abstract', 'after_section': None, 'context_after': 'binary networks and show state-of-the art results for ResNet-34 (69.85% top-1 with 2x wide) and AlexNet (48.04% top-1 with 1.3x wide). To the best of our knowledge, our reported accuracies with binary networks and 4-bit precision are highest to date. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'models while achieving similar or better accuracy than baseline network. With 4-bit activation and 2-bit weights, we find the accuracy to be at-par with baseline full-precision. Making the networks wider and operating with 1-bit precision, we close the accuracy gap between previously reported ', 'modified_lines': '', 'original_lines': ' 1 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 WRPN SCHEME AND STUDIES ON ALEXNET', 'after_section': '3 WRPN SCHEME AND STUDIES ON ALEXNET', 'context_after': 'In our work, we maintain the depth parameter same as baseline network but widen the filter maps. We call our approach WRPN - wide reduced-precision networks. In practice, we find this scheme to be very simple and effective - starting with a baseline network architecture, one can change the width of each filter map without changing any other network design parameter or hyper-parameters. 
Carefully reducing precision and simultaneously widening filters keeps the total compute cost of the network under or at-par with baseline cost.1 ', 'paragraph_idx': 23, 'before_section': '3 WRPN SCHEME AND STUDIES ON ALEXNET', 'context_before': 'Our widening of filter maps is inspired from Wide ResNet (Zagoruyko & Komodakis, 2016) work where the depth of the network is reduced and width of each layer is increased (the operand precision ', 'modified_lines': 'is still FP32). Wide ResNet requires a re-design of the network architecture. 3 Published as a conference paper at ICLR 2018 ', 'original_lines': 'is still FP32). Wide ResNet requires a re-design of the network architecture. 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}, {'section': '4 STUDIES ON DEEPER NETWORKS', 'after_section': None, 'context_after': '4.1 RESNET ResNet-34 has 3x3 filters in each of its modular layers with shortcut connections being 1x1. The filter bank width changes from 64 to 512 as depth increases. We use the pre-activation variant of ResNet and the baseline top-1 accuracy of our ResNet-34 implementation using single-precision 32-bits data format is 73.59%. Binarizing weights and activations for all layers except the first and the last layer in this network gives top-1 accuracy of 60.5%. For binarizing ResNet we did not re-order any layer (as is done in XNOR-NET). We used the same hyper-parameters and learning rate schedule as the baseline network. As a reference, for ResNet-18, the gap between XNOR-NET ', 'paragraph_idx': 29, 'before_section': '4 STUDIES ON DEEPER NETWORKS', 'context_before': '2015) and batch-normalized Inception (Ioffe & Szegedy, 2015) and find similar trends, particularly that 2-bits weight and 4-bits activations continue to provide at-par accuracy as baseline. We use TensorFlow (Abadi et al., 2015) and tensorpack for all our evaluations and use ILSVRC-12 train ', 'modified_lines': 'and val dataset for analysis. 1Compute cost is the product of the number of FMA operations and the sum of width of the activation and weight operands. 4 Published as a conference paper at ICLR 2018 ', 'original_lines': 'and val dataset for analysis.2 1Compute cost is the product of the number of FMA operations and the sum of width of the activation and weight operands. 2We will open-source our implementation of reduced-precision AlexNet, ResNet and batch-normalized Inception networks. 4 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 29}, {'section': '5 HARDWARE FRIENDLY QUANTIZATION SCHEME', 'after_section': '5 HARDWARE FRIENDLY QUANTIZATION SCHEME', 'context_after': 'circumvents this problem by defining an operator that has arbitrary forward and backward opera- tions. ', 'paragraph_idx': 39, 'before_section': None, 'context_before': '0.25x 0.13x ', 'modified_lines': 'ically, this small and finite set would have zero gradients with respect to its inputs. STE method ', 'original_lines': '', 'after_paragraph_idx': 39, 'before_paragraph_idx': None}]
|
2018-02-20 22:47:04
|
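The WRPN record above lowers the precision of both weights and activations, widens the filter banks to recover accuracy, and relies on the straight-through estimator (STE) because a quantizer's output takes finitely many values and therefore has zero gradient almost everywhere. A hedged sketch of the common uniform k-bit quantizer with an STE backward pass follows; the exact quantizer the paper uses may differ.

```python
import torch

def quantize_k(x: torch.Tensor, k: int) -> torch.Tensor:
    """Uniform k-bit quantization of a tensor assumed to lie in [0, 1].
    Forward: round to 2**k levels. Backward: identity (STE)."""
    n = float(2 ** k - 1)
    x_q = torch.round(x * n) / n
    # STE trick: the forward value is x_q, but gradients flow through x.
    return x + (x_q - x).detach()
```

Under the footnote's cost measure (FMA count times the summed operand widths), a 2x-wide binary layer illustrates why widening can stay cheap: roughly 4x the FMAs of the baseline, but operand width 1 + 1 = 2 rather than 32 + 32 = 64, i.e. about 4 × 2/64 = 0.125x the FP32 cost.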
ICLR.cc/2018/Conference
|
ByI77wOmf
|
r1DCnMZCb
|
[]
|
2018-01-25 15:39:46
|
ICLR.cc/2018/Conference
|
Bk3mHAKNf
|
rkehWW-R-
|
[]
|
2018-01-25 15:40:49
|
ICLR.cc/2018/Conference
|
r1lpJNTXM
|
Sk_J1Bx0Z
|
[]
|
2018-01-25 15:42:12
|
ICLR.cc/2018/Conference
|
Sk_J1Bx0Z
|
rJHiHzADf
|
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'times. We can further reduce the number of parameter updates by increasing the learning rate (cid:15) and scaling the batch size B ∝ (cid:15). Finally, one can increase the mo- mentum coefficient m and scale B ∝ 1/(1 − m), although this tends to slightly ', 'modified_lines': 'reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to 76.1% validation accuracy in under 30 minutes. ', 'original_lines': 'reduce the test accuracy. Crucially, our techniques allow us to repurpose exist- ing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to 77% validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images. ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'the SGD dynamics, g = (cid:15)( N • When one decays the learning rate, one simultaneously decays the scale of random fluctu- batch size during training. This strategy achieves near-identical model performance on the Our proposal does not require any fine-tuning as we follow pre-existing training schedules; when the learning rate drops by a factor of α, we instead increase the batch size by α. 1 We note that a number of recent works have discussed increasing the batch size during training (Friedlander & Schmidt, 2012; Byrd et al., 2012; Balles et al., 2016; Bottou et al., 2016; De et al., ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'batches can be parallelized across many machines, reducing training time. Unfortunately, when we increase the batch size the test set accuracy often falls (Keskar et al., 2016; Goyal et al., 2017). ', 'modified_lines': 'To understand this surprising observation, Smith & Le (2017) argued one should interpret SGD as integrating a stochastic differential equation. They showed that the scale of random fluctuations in B − 1), where (cid:15) is the learning rate, N training set size and B batch size. Furthermore, they found that there is an optimum fluctuation scale g which maximizes the test set accuracy (at constant learning rate), and this introduces an optimal batch size proportional to the learning rate when B (cid:28) N . Goyal et al. (2017) already observed this scaling rule empirically and exploited it to train ResNet-50 to 76.3% ImageNet validation accuracy in one hour. Here we show, ations g in the SGD dynamics. Decaying the learning rate is simulated annealing. We propose an alternative procedure; instead of decaying the learning rate, we increase the test set with the same number of training epochs but significantly fewer parameter updates. • As shown previously, we can further reduce the number of parameter updates by increasing the learning rate and scaling B ∝ (cid:15). One can also increase the momentum coefficient and scale B ∝ 1/(1 − m), although this slightly reduces the test accuracy. We train Inception- ResNet-V2 on ImageNet in under 2500 parameter updates, using batches of 65536 images, and reach a validation set accuracy of 77%. We also replicate the setup of Goyal et al. (2017) on TPU and train ResNet-50 on ImageNet to 76.1% accuracy in under 30 minutes. ∗Both authors contributed equally. 
Work performed as members of the Google Brain Residency Program. Published as a conference paper at ICLR 2018 ', 'original_lines': 'To understand this surprising observation, Smith & Le (2017) argued that one should interpret SGD as integrating a stochastic differential equation. They showed that the scale of random fluctuations in B − 1), where (cid:15) is the learning rate, N training set size and B batch size. Furthermore, they found empirically that there was an optimum fluctuation scale g which maximized the test set accuracy (at constant learning rate). This leads to the emergence of an optimal batch size proportional to the learning rate when B (cid:28) N . Goyal et al. (2017) had already observed such a scaling rule, and exploited it to train ResNet-50 on ImageNet in one hour. In this work we show, ations g in the SGD dynamics. Decaying the learning rate is simulated annealing. • We propose an alternative procedure; instead of decaying the learning rate, we increase the test set with the same number of training epochs but dramatically fewer parameter updates. • As shown previously, we can further reduce the number of parameter updates without any drop in test set accuracy by increasing the learning rate and scaling B ∝ (cid:15). As a method of last resort, one can also increase the momentum coefficient and scale B ∝ 1/(1 − m). • Combining these strategies, we train Inception-ResNet-V2 on ImageNet in under 2500 parameter updates, reaching a validation set accuracy of 77%. To achieve this, we train on vast batches of 65536 images. By contrast, Goyal et al. (2017) required 14000 parameter updates to reach 76% validation accuracy with ResNet-50, using batches of 8192 images. Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': 'Abstract', 'after_section': None, 'context_after': '2 STOCHASTIC GRADIENT DESCENT AND CONVEX OPTIMIZATION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'In section 2 we discuss the convergence criteria for SGD in strongly convex minima, in section 3 we interpret decaying learning rates as simulated annealing, and in section 4 we discuss the difficulties ', 'modified_lines': 'of training with large momentum coefficients. Finally in section 5 we present conclusive experimen- tal evidence that the empirical benefits of decaying learning rates in deep learning can be obtained by instead increasing the batch size during training. We exploit this observation and other tricks to achieve efficient large batch training on CIFAR-10 and ImageNet. ', 'original_lines': 'of combining decaying learning rates with large momentum coefficients. Finally in section 5 we present conclusive experimental evidence that the empirical benefits of decaying learning rates in deep learning can be obtained by increasing the batch size during training. We exploit this observa- tion and other tricks to achieve efficient large batch training on CIFAR-10 and ImageNet. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 STOCHASTIC GRADIENT DESCENT AND CONVEX OPTIMIZATION', 'after_section': '2 STOCHASTIC GRADIENT DESCENT AND CONVEX OPTIMIZATION', 'context_after': '(3) ', 'paragraph_idx': 8, 'before_section': '2 STOCHASTIC GRADIENT DESCENT AND CONVEX OPTIMIZATION', 'context_before': 'the covariances in gradient fluctuations between different parameters. 
They also proved that the “noise scale” g = (cid:15)( N B − 1), where (cid:15) is the learning rate, N the training set size and B the batch ', 'modified_lines': 'size. This noise scale controls the magnitude of the random fluctuations in the training dynamics. ', 'original_lines': 'size. This noise scale is a scalar which controls the magnitude of the random fluctuations. ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '3 (a) ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'Kingma & Ba (2014) proposed initialization bias correction, whereby the learning rate is increased at early times to compensate the suppressed initial value of the accumulation. However when the batch size is large, we found that this often causes instabilities during the early stages of training. ', 'modified_lines': 'We note that Goyal et al. (2017) recommended a reduced learning rate for the first few epochs. Published as a conference paper at ICLR 2018 ', 'original_lines': 'On similar grounds, Goyal et al. (2017) recommend a reduced learning rate for the first few epochs. Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '5.1 SIMULATED ANNEALING IN A WIDE RESNET ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '5 EXPERIMENTS ', 'modified_lines': 'In section 5.1, we demonstrate that decreasing the learning rate and increasing the batch size during training are equivalent. In section 5.2, we show we can further reduce the number of parameter updates by increasing the effective learning rate and scaling the batch size. In section 5.3 we apply our insights to train Inception-ResNet-V2 on ImageNet, using vast batches of up to 65536 images. Finally in section 5.4, we train ResNet-50 to 76.1% ImageNet validation accuracy within 30 minutes. ', 'original_lines': 'In section 5.1, we demonstrate the equivalence between increasing the batch size and decreasing the learning rate on CIFAR-10 with a “16-4” wide ResNet and a range of optimizers. In section 5.2, we demonstrate that we can further reduce the number of parameter updates by increasing the effective learning rate and scaling the batch size. Finally in section 5.3, we apply our approach to train Inception-ResNet-V2 on ImageNet in 2500 parameter updates, reaching 77% validation accuracy. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.2', 'after_section': None, 'context_after': '6 (a) (b) 5.3 TRAINING IMAGENET IN 2500 PARAMETER UPDATES ', 'paragraph_idx': 26, 'before_section': '5.2', 'context_before': 'a final test accuracy of 94.3% (the original paper reports 95% accuracy, which we have not been able to replicate). “Increasing batch size” requires ∼29000 updates, reaching a final accuracy of 94.4%. “Increased initial learning rate” requires under 6500 updates, reaching a final accuracy of 94.5%. ', 'modified_lines': 'Finally, “Increased momentum coefficient” requires less than 2500 parameter updates, but reaches a lower test accuracy of 93.3%. Across five additional training runs for each schedule, the median accuracies were 94.3%, 94.2%, 94.2% and 93.5% respectively. We discussed a potential explanation for the performance drop when training with large momentum coefficients in section 4. 
We provide additional results in appendix B, varying the initial learning rate between 0.1 and 3.2 while holding the batch size constant. We find that the test accuracy falls for initial learning rates larger than ∼0.4. Published as a conference paper at ICLR 2018 Figure 6: Inception-ResNet-V2 on ImageNet. Increasing the batch size during training achieves similar results to decaying the learning rate, but it reduces the number of parameter updates from just over 14000 to below 6000. We run each experiment twice to illustrate the variance. ', 'original_lines': 'The difference in accuracy between these runs likely arises from the random initialization. Finally, “Increased momentum coefficient” requires less than 2500 parameter updates, but reaches a lower test accuracy of 93.3%. We discussed a potential explanation for this performance drop in section 4. It is clear from figure 5 that this final schedule had not converged within the 90 epochs, so it is likely that this performance gap could be reduced, however our goal in this paper is to demonstrate the gains that can be achieved without tuning the hyper-parameters or the training schedule. Under review as a conference paper at ICLR 2018 Figure 6: Inception-ResNet-V2 on ImageNet. Increasing batch size achieves similar results to de- caying the learning rate, but it reduces the number of parameter updates from just over 14000 to below 6000. We run each experiment twice to illustrate the variance. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 26}, {'section': 'Abstract', 'after_section': '1 INTRODUCTION', 'context_after': 'Goyal et al. (2017) already increased the learning rate close to its maximum stable value. To further reduce the number of parameter updates we must increase the momentum coefficient. We introduce ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'a slight drop, the difference in final test accuracies is similar to the variance between training runs. Increasing the batch size reduces the number of parameter updates during training from just over 14000 to below 6000. Note that the training curves appear unusually noisy because we reduced the ', 'modified_lines': 'number of test set evaluations to reduce the model training time. ', 'original_lines': 'number of test set evaluations to reduce the model training time. We chose not to include wall-clock times in the text, since these are not comparable across different hardware/software frameworks. Assuming near-perfect parallelism, parameter updates provide a fair measure of the training time. ', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 CONCLUSIONS REFERENCES Lukas Balles, Javier Romero, and Philipp Hennig. Coupling adaptive batch sizes with learning rates. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': '(cid:15). You et al. (2017a) proposed Layer-wise Adaptive Rate Scaling (LARS), which applies different ', 'modified_lines': 'learning rates to different parameters in the network, and used it to train ImageNet in 14 minutes (You et al., 2017b), albeit to a lower final accuracy of 74.9%. K-FAC (Martens & Grosse, 2015) is also gaining popularity as an efficient alternative to SGD. Wilson et al. (2017) argued that adaptive optimization methods tend to generalize less well than SGD and SGD with momentum (although they did not include K-FAC in their study), while our work reduces the gap in convergence speed. 
Asynchronous-SGD is another popular strategy, which enables the use of multiple GPUs even when batch sizes are small (Recht et al., 2011; Dean et al., 2012). We do not consider asynchronous-SGD in this work, since the scaling rules enabled us to use batch sizes on the order of the training set size. 8 Published as a conference paper at ICLR 2018 We can often achieve the benefits of decaying the learning rate by instead increasing the batch size during training. We support this claim with experiments on CIFAR-10 and ImageNet, and with a range of optimizers including SGD, Momentum and Adam. Our findings enable the efficient use of vast batch sizes, significantly reducing the number of parameter updates required to train a model. This has the potential to dramatically reduce model training times. We further increase the batch size B by increasing the learning rate (cid:15) and momentum parameter m, while scaling B ∝ (cid:15)/(1 − m). Combining these strategies, we train Inception-ResNet-V2 on ImageNet to 77% validation accuracy in under 2500 parameter updates, using batches of 65536 images. We also exploit increasing batch sizes to train ResNet-50 to 76.1% ImageNet validation set accuracy on TPU in under 30 minutes. Most strikingly, we achieve this without any hyper-parameter tuning, since our scaling rules enable us to directly convert existing hyper-parameter choices from the literature for large batch training. ACKNOWLEDGMENTS We thank Prajit Ramachandran, Gabriel Bender, Matthew Johnson and Martin Abadi for helpful discussions. We also thank Vijay Vasudevan, Brennan Saeta, Jonathan Hseu, Bjarke Roune and the rest of the TPU team for technical support. Takuya Akiba, Shuji Suzuki, and Keisuke Fukuda. Extremely large minibatch sgd: Training resnet- 50 on imagenet in 15 minutes. arXiv preprint arXiv:1711.04325, 2017. ', 'original_lines': 'learning rates to different parameter in the network, and used it to train ImageNet in 24 minutes (You et al., 2017b), albeit to a lower final accuracy of 58.5%. It remains to be seen whether LARS can be combined with the techniques proposed here. K-FAC (Martens & Grosse, 2015) is also gaining popularity as an efficient alternative to SGD. Wilson et al. (2017) argued that adaptive optimization methods tend to generalize less well than SGD and SGD with momentum (although they did not include K-FAC in their study), while our work reduces the gap in convergence speed. Asynchronous- SGD is another popular strategy, which enables the use of multiple GPUs even when batch sizes are small (Recht et al., 2011; Dean et al., 2012). However since the scaling rules enable the use of batch sizes on the order of the training set size, the incentive to use asynchronous-SGD is much reduced. In deep learning, we can often achieve the benefits of decaying the learning rate by instead increasing the batch size. We support this claim with experiments on CIFAR-10 and ImageNet, and with a number of optimizers including SGD, SGD with momentum and Adam. Our findings enable the efficient use of vast batch sizes, significantly reducing the number of parameter updates required to train a model. This has the potential to dramatically reduce model training times. We further increase the batch size B by increasing the learning rate (cid:15) and momentum parameter m, while scaling B ∝ (cid:15)/(1 − m). Combining these strategies, we train Inception-ResNet-V2 on ImageNet to 77% validation accuracy in under 2500 parameter updates, using batches of 65536 images. 
Most strikingly, we achieve this without any hyper-parameter tuning, since our scaling rules enable us to directly convert existing hyper-parameter choices from the literature for large batch training. 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Samuel L. Smith and Quoc V. Le. A bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'cal statistics, pp. 400–407, 1951. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-23 22:28:44
|
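The record above replaces learning-rate decay with batch-size growth, keeping the SGD noise scale g = ε(N/B − 1) roughly constant, and further scales B ∝ ε and B ∝ 1/(1 − m). A sketch of the conversion; the cap-and-fallback behaviour and all names are assumptions.

```python
def noise_scale(lr: float, n_train: int, batch: int) -> float:
    """SGD noise scale g = lr * (N / B - 1) from the record."""
    return lr * (n_train / batch - 1.0)

def decay_to_batch_growth(base_lr, base_batch, decay_epochs, factor,
                          max_batch=None):
    """Turn each 'divide lr by `factor`' step into a 'multiply B by
    `factor`' step (valid while B << N). Falls back to decaying the lr
    once `max_batch` is reached. Returns per-phase (epoch, lr, batch)."""
    lr, batch = base_lr, base_batch
    schedule = [(0, lr, batch)]
    for epoch in decay_epochs:
        grown = int(batch * factor)
        if max_batch is None or grown <= max_batch:
            batch = grown            # increase B, hold lr fixed
        else:
            lr /= factor             # no memory headroom left: decay lr instead
        schedule.append((epoch, lr, batch))
    return schedule

# e.g. a CIFAR-10-style schedule: decay_to_batch_growth(0.1, 128, [60, 120, 160], 5)
```

Raising the momentum coefficient from m to m′ would additionally multiply the batch size by (1 − m)/(1 − m′) under the record's B ∝ 1/(1 − m) rule, at a slight cost in test accuracy.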
ICLR.cc/2018/Conference
|
rkYwwb-CW
|
BJztwW-Rb
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'µ = ∼ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'based on a sample from µ, Dn is an appropriate loss function, such as the squared loss, L(y, y′) := (y ', 'modified_lines': '', 'original_lines': 'pµ; i = 1, ..., n ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.1 RE-WEIGHTED RISK MINIMIZATION', 'after_section': None, 'context_after': '(1) . Here, ℓf (x, t, y) := L(f (x, t), y) ', 'paragraph_idx': 18, 'before_section': '2.1 RE-WEIGHTED RISK MINIMIZATION', 'context_before': 'Rπ(f ) := Ex,t,y∼pπ [ℓf (x, t, y)] (xi, ti, yi) ', 'modified_lines': ' pµ; i = 1, ..., n ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 18}]
|
2017-10-27 19:22:33
|
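The fragment above moves the sample definition (xi, ti, yi) ∼ pµ relative to the re-weighted risk Rπ(f) = E_{x,t,y∼pπ}[ℓf(x, t, y)] with ℓf(x, t, y) := L(f(x, t), y). A minimal NumPy sketch of the importance-weighted empirical version under the squared loss; the self-normalization of the weights is an assumption, not something the record specifies.

```python
import numpy as np

def reweighted_risk(f, x, t, y, w):
    """Estimate R_pi(f) from samples (x_i, t_i, y_i) drawn from the source
    design p_mu, using weights w_i that approximate p_pi / p_mu.
    `f` is assumed vectorized over samples."""
    w = np.asarray(w, dtype=float)
    w = w / w.mean()                 # self-normalize so the weights average to 1
    losses = (f(x, t) - y) ** 2      # squared loss L(f(x, t), y)
    return float(np.mean(w * losses))
```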
ICLR.cc/2018/Conference
|
BJztwW-Rb
|
Hk8iDWWC-
|
[]
|
2017-10-27 19:23:09
|
ICLR.cc/2018/Conference
|
Hk8iDWWC-
|
SyPNTzZ0-
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a family of algorithms to address these issues, by jointly learning a representation and a re-weighting of ', 'modified_lines': 'observed data in the induced representation. We show that our algorithms mini- mize an upper bound on the generalization error under design shift, and verify the effectiveness of this approach in causal effect estimation. ', 'original_lines': 'observed data. We show that our algorithms minimize an upper bound on the gen- eralization error under design shift, and verify the effectiveness of this approach in causal effect estimation. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 Under review as a conference paper at ICLR 2018 2 PREDICTING OUTCOMES UNDER DESIGN SHIFT | ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'plications, such as the treatment of patients in hospitals, experimentation is infeasible or impractical, and we are forced to learn from biased, observational data. Doing so requires adjusting for the dis- tributional shift between groups of patients that received different treatments. A related kind of ', 'modified_lines': 'distributional shift arises in unsupervised domain adaptation, the goal of which is to learn predictive models for a target domain, observing ground truth only in a source domain. In this work, we pose both domain adaptation and treatment effect estimation as special cases of prediction across shifting designs, referring to changes in both action policy and feature domain. We separate policy from domain as we wish to make causal statements about the policy, but not about the domain. Learning from observational data to predict the counterfactual outcome under treatment B for a patient who received A, one must adjust for the fact that treatment A was systematically given to patients of different characteristics from those who received treatment B. We call this predicting under a shift in policy. Furthermore, if all of our observational data comes from hospital P , but we wish to predict counterfactuals for patients in hospital Q, with a population that differs from P , an additional source of distributional shift is at play. We call this a shift in domain. Together, we refer to the combination of domain and policy as the design. The design for which we observe ground truth is called the source, and the design of interest the target. The two most common approaches for addressing distributional shift are to learn shift-invariant rep- resentations of the data (Ajakan et al., 2014) or to perform sample re-weighting or matching (Shi- modaira, 2000; Kallus, 2016). Representation learning approaches attempt to extract only informa- tion from the input that is invariant to a change in design and predictive of the variable of interest. Such representations are typically learned by fitting deep neural networks in which activations of deeper layers are regularized to be distributionally similar across designs (Ajakan et al., 2014; Long et al., 2015). Although representation learning can be shown to reduce the error associated to distri- butional shift (Long et al., 2015), standard approaches are biased, even in the limit of infinite data. 
Re-weighting methods correct for distributional shift by assigning higher weight to samples from the source design that are representative of the target design, often using importance sampling. This idea has been well studied in, for example, the causal inference (Rosenbaum & Rubin, 1983), do- main adaptation (Shimodaira, 2000) and reinforcement learning (Precup et al., 2001) literature. For example, in causal effect estimation, importance sampling is equivalent to re-weighting units by the probability of observed treatments. Re-weighting, with knowledge of importance sampling weights, often leads to asymptotically unbiased estimators of the target outcome, but may suffer from high variance in finite samples (Swaminathan & Joachims, 2015). A significant hurdle is that optimal weights are rarely known in practice. There are a variety of methods to learn these weights. Weights can be estimated as the inverse of estimated propensities (Rosenbaum & Rubin, 1983; Freedman & Berk, 2008) but this plug-in approach can lead to highly unstable estimates. More stable methods learn weights by minimizing distributional distance met- rics (Gretton et al., 2009; Kallus, 2016; 2017; Zubizarreta, 2015). Closely related, matching (Stuart, 2010) produces weights by finding units in the source design that are close in some metric to units in the target design. Specifying a distributional or unit-wise metric is challenging, especially if the input space is high-dimensional where no metric incorporating all features can ever be made small. This has inspired heuristics such as first performing variable selection and then finding a matching in the selected covariates. Our key algorithmic contribution is to show how to combine the intuition behind shift-invariant representation learning and re-weighting methods by jointly learning a representation Φ of the input space and a weighting function w(Φ) to minimize a) the re-weighted empirical risk and b) a re- weighted measure of distributional shift between designs. By letting w depend on Φ we alleviate the problem of choosing a metric by which to optimize unit weights, as Φ extracts information predictive of the outcome. At the same time, our theory still guarantees a uniform bound on true risk in the target design. This leads to a general algorithmic framework, and a natural bound on the generalization error under design shift. Main contributions We bring together two techniques used to overcome distributional shift be- tween designs—re-weighting and representation learning, with complementary robustness proper- ties, generalizing existing methods based on either technique. We give finite-sample generalization bounds for prediction under design shift, assuming that we have access to neither importance sam- pling weights nor a well-specified model, and develop an algorithmic framework to minimize these bounds. We propose a neural network architecture that jointly learns a representation of the input and a weighting function to improve balance across changing settings. We apply our proposed al- gorithm to the task of predicting causal effects from observational data, achieving state-of-the art results on a widely used benchmark. ', 'original_lines': 'distributional shift arises in domain adaptation, the goal of which is to learn predictive models for a target domain, observing ground truth only in a source domain. 
In this work, we pose both domain adaptation and treatment effect estimation as special cases of prediction across shifting designs, re- ferring to changes in both action policy and feature domain. We separate policy from domain as we wish to make causal statements about the policy, but not about the domain. Prediction of outcomes under interventions has a long history in clinical applications, most often to verify the efficacy of a medical treatment A by asking “how would the patient have responded under an alternative treatment, B”? To answer this question, the gold-standard solution contin- ues to be the randomized controlled trial, in which treatment is administered randomly to patients selected according to some critera. However, most clinical trials are underpowered for estimat- ing fine-grained individual-level effects. Moreover, many interventions cannot be performed in a randomized fashion due to ethical or practical considerations. Fortunately, vast amounts of observa- tional (non-randomized) clinical data is available in the form of electronic health records, lab tests, and insurance data. Learning from observational data to predict the counterfactual outcome under treatment B for a patient who received A, one must adjust for the fact that treatment A was sys- tematically given to patients of different characteristics from those who received treatment B. We call this predicting under a shift in policy. Furthermore, if all of our observational data comes from hospital P , but we wish to predict counterfactuals for patients in hospital Q, with a population that differs from P , an additional source of distributional shift is at play. We call this a shift in domain. Together, we refer to the combination of domain and policy as the design. The design for which we observe ground truth is called the source, and the design of interest the target. Broadly speaking, existing methods for generalization under distributional shift rely on one or more of the following concepts: a) learning shift-invariant representations of data (Ajakan et al., 2014), b) sample re-weighting (Shimodaira, 2000), or c) unit matching and assignment (Kallus, 2016). Representation learning approaches attempt to extract only information from the input that is invari- ant to a change in design and predictive of the variable of interest. Such representations are typically learned by fitting deep neural networks in which activations of deeper layers are regularized to be distributionally similar across designs (Ajakan et al., 2014; Long et al., 2015). With some excep- tions, see e.g. Long et al. (2015), the theoretical properties of these models are not well understood. Re-weighting methods correct for distributional shift by assigning higher weight, in the objective of interest, to samples from the source design that are representative of the target design, often using importance sampling. This idea has been well studied in, for example, the causal inference (Rosen- baum & Rubin, 1983), domain adaptation (Shimodaira, 2000) and reinforcement learning (Precup et al., 2001) literature. For example, in causal effect estimation, importance sampling is equivalent to re-weighting units by the probability of observed treatments. Re-weighting, with knowledge of im- portance sampling weights, often leads to asymptotically unbiased estimators of the target outcome, but may suffer from high variance in finite samples (Swaminathan & Joachims, 2015). 
A considerably larger hurdle, however, is that the optimal weights are rarely known in practice. There are variety of methods to learn these weights. Weights can be estimated as the inverse of estimated propensities (Rosenbaum & Rubin, 1983; Freedman & Berk, 2008) but this plug-in ap- proach can lead to highly unstable estimates. More stable methods attempt to learn weights by minimizing distributional distance metrics (Gretton et al., 2009; Kallus, 2016; Kallus; Zubizarreta, 2015). Closely related, matching (Stuart, 2010) produces weights by finding units in the source design that are close in some metric to units in the target design. Specifying either distributional or unit-wise metric is a challenging task, especially if the input space is high-dimensional where no metric incorporating all features can ever be made small. This necessitates heuristics such as first performing variable selection and then finding a matching in the selected covariates. Unfortunately, the theoretical properties of such procedures are largely unknown. We address the problem of generalization under design shift—prediction of outcomes for previ- ously unseen units, in a design observed without supervision, based on observational, labeled sam- ples from a different design. We show that this problem encompasses both standard counterfactual prediction and unsupervised domain adaptation, as well as the more general case of predicting out- comes under domain shift. To overcome limitations of existing methods, we propose to combine the intuition behind shift-invariant representation learning and re-weighting methods by jointly learning a representation Φ of the input space, and a weighting function w(Φ) to minimize a) the re-weighted empirical risk and b) a re-weighted measure of distributional shift between designs. By letting w depend on Φ we alleviate the problem of choosing a metric by which to optimize unit weights, as Φ extracts information predictive of the outcome. At the same time, our theory guarantees that this will remain a uniform bound on true risk in the target design. This leads to a general algorithmic frame- work, and a natural bound on the generalization error under design shift. We make the following contributions. Main contributions We bring together two techniques used to overcome distributional shift between designs— re-weighting and representation learning, with complementary robustness properties, gen- eralizing existing methods based on either technique. We give finite-sample generalization bounds for prediction under design shift, assuming that we have access to neither importance sampling weights nor a well-specified model, and develop an algorithmic framework to minimize these bounds. We propose a neural network architecture that jointly learns a representation of the input and a weighting function to improve balance across changing settings. We apply our proposed algorithm to the task of predicting causal effects from observational data, achieving state-of-the art results on a widely used benchmark. • • • • 2 Under review as a conference paper at ICLR 2018 | ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '2 PREDICTING OUTCOMES UNDER DESIGN SHIFT', 'after_section': '2 PREDICTING OUTCOMES UNDER DESIGN SHIFT', 'context_after': 'X), corresponding only to the factual outcome | Y (T ) of the treatment administered. 
Like the target design, the source design consists of a domain of contexts for which we have data and a policy, which describes the (unknown) historical admin- istration of treatment in the data. Only the factual outcomes of the treatments administered are is the observational or off-policy setting, in which interventions in the source design are performed non-randomly as a function of X, pµ(T = pµ(T ). This encapsulates both the covariate shift ', 'paragraph_idx': 9, 'before_section': '2 PREDICTING OUTCOMES UNDER DESIGN SHIFT', 'context_before': 'observations of contexts (such as patient prognostics) to interventions (such as pharmacological treatments) and the target domain pπ(X), which describes the population of contexts to which the policy will be applied. The target design is known to us only through m unlabeled sam- ', 'modified_lines': 'ples (x(cid:48) m) from pπ(X, T ). Outcomes are only available to us in labeled samples from a source domain: (x1, t1, y1), . . . , (xn, tn, yn), where (xi, ti) are draws from a source design pµ(X, T ) and yi = yi(ti) is a draw from pT (Y = ti are, naturally, unobserved. Our focus observed, while the counterfactual outcomes yi(t) for t 1), . . . , (x(cid:48) m, t(cid:48) 1, t(cid:48) | 2 (cid:54) Under review as a conference paper at ICLR 2018 ', 'original_lines': 'ples (x1, t1), . . . , (xm, tm) from pπ(X, T ). Outcomes are only available to us in labeled samples from a source domain: (x(cid:48) n, y(cid:48) i) are draws from a source design i(t(cid:48) pµ(X, T ) and y(cid:48) observed, while the counterfactual outcomes y(cid:48) i are, naturally, unobserved. Our focus ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '2 (cid:54)', 'after_section': '2 (cid:54)', 'context_after': 'X) | Without additional assumptions, it is impossible to deduce the effect of an intervention based on ob- servational data alone (Pearl, 2009), as it amounts disentangling correlation and causation. Crucially, prediction of outcomes and causal effects possible, we make the following standard assumptions. Assumption 1 (Consistency, ignorability and overlap). For any unit i, assigned to intervention ti, we observe Yi = Y (ti). Further, strong ignorability: Y (t) { } { ⊥⊥ ', 'paragraph_idx': 11, 'before_section': '2 (cid:54)', 'context_before': 'to a different population of customers. We stress that we are interested in the causal effect of an intervention T on Y , conditioned on X. As such, we cannot think of X and T as a single variable. ', 'modified_lines': 'for any unit i, we can observe the potential outcome yi(t) of at most one intervention t. To make t∈T and the data-generating process pµ(X, T, Y ) satisfy } X and overlap: Prpπ (pµ(T X) > 0) = 1. T t∈T Y (t) ', 'original_lines': '1, t(cid:48) n, t(cid:48) 1), . . . , (x(cid:48) i) is a draw from pT (Y n), where (x(cid:48) i(t) for t i = y(cid:48) 1, y(cid:48) = t(cid:48) i, t(cid:48) for any unit i, we can observe the potential outcome y(cid:48) i(t) of at most one intervention t. To make t∈T and the data-generating process pµ(X, T, Y ) satisfy T X, and overlap: Prpπ (pµ(T Y (t) t∈T } X) > 0) = 1. ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 10}, {'section': '2 (cid:54)', 'after_section': '2 (cid:54)', 'context_after': 'X = x, T = t], and we may predict Y (t) by regression. 
We further assume common domain support, ', 'paragraph_idx': 11, 'before_section': '2 (cid:54)', 'context_before': 'norability is also known as the no hidden confounders assumption, indicating that all variables that cause both T and Y are assumed to be measured. Under ignorability therefore, any domain shift in p(X) cannot be due to variables that causally influence T and Y , other than through X. Under ', 'modified_lines': 'Assumption 1, potential outcomes equal conditional expectations: E[Y (t) ', 'original_lines': 'Assumption 1, potential outcomes equals conditional expectations: E[Y (t) ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '2 PREDICTING OUTCOMES UNDER DESIGN SHIFT', 'after_section': None, 'context_after': '0, 1 } ∈ { | Predicting τ for unobserved units involves prediction of both potential outcomes. In a clinical set- ting, this is necessary to assess which medication should be administered to a certain individual. ', 'paragraph_idx': 9, 'before_section': None, 'context_before': ', often interpreted as treating (T = 1) or not treating (T = 0) a unit, and the domain is fixed across designs, pµ(x) = pπ(x). This is the classical setting for estimating treatment effects—the effect of choosing one ', 'modified_lines': 'intervention over another (Morgan & Winship, 2014).1 The effect of an intervention T in context X, is measured by the conditional average treatment effect (CATE), τ (x) = E [Y (1) X = x]. Y (0) − ', 'original_lines': 'intervention over another (Morgan & Winship, 2014).1 The effect of an intervention T in context X, is measured by the conditional average treatment effect (CATE), τ (x) = E [Y (1) Y (0) − X = x] . ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 (cid:54)', 'after_section': '3 (cid:54)', 'context_after': '(cid:2)(ˆτ (x) τ (x))2(cid:3) (4) In Section 4, we argue that estimating CATE from observational data requires overcoming distribu- tional shift with respect to the treat-all and treat-none policies, in predicting each respective potential ', 'paragraph_idx': 15, 'before_section': '3 (cid:54)', 'context_before': '− ', 'modified_lines': 'MSE(ˆτ ) = Ep − ', 'original_lines': 'MSEπ(ˆτ ) = Ep − ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 15}, {'section': '3 RELATED WORK', 'after_section': None, 'context_after': '1Notions of causal effects exist also for the non-binary case, but these are not considered here. ', 'paragraph_idx': 16, 'before_section': None, 'context_before': '3 RELATED WORK ', 'modified_lines': 'A large body of work has shown that under assumptions of ignorability and having a well- specified model, various regression methods for counterfactual estimation are asymptotically con- sistent (Chernozhukov et al., 2017; Athey & Imbens, 2016; Belloni et al., 2014; Van Der Laan & Rubin, 2006). However, consistency results like these provide no insight in the case of model mis- specification. Under model misspecification, ordinary regression may suffer from (unnecessarily) large bias when generalizing across designs. A common way to alleviate this is importance sam- pling, see Section 2. This idea is used in propensity-score methods (Austin, 2011), that use treatment assignment probabilities (propensities) to re-weight samples for causal effect estimation, and more generally in re-weighted regression, see e.g. (Swaminathan & Joachims, 2015). A major drawback of these methods is the assumption that the design density is known. 
To address this, others (Gretton et al., 2009; Kallus, 2016), have proposed learning weights to minimize the distributional distance between samples under pπ and pµ, but rely on specifying the data representation a priori. Johansson et al. (2016); Shalit et al. (2017) proposed learning representations for counterfactual inference, inspired by work in unsupervised domain adaptation (Mansour et al., 2009). However, the generalization bounds of Johansson et al. (2016) do not apply to general hypotheses and the bounds of Shalit et al. (2017) and Long et al. (2015) are loose—even if infinite samples are available, they are not guaranteed to converge to the lowest possible error. Moreover, these approaches do not make use of important information that is can be estimated from data: the treatment assignment probabilities. 4 GENERALIZATION UNDER DESIGN SHIFT We give a bound on the risk in predicting outcomes under a target design pπ(T, X) based on un- labeled samples from pπ and labeled samples from a source design pµ(T, X). Our result com- bines distribution matching and re-weighting, resulting in a tighter bound than the closest related work (Shalit et al., 2017). The predictors we consider are compositions f (x, t) = h(Φ(x), t) where ', 'original_lines': 'The central problem in learning predictors that generalize across designs, is to adjust for the distri- butional shift arising from changes in policy and domain. These types of shift have typically been ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'Our bounds build on the intuition that if either a) π is close to a re-weighting of µ, or b) the true outcome is a simple function of x and t, the gap between the target error and the re-weighted source ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'Φ is a representation of x and h an hypothesis. We first state a result for the general design shift setting, then show how this result can be used to bound the error in prediction of treatment effects. In Section 5 we give a result about the asymptotic properties of the minimizers of the upper bound. H ', 'original_lines': 'treated separately, the former in counterfactual estimation (Morgan & Winship, 2014) and reinforce- ment learning (Precup et al., 2001), and the latter in domain adaptation. Our proposed framework fits into the family of regression or covariate adjustment methods (Pearl, 2009; Rubin, 2005) for counterfactual estimation. These methods focus on estimating expected potential outcomes, under the assumption that relevant variables2 have been measured. A large body of work has shown that under the additional assumption of having a well-specified model, various regression methods are asymptotically consistent (Chernozhukov et al., 2017; Athey & Imbens, 2016; Belloni et al., 2014; Van Der Laan & Rubin, 2006). However, consistency results like these provide no insight in the case of model misspecification. Under model misspecification, ordinary regression may suffer from (unnecessarily) large bias when generalizing across designs. A common way to alleviate this is the importance sampling princi- ple, see Section 2. 
This idea is used in so-called propensity-score methods (Austin, 2011), that uses treatment assignment probabilities (propensities) to re-weight samples for causal effect estima- tion, and more generally in re-weighted regression, performed to reduce bias under a misspecified model (Swaminathan & Joachims, 2015). A major drawback of these methods is the assumption that the design density is known. To address this, others (Gretton et al., 2009; Kallus, 2016), have proposed learning weights to minimize the distributional distance between samples under pπ and pµ, but rely on specifying the data representation a priori. In high-dimensional settings, kernel methods such as the re-weighting method of Gretton et al. (2009) are often outperformed by using representations learned from data (Bengio et al., 2013). Johansson et al. (2016); Shalit et al. (2017) proposed learning representations for counterfactual inference, inspired by work in unsupervised domain adaptation (Mansour et al., 2009). However, the generalization bounds of Johansson et al. (2016) do not apply to general hypotheses and the bounds of Shalit et al. (2017) and Long et al. (2015) are loose—even if infinite samples are available. In this work, we combine ideas from representation learning and sample re-weighting, generalizing previous work, to give finite-sample learning bounds that are tight in the asymptotic limit under a well-specified model, and best-in-class otherwise. These bounds give rise to algorithms with good empirical performance and added robustness over previous work. 4 GENERALIZATION UNDER DESIGN SHIFT We proceed to derive a bound on the risk in predicting outcomes under a target design pπ(T, X), based on unlabeled samples from pπ and labeled samples from a source design pµ(T, X). Our result combines distribution matching and re-weighting, resulting in a tighter bound than the closest related work (Shalit et al., 2017). The predictors we consider take the form of a composition f (x, t) = h(Φ(x), t) where Φ is a representation of x and h an hypothesis. We first state a result for the general design shift setting. Then, we show how this result can be used to bound the error in prediction of treatment effects. Finally, in Section 5 we give a result about the asymptotic properties of the minimizers of the upper bound. ', 'after_paragraph_idx': 20, 'before_paragraph_idx': None}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'H − Important examples of IPMs include the Wasserstein distance, for which with Lipschitz constant at most 1, and the Maximum Mean Discrepancy for which in the norm-1 ball in a reproducing kernel Hilbert space. We can now state the following result. is the family of functions are functions H H (cid:107) H Rπ(f ) ', 'paragraph_idx': 21, 'before_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_before': 'Definition 2. The integral probability metric (IPM) distance, associated with a normed vector space of functions ', 'modified_lines': ', between distributions p and q is, IPMH(p, q) := suph∈H | Ep[h] Eq[h] . | Lemma 1. For hypotheses f with loss (cid:96)f such that (cid:96)f / (cid:96)f (cid:107) support, there exists a valid re-weighting w of pµ, see Definition 1, such that, , and pµ, pπ with common ∈ H ', 'original_lines': ', between distributions p and q is, H IPMH(p, q) := sup h∈H | Ep[h] Eq[h] | 2Specifically all confounding variables, causal of both treatment and outcome. 
5 Under review as a conference paper at ICLR 2018 Lemma 1. For hypotheses f with loss (cid:96)f such that (cid:96)f / (cid:96)f (cid:107) support, there exists a valid re-weighting w of pµ, see Definition 1, such that, ∈ H , and pµ, pπ with common ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 21}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'denote a set of hypotheses h(Φ, t) operating on the previous notation. Let , Φ . ∈ C} We can now relate the expected target risk Rπ(f ) to the re-weighted empirical source risk ˆRw ', 'paragraph_idx': 23, 'before_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_before': 'π,Φ(z, t) := For a design π, we let pπ,Φ(z, t) be the distribution induced by Φ over Z × T ', 'modified_lines': 'pπ,Φ(z, t)w(Ψ(z), t) its re-weighted form and ˆpw π,Φ its re-weighted empirical form, following our representation Φ and let ', 'original_lines': 'pπ,Φ(z, t)w(Ψ(z), t) its reweighted form and ˆpw π,Φ its reweighted empirical form, following our representation Φ and let ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'exists a constant BΦ > 0 such that (cid:96)h,Φ/BΦ H ducing kernel Hilbert space of a kernel, k such that k((z, t), (z, t)) < . Finally, let w be a valid re-weighting of pµ,Φ. Then with probability at least 1 ', 'paragraph_idx': 23, 'before_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_before': 'y(cid:48))2, and assume that there L(h(z, t), mt(Ψ(z))) where L is the squared loss, L(y, y(cid:48)) = (y − ', 'modified_lines': 'is a repro- Z × T → Y} ', 'original_lines': 'is a repro- Z × T → Y} ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}, {'section': 'Abstract', 'after_section': None, 'context_after': 'F n,δ measures the capacity of ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(6) ', 'modified_lines': '', 'original_lines': '− ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': None, 'context_after': 'A similar bound exists where sample complexity. H is the family of functions Lipschitz constant at most 1, but with worse See Appendix A.2 for a proof that involves applying finite-sample generalization bounds to the first inequality in Lemma 1, as well as moving to the space induced by the representation Φ. Using uniform weights w(x, t) = 1 in (6), results in a bound similar to that of Shalit et al. (2017) and Long et al. (2015). For π infinite samples. Instead, we consider minimizing (6) with respect to w, improving the tightness of the bound. Recall from Lemma 1 that there exist weights w for which Rπ(f ) = Rw ', 'paragraph_idx': 27, 'before_section': None, 'context_before': 'D ', 'modified_lines': 'Vµ(w, (cid:96)f ) = max (cid:16)(cid:113) Epµ[w2(x, t)(cid:96)2 f (x, t)], (cid:113) E ˆpµ[w2(x, t)(cid:96)2 (cid:17) f (x, t)] . 5 Under review as a conference paper at ICLR 2018 = µ, minimizing the resulting bound results biased hypotheses, even in the asymptotical limit, as the IPM term does not vanish when the sample size increases. This is a rather undesirable property, as even k-nearest-neighbor classifiers are consistent in the limit of ', 'original_lines': 'H Vµ(w, (cid:96)f ) = max( (cid:113) Epµ[w2(x, t)(cid:96)2 f (x, t)], (cid:113) E ˆpµ[w2(x, t)(cid:96)2 f (x, t)]) . 
= µ, minimizing the resulting bound results biased hypotheses, even in the asymptotical limit, as the IPM term does not vanish when the sample size increases. This is a rather undesirable property, as even a k-nearest-neighbor classifiers are consistent in the limit of ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'substituted a hyperparameter α for BΦ, but discussed the difficulties of selecting its value without access to counterfactual labels. In our experiments, we explore a heuristic for adaptively choosing α, based on measures of complexity of the observed held-out loss as a function of the input. . In this case, pµ(T 0 } { | ', 'paragraph_idx': 27, 'before_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_before': 'H as well as the determinant of the Jacobian of Ψ, see the appendix. and is determined by (cid:107) ', 'modified_lines': 'Qualitatively, BΦ measures the joint complexity of Φ and (cid:96)(r, t). In practice, Shalit et al. (2017) (cid:96)f (cid:107) Theorem 1 is immediately applicable to the case of unsupervised domain adaptation in which there = is only a single potential outcome of interest, X) = pπ(T X). T ', 'original_lines': 'Qualitatively, BΦ mesures the joint complexity of Φ and (cid:96)(r, t). In practice, Shalit et al. (2017) (cid:96)f (cid:107) 6 (cid:54) Under review as a conference paper at ICLR 2018 We note that Theorem 1 is immediately applicable to the case of unsupervised domain adaptation X) = in which there is only a single potential outcome of interest, pπ(T T X), and our only source of shift is in the domain p(X). = ', 'after_paragraph_idx': 27, 'before_paragraph_idx': 27}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': '(h) is a regularizer of h, such as (cid:96)2-regularization. We can show the following result. ', 'paragraph_idx': 30, 'before_section': None, 'context_before': '(cid:125) ', 'modified_lines': '(8) ', 'original_lines': '(9) ', 'after_paragraph_idx': 30, 'before_paragraph_idx': None}, {'section': '6 (cid:54)', 'after_section': '6 (cid:54)', 'context_after': 'RCFR Oracle α RCFR Adapt. α ', 'paragraph_idx': 32, 'before_section': None, 'context_before': 'BART IPM-WNN CFRW ', 'modified_lines': 'RCFR Oracle α, w = 1 ', 'original_lines': 'RCFR Adapt. α, w = 1 ', 'after_paragraph_idx': 32, 'before_paragraph_idx': None}, {'section': '6 (cid:54)', 'after_section': '6 (cid:54)', 'context_after': 'a bound on the target risk. Dashed lines are not back-propagated through. Regularization penal- ties not shown. the following alternating minimization problem. hk, Φk = arg min ', 'paragraph_idx': 35, 'before_section': '6 (cid:54)', 'context_before': 'Figure 1: Architecture for predicting outcomes under design shift. A re-weighting function w ', 'modified_lines': 'is fit jointly with a representation Φ and hypoth- esis h of the potential outcomes, to minimize Consequently, under the assumptions of Thm. 1, for sufficiently large α and λw, Rπ( ˆfn) min f ∈F ≤ Rπ(f ) + Op(1/n3/8 + 1/√m). In words, the minimizers of (8) converge to the representation and hypothesis that minimize the counterfactual risk, in the limit of infinite samples. 
L π(h, Φ, w; β) directly over h, Φ and w is justified by Theorem 2, we note that While minimizing adjusting w to minimize the empirical risk term serves little purpose, as it may result in overem- phasizing “easy” training examples, especially if α is small. Instead, as a heuristic, we split the objective in two, see (8), and use only the IPM term and regularizer to learn w. In short, we solve ', 'original_lines': 'is fit jointly with a representation Φ and hy- pothis h of the potential outcomes, to minimize objective in two, see (9), and use only the IPM term and regularizer to learn w. In short, we solve ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 35}, {'section': '6 EXPERIMENTS', 'after_section': '6 EXPERIMENTS', 'context_after': 'of propensities), Random Forests, Causal Forests (Wager & Athey, 2017), BART (Chipman et al., 2010), and CFRW (Shalit et al., 2017) (with Wasserstein penalty). Finally, we use as baseline (IPM-WNN): first weights are found by IPM minimization in the input space (Gretton et al., 2009; Kallus, 2016), then used in a re-weighted neural net regression, with the same architecture as our 𝑥Φℎ𝑤IPM(𝑝+,-,𝑝.,-/)𝑤ℓ𝑡ContextRepres.HypothesisWeightingImbalanceWeightedriskTreatmentDNN Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 41, 'before_section': '6 EXPERIMENTS', 'context_before': 'We evaluate our framework in the CATE estimation setting, see Section 2.2—our task is to predict the expected difference between potential outcomes conditioned on pre-treatment variables, for a held-out sample of the population. We compare our results to ordinary least squares (OLS) (with ', 'modified_lines': 'one regressor per outcome), OLS-IPW (re-weighted OLS according to a logistic regression estimate 7 ', 'original_lines': 'one regressor per outcome), OLS-IPW (reweighted OLS according to a logistic regression estimate method. We use IHDP as benchmark, a semi-synthetic binary-treatment dataset (Hill, 2011), split into training and test sets by Shalit et al. (2017). IHDP has synthesized continuous outcomes that can be used to compute the ground-truth CATE error. Our implementtation, dubbed RCFR for Reweighted Counterfactual Regression, parameterizes rep- resentations Φ(x), weighting functions w(Φ, t) and hypotheses h(Φ, t) using neural networks, trained by minimizing (9). We use the RBF-kernel maximum mean discrepancy as IPM (Gret- ton et al., 2012). For a description of the architecture, training procedure and hyperparameters, see Appendix B. We compare results using uniform w = 1 and learned weights, picking the balance parameter α using either an oracle or the adaptive heuristic described in Section B. To pick other hyperparameters, we split training sets into one part used for function fitting and one used for early 8 ', 'after_paragraph_idx': 41, 'before_paragraph_idx': 41}, {'section': '6 EXPERIMENTS', 'after_section': '6 EXPERIMENTS', 'context_after': 'achieves state-of-the-art results, and that adaptively choosing α does not hurt performance much. observations of Shalit et al. (2017). b) For large α non-uniform re-weighting (small λw). c) While large α makes the factual error more representative of the counterfactual error, using it without re-weighting results in higher absolute error. ', 'paragraph_idx': 43, 'before_section': '6 EXPERIMENTS', 'context_before': '(smaller λw) improves the error, c) for large α, weighting helps, but overall error increases. In the right-hand plot, we compare the ratio of CATE error to source error. 
Color represents α (see left) and size λw. We see that for large α, the source error is more representative of CATE error, but does ', 'modified_lines': 'not improve in absolute value without weighting. Here, α was fixed. Best viewed in color. method. We use IHDP as benchmark, a semi-synthetic binary-treatment dataset (Hill, 2011), split into training and test sets by Shalit et al. (2017). IHDP has synthesized continuous outcomes that can be used to compute the ground-truth CATE error. Our implementation, dubbed RCFR for re-weighted Counterfactual Regression, parameterizes rep- resentations Φ(x), weighting functions w(Φ, t) and hypotheses h(Φ, t) using neural networks, trained by minimizing (8). We use the RBF-kernel maximum mean discrepancy as the IPM (Gret- ton et al., 2012). For a description of the architecture, training procedure and hyperparameters, see Appendix B. We compare results using uniform w = 1 and learned weights, setting the balance parameter α either fixed, by an oracle (test-set error), or adaptively using the heuristic described in Section 5. To pick other hyperparameters, we split training sets into one part used for function fitting and one used for early stopping and hyperparameter selection. Hyperparameters for regularization are chosen based on the empirical risk on a held-out source (factual) sample. We present the results of our evaluation on IHDP in Table 1. We see that our proposed method Furthermore, we see a substantial improvement from using non-uniform sample weights. In Figure 2 we take a closer look at the behavior of our model as we vary its hyperparameters on the IHDP dataset. Between the two plots we can draw the following conclusions: a) For moderate to large α [10, 100], we observe a marginal gain from using the IPM penalty. This is consistent with the [10, 1000], we see a large gain from using a ', 'original_lines': 'not improve in absolute value without weighting. Here, α was fixed during training. stopping and hyperparameter selection. Hyperparameters for regularizaion and early stopping is done based on the empirical risk on the held-out sample. We present the results of our evalation on IHDP in Table 1. We see that our proposed method Furthermore, we see a substantial improvement from using non-uniform sample weights. In Figure 2 we take a closer look at the behavior of our model as we vary its hyperparameters on the IHDP dataset. Between the two plots we can draw the following conclusions: a) For moderate [10, 100], we observe a marginal gain from using the IPM penalty. This confirms the to large α [10, 1000], we see a large gain from using a ', 'after_paragraph_idx': 44, 'before_paragraph_idx': 43}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Sch¨olkopf, and Gert RG phi-divergences and binary classification. arXiv ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log- likelihood function. Journal of statistical planning and inference, 90(2):227–244, 2000. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'f = h(Φ(x), t) : h { . 
Define mt(x) = EY [Y G ⊆ { F (x(cid:48) { ', 'paragraph_idx': 23, 'before_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_before': '(cid:96)h,Φ(Ψ(z), t) := L(h(z, t), mt(Ψ(z))) where L is the squared loss, L(y, y(cid:48)) = (y , where assume that there exists a constant BΦ > 0 such that (cid:96)h,Φ/BΦ ', 'modified_lines': '1), ..., (x(cid:48) ', 'original_lines': '. Finally, let w . We now restate and prove Theorem 1. (x1, t1, y1), ..., (xn, tn, yn) } is a reproducing kernel Hilbert space of a kernel, k such that k((z, t), (z, t)) < ∈ G 1), ..., (x(cid:48) Z × T → Y} denote the space of ∈ C} m, t(cid:48) ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}, {'section': '2 PREDICTING OUTCOMES UNDER DESIGN SHIFT', 'after_section': None, 'context_after': '1, t(cid:48) , Φ ', 'paragraph_idx': 9, 'before_section': None, 'context_before': '| h : ', 'modified_lines': 'm, t(cid:48) ∈ C} ∈ G ', 'original_lines': 'h : ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '+ Φ,H ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'n,δ n3/8 ', 'modified_lines': '', 'original_lines': '13 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': None, 'context_after': 'ζi 2] (cid:107) ', 'paragraph_idx': 21, 'before_section': None, 'context_before': 'i=1 E[ ', 'modified_lines': '(cid:107) ', 'original_lines': '(cid:107) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2017-10-27 20:55:10
ICLR.cc/2018/Conference | SyPNTzZ0- | SJfm0G-R- | [] | 2017-10-27 20:59:05
ICLR.cc/2018/Conference | SJfm0G-R- | H1-Ezd6mG |
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT Predictive models that generalize well under distributional shift are often desir- ', 'modified_lines': 'able and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to pre- dict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distributional shift appears in unsupervised domain adaptation, where we are tasked with generaliz- ing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a bound on the general- ization error under design shift, based on integral probability metrics and sample re-weighting. We combine this idea with representation learning, generalizing and tightening existing results in this space. Finally, we propose an algorithmic frame- work inspired by our bound and verify is effectiveness in causal effect estimation. ', 'original_lines': 'able and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different from those who received the treatment in the data. A related kind of distribu- tional shift appears in unsupervised domain adaptation, where we are tasked with generalizing to a distribution of inputs that is different from the one in which we observe labels. We pose both of these problems as prediction under a shift in design. Popular methods for overcoming distributional shift are often heuristic or rely on assumptions that are rarely true in practice, such as having a well-specified model or knowing the policy that gave rise to the observed data. Other methods are hindered by their need for a pre-specified metric for comparing observations, or by poor asymptotic properties. In this work, we devise a family of algorithms to address these issues, by jointly learning a representation and a re-weighting of observed data in the induced representation. We show that our algorithms mini- mize an upper bound on the generalization error under design shift, and verify the effectiveness of this approach in causal effect estimation. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The two most common approaches for addressing distributional shift are to learn shift-invariant rep- resentations of the data (Ajakan et al., 2014) or to perform sample re-weighting or matching (Shi- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'models for a target domain, observing ground truth only in a source domain. In this work, we pose both domain adaptation and treatment effect estimation as special cases of ', 'modified_lines': 'prediction across shifting designs, referring to changes in both action policy and feature domain. 
We separate policy from domain as we wish to make causal statements about the policy, but not about the domain. Learning from observational data to predict the counterfactual outcome under treatment B for a patient who received treatment A, one must adjust for the fact that treatment A was systematically given to patients of different characteristics from those who received treatment B. We call this predicting under a shift in policy. Furthermore, if all of our observational data comes from hospital P , but we wish to predict counterfactuals for patients in hospital Q, with a population that differs from P , an additional source of distributional shift is at play. We call this a shift in domain. Together, we refer to the combination of domain and policy as the design. The design for which we observe ground truth is called the source, and the design of interest the target. ', 'original_lines': 'prediction across shifting designs, referring to changes in both action policy and feature domain. We separate policy from domain as we wish to make causal statements about the policy, but not about the domain. Learning from observational data to predict the counterfactual outcome under treatment B for a patient who received A, one must adjust for the fact that treatment A was systematically given to patients of different characteristics from those who received treatment B. We call this predicting under a shift in policy. Furthermore, if all of our observational data comes from hospital P , but we wish to predict counterfactuals for patients in hospital Q, with a population that differs from P , an additional source of distributional shift is at play. We call this a shift in domain. Together, we refer to the combination of domain and policy as the design. The design for which we observe ground truth is called the source, and the design of interest the target. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Our key algorithmic contribution is to show how to combine the intuition behind shift-invariant Main contributions We bring together two techniques used to overcome distributional shift be- tween designs—re-weighting and representation learning, with complementary robustness proper- ties, generalizing existing methods based on either technique. We give finite-sample generalization 2 PREDICTING OUTCOMES UNDER DESIGN SHIFT | ples (x(cid:48) m) from pπ(X, T ). Outcomes are only available to us in labeled samples from a source domain: (x1, t1, y1), . . . , (xn, tn, yn), where (xi, ti) are draws from a source design X), corresponding only to the factual outcome pµ(X, T ) and yi = yi(ti) is a draw from pT (Y | | X) Assumption 1 (Consistency, ignorability and overlap). For any unit i, assigned to intervention ti, we observe Yi = Y (ti). Further, t∈T and the data-generating process pµ(X, T, Y ) satisfy ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'et al., 2015). Although representation learning can be shown to reduce the error associated to dis- tributional shift (Long et al., 2015) in some cases, standard approaches are biased, even in the limit of infinite data, as they penalize the use also of predictive information. In contrast, re-weighting methods correct for distributional shift by assigning higher weight to samples from the source de- sign that are representative of the target design, often using importance sampling. 
This idea has been well studied in, for example, the causal inference (Rosenbaum & Rubin, 1983), domain adap- tation (Shimodaira, 2000) and reinforcement learning (Precup et al., 2001) literature. For example, in causal effect estimation, importance sampling is equivalent to re-weighting units by the inverse probability of observed treatments (treatment propensity). Re-weighting with knowledge of impor- tance sampling weights often leads to asymptotically unbiased estimators of the target outcome, but may suffer from high variance in finite samples (Swaminathan & Joachims, 2015). A significant hurdle in applying re-weighting methods is that optimal weights are rarely known in practice. There are a variety of methods to learn these weights. Weights can be estimated as the inverse of estimated feature or treatment densities (Rosenbaum & Rubin, 1983; Freedman & Berk, 2008) but this plug-in approach can lead to highly unstable estimates. More stable methods learn weights by minimizing distributional distance metrics (Gretton et al., 2009; Kallus, 2016; 2017; Zubizarreta, 2015). Closely related, matching (Stuart, 2010) produces weights by finding units in the source design that are close in some metric to units in the target design. Specifying a distributional or unit-wise metric is challenging, especially if the input space is high-dimensional where no metric incorporating all features can ever be made small. This has inspired heuristics such as first performing variable selection and then finding a matching in the selected covariates. representation learning and re-weighting methods by jointly learning a representation Φ of the in- put space and a weighting function w(Φ) to minimize a) the re-weighted empirical risk and b) a re-weighted measure of distributional shift between designs. This is useful also for the identity representation Φ(x) = x, as it allows for principled control of the variance of estimators through regularization of the re-weighting function w(x), mitigating the issues of exact importance sam- pling methods. Further, this allows us to evaluate w on hold-out samples to select hyperparameters or do early stopping. Finally, letting w depend on Φ alleviates the problem of choosing a metric by which to optimize sample weights, as Φ is trained to extract information predictive of the outcome. We capture these ideas in an upper bound on the generalization error under a shift in design and specialize it to the case of treatment effect estimation. bounds for prediction under design shift, without assuming access to importance sampling weights or to a well-specified model, and develop an algorithmic framework to minimize these bounds. We propose a neural network architecture that jointly learns a representation of the input and a weight- ing function to improve balance across changing settings. Finally, we apply our proposed algorithm to the task of predicting causal effects from observational data, achieving state-of-the art results on a widely used benchmark. ∈ Y (Imbens & Rubin, 2015, Ch. 1–2), which has a stationary distribution pt(Y in contexts X The goal of this work is to accurately predict outcomes of interventions T ∈ X drawn from a target design pπ(X, T ). The outcome of intervening with t is the potential out- X) come Y (t) given context X. 
Assuming a stationary outcome is akin to the covariate shift assumption (Shi- modaira, 2000), often used in domain adaptation.1 For example, in the classical binary setting, Y (1) represents the outcome under treatment and Y (0) the outcome under control. The target de- sign consists of two components: the target policy pπ(T X), which describes how one intends to map observations of contexts (such as patient prognostics) to interventions (such as pharmaco- logical treatments) and the target domain pπ(X), which describes the population of contexts to which the policy will be applied. The target design is known to us only through m unlabeled sam- ∈ T ∈ T | 1Equivalently, we may write pπ(Y (t) | X) = pµ(Y (t) | X). 2 Under review as a conference paper at ICLR 2018 1, t(cid:48) m, t(cid:48) 1), . . . , (x(cid:48) Y (T ) of the treatment administered. Like the target design, the source design consists of a domain of contexts for which we have data and a policy, which describes the (unknown) historical administra- tion of treatment in the data. Only the factual outcomes of the treatments administered are observed, while the counterfactual outcomes yi(t) for t = ti are, naturally, unobserved. Our focus is the observational or off-policy setting, in which interventions in the source design are = pµ(T ). This encapsulates both the performed non-randomly as a function of X, pµ(T covariate shift often observed between treated and control populations in observational studies and the covariate shift between the domain of the study and the domain of an eventual wider interven- tion. Examples of this problem are plentiful: in addition to the example given in the introduction, consider predicting the return of an advertising policy based on the historical results of a different policy, applied to a different population of customers. We stress that we are interested in the causal effect of an intervention T on Y , conditioned on X. As such, we cannot think of X and T as a sin- gle variable. Without additional assumptions, it is impossible to deduce the effect of an intervention based on observational data alone (Pearl, 2009), as it amounts disentangling correlation and causa- tion. Crucially, for any unit i, we can observe the potential outcome yi(t) of at most one intervention t. In our analysis, we make the following standard assumptions. ', 'original_lines': 'et al., 2015). Although representation learning can be shown to reduce the error associated to distri- butional shift (Long et al., 2015), standard approaches are biased, even in the limit of infinite data. Re-weighting methods correct for distributional shift by assigning higher weight to samples from the source design that are representative of the target design, often using importance sampling. This idea has been well studied in, for example, the causal inference (Rosenbaum & Rubin, 1983), do- main adaptation (Shimodaira, 2000) and reinforcement learning (Precup et al., 2001) literature. For example, in causal effect estimation, importance sampling is equivalent to re-weighting units by the probability of observed treatments. Re-weighting, with knowledge of importance sampling weights, often leads to asymptotically unbiased estimators of the target outcome, but may suffer from high variance in finite samples (Swaminathan & Joachims, 2015). A significant hurdle is that optimal weights are rarely known in practice. There are a variety of methods to learn these weights. 
Weights can be estimated as the inverse of estimated propensities (Rosenbaum & Rubin, 1983; Freedman & Berk, 2008) but this plug-in approach can lead to highly unstable estimates. More stable methods learn weights by minimizing distributional distance met- rics (Gretton et al., 2009; Kallus, 2016; 2017; Zubizarreta, 2015). Closely related, matching (Stuart, 2010) produces weights by finding units in the source design that are close in some metric to units in the target design. Specifying a distributional or unit-wise metric is challenging, especially if the input space is high-dimensional where no metric incorporating all features can ever be made small. This has inspired heuristics such as first performing variable selection and then finding a matching in the selected covariates. representation learning and re-weighting methods by jointly learning a representation Φ of the input space and a weighting function w(Φ) to minimize a) the re-weighted empirical risk and b) a re- weighted measure of distributional shift between designs. By letting w depend on Φ we alleviate the problem of choosing a metric by which to optimize unit weights, as Φ extracts information predictive of the outcome. At the same time, our theory still guarantees a uniform bound on true risk in the target design. This leads to a general algorithmic framework, and a natural bound on the generalization error under design shift. bounds for prediction under design shift, assuming that we have access to neither importance sam- pling weights nor a well-specified model, and develop an algorithmic framework to minimize these bounds. We propose a neural network architecture that jointly learns a representation of the input and a weighting function to improve balance across changing settings. We apply our proposed al- gorithm to the task of predicting causal effects from observational data, achieving state-of-the art results on a widely used benchmark. ∈ T ∈ Y ∈ T drawn from a target design pπ(X, T ). The outcome of intervening with t The goal of this work is to accurately predict outcomes of interventions T in contexts X is the ∈ X potential outcome Y (t) (Imbens & Rubin, 2015, Ch. 1–2), which has a stationary dis- tribution pt(Y X) given context X. For example, in the classical binary setting Y (1) repre- sents the outcome under treatment and Y (0) the outcome under control. The target design con- sists of two components: the target policy pπ(T X), which describes how one intends to map observations of contexts (such as patient prognostics) to interventions (such as pharmacological treatments) and the target domain pπ(X), which describes the population of contexts to which the policy will be applied. The target design is known to us only through m unlabeled sam- Y (T ) of the treatment administered. Like the target design, the source design consists of a domain of contexts for which we have data and a policy, which describes the (unknown) historical admin- istration of treatment in the data. Only the factual outcomes of the treatments administered are = ti are, naturally, unobserved. Our focus observed, while the counterfactual outcomes yi(t) for t 1), . . . , (x(cid:48) m, t(cid:48) 1, t(cid:48) 2 (cid:54) Under review as a conference paper at ICLR 2018 is the observational or off-policy setting, in which interventions in the source design are performed non-randomly as a function of X, pµ(T = pµ(T ). 
This encapsulates both the covariate shift often observed between treated and control populations in observational studies and the covariate shift between the domain of the study and the domain of an eventual wider intervention. Examples of this problem are plentiful: in addition to the example given in the introduction, consider predict- ing the return of an advertising policy based on the historical results of a different policy, applied to a different population of customers. We stress that we are interested in the causal effect of an intervention T on Y , conditioned on X. As such, we cannot think of X and T as a single variable. | Without additional assumptions, it is impossible to deduce the effect of an intervention based on ob- servational data alone (Pearl, 2009), as it amounts disentangling correlation and causation. Crucially, for any unit i, we can observe the potential outcome yi(t) of at most one intervention t. To make prediction of outcomes and causal effects possible, we make the following standard assumptions. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.1 RE-WEIGHTED RISK MINIMIZATION', 'after_section': None, 'context_after': '| Rπ(f ) := Ex,t,y∼pπ [(cid:96)f (x, t, y)] (xi, ti, yi) ', 'paragraph_idx': 12, 'before_section': None, 'context_before': '2.1 RE-WEIGHTED RISK MINIMIZATION ', 'modified_lines': 'We attempt to learn predictors f : X = x, T = t]. Recall that under Assumption 1, this conditional expectation is equal to the (possibly counterfactual) potential outcome Y (t), conditioned on X. Our goal is to ensure that hypotheses f are accurate under a design pπ that deviates from the data-generating process, pµ. This is unlike standard supervised learning for which pπ = pµ. We measure the (in)ability of f to predict outcomes under π, using the expected risk, such that f (x, t) approximates E[Y X × T → Y µ = ', 'original_lines': 'X ×T → Y such that f (x, t) approximates E[Y X = x, T = t]. We attempt to learn predictors f : Recall that under Assumption 1, this conditional expectation is equal to the (possibly counterfactual) potential outcome Y (t), conditioned on X. Our goal is to ensure that these hypotheses are accurate under a design pπ that deviates from the data-generating process, pµ. This is unlike the supervised learning setting in that the distribution pπ to which we wish to generalize is different from that which generated the training data, pµ. We measure the (in)ability of f to predict outcomes under π, using the expected risk, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.1 RE-WEIGHTED RISK MINIMIZATION', 'after_section': None, 'context_after': 'µ . Definition 1. A function w : R+ is a valid re-weighting of pµ if X × T → w(x, t) > 0. 2.2 CONDITIONAL TREATMENT EFFECT ESTIMATION An important special case of our setting is when treatments are binary, T 0, 1 } Y (0) − | (4) In Section 4, we argue that estimating CATE from observational data requires overcoming distribu- tional shift with respect to the treat-all and treat-none policies, in predicting each respective potential 3 RELATED WORK A large body of work has shown that under assumptions of ignorability and having a well- 4 GENERALIZATION UNDER DESIGN SHIFT We give a bound on the risk in predicting outcomes under a target design pπ(T, X) based on un- labeled samples from pπ and labeled samples from a source design pµ(T, X). Our result com- 4 Under review as a conference paper at ICLR 2018 Definition 2. 
The integral probability metric (IPM) distance, associated with a normed vector space of functions , between distributions p and q is, IPMH(p, q) := suph∈H | Ep[h] . | ', 'paragraph_idx': 12, 'before_section': '2.1 RE-WEIGHTED RISK MINIMIZATION', 'context_before': '(3) Unfortunately, importance sampling weights can be very large when pπ is large and pµ small, re- ', 'modified_lines': 'sulting in large variance in ˆRw∗ µ (f ) (Swaminathan & Joachims, 2015). More importantly, pµ(x, t) is rarely known in practice, and neither is w∗. In principle, however, any re-weighting function w with the following property yields a valid risk under the re-weighted distribution pw 3 (cid:54) (cid:54) Under review as a conference paper at ICLR 2018 We denote the re-weighted density pw Ex,t∼pµ [w(x, t)] = 1 and pµ(x, t) > 0 µ (x, t) := w(x, t)pµ(x, t). ⇒ A natural candidate in place of w∗ is an estimate ˆw∗ based on estimating densities pπ(x, t) and pµ(x, t). In this work, we adopt a different strategy, learning parameteric re-weighting functions w from observational data, that minimize an upper bound on the risk under pπ. ∈ { , often inter- preted as treating (T = 1) or not treating (T = 0) a unit, and the domain is fixed across de- signs, pµ(X) = pπ(X). This is the classical setting for estimating treatment effects—the effect of choosing one intervention over another (Morgan & Winship, 2014).2 The effect of an inter- vention T = 1 in context X, is measured by the conditional average treatment effect (CATE), τ (x) = E [Y (1) X = x]. Predicting τ for unobserved units typically involves prediction of both potential outcomes3. In a clinical setting, knowledge of τ is necessary to assess which medication should be administered to a certain individual. Historically, the (population) average treatment effect, ATE = Ex∼p[τ (x)], has received comparatively much more attention (Rosenbaum & Rubin, 1983), but is inadequate for personalized decision making. Using predictors f (x, t) of potential outcomes Y (t) in contexts X = x, we can estimate the CATE by ˆτ (x) = f (x, 1) f (x, 0) and measure the quality using the mean squared error (MSE), − outcome, and show how this can be used to derive generalization bounds for CATE. − MSE(ˆτ ) = Ep (cid:2)(ˆτ (x) τ (x))2(cid:3) specified model, various regression methods for counterfactual estimation are asymptotically consis- tent (Chernozhukov et al., 2017; Athey & Imbens, 2016; Belloni et al., 2014). However, consistency results like these provide little insight into the case of model misspecification. Under model misspec- ification, regression methods may suffer from additional bias when generalizing across designs due to distributional shift. A common way to alleviate this is importance sampling, see Section 2. This idea is used in propensity-score methods (Austin, 2011), that use treatment assignment probabilities (propensities) to re-weight samples for causal effect estimation, and more generally in re-weighted regression, see e.g. (Swaminathan & Joachims, 2015). A major drawback of these methods is the assumption that the design density is known. To address this, others (Gretton et al., 2009; Kallus, 2016), have proposed learning sample weights w to minimize the distributional distance between samples under pπ and pw µ , but rely on specifying the data representation a priori, without regard for which aspects of the data actually matter for outcome prediction and policy estimation. On the other hand, Johansson et al. (2016); Shalit et al. 
(2017) proposed learning representations for counterfactual inference, inspired by work in unsupervised domain adaptation (Mansour et al., 2009). The drawback of this line of work is that the generalization bounds of Shalit et al. (2017) and Long et al. (2015) are loose—even if infinite samples are available, they are not guaranteed to converge to the lowest possible error. Moreover, these approaches do not make use of important information that can be estimated from data: the treatment/domain assignment probabilities. bines representation learning, distribution matching and re-weighting, resulting in a tighter bound 2Notions of causal effects exist also for the non-binary case, but these are not considered here. 3This is sufficient but not necessary. than the closest related work, Shalit et al. (2017). The predictors we consider are compositions f (x, t) = h(Φ(x), t) where Φ is a representation of x and h an hypothesis. We first give an up- per bound on the risk in the general design shift setting, then show how this result can be used to bound the error in prediction of treatment effects. In Section 5 we give a result about the asymptotic properties of the minimizers of this upper bound. Risk under distributional shift Our bounds on the risk under a target design capture the intuition that if either a) the target design π and source design µ are close, or b) the true outcome is a simple function of x and t, the gap between the target risk and the re-weighted source risk is small. These notions can be formalized using integral probability metrics (IPM) (Sriperumbudur et al., 2009) that measure distance between distributions w.r.t. a normed vector space of functions H Eq[h] ', 'original_lines': 'sulting in large variance in ˆRw µ (f ) (Swaminathan & Joachims, 2015). More importantly, pπ(x, t) is rarely known in practice, and neither is w∗. In principle, however, any re-weighting function w with the following property yields a valid risk under the re-weighted distribution pw Ex,t∼pµ[w(x, t)] = 1 and pµ(x, t) > 0 ⇒ We denote the re-weighted density pw µ (x, t) := w(x, t)pµ(x, t). 3 (cid:54) Under review as a conference paper at ICLR 2018 A natural candidate in place of w∗ is an estimate ˆw∗ of the importance sampling weights. In this work, we adopt a different strategy, learning re-weighting functions w from observational data, that minimize an upper bound on the risk under π. Prediction under shifting design is of interest in several tasks, perhaps the most common of which are domain adaptation and counterfactual estimation. In this work, we focus on the latter, observing that the estimation of causal effects is an important special case. , often interpreted as treating (T = 1) or not treating (T = 0) a unit, and the domain is fixed across designs, pµ(x) = pπ(x). This is the classical setting for estimating treatment effects—the effect of choosing one intervention over another (Morgan & Winship, 2014).1 The effect of an intervention T in context X, is measured by the conditional average treatment effect (CATE), τ (x) = E [Y (1) X = x]. ∈ { Predicting τ for unobserved units involves prediction of both potential outcomes. In a clinical set- ting, this is necessary to assess which medication should be administered to a certain individual. Historically, the (population) average treatment effect, ATE = Ex∼p[τ (x)], has received compara- tively much more attention (Rosenbaum & Rubin, 1983), but is inadequate for personalized decision making. 
Using predictors f (x, t) of potential outcomes Y (t) in contexts X = x, we can estimate f (x, 0) and measure the quality using the mean squared error (MSE), the CATE by ˆτ (x) = f (x, 1) − MSE(ˆτ ) = Ep (cid:2)(ˆτ (x) τ (x))2(cid:3) − outcomes, and show how this can be used to give generalization bounds for CATE. specified model, various regression methods for counterfactual estimation are asymptotically con- sistent (Chernozhukov et al., 2017; Athey & Imbens, 2016; Belloni et al., 2014; Van Der Laan & Rubin, 2006). However, consistency results like these provide no insight in the case of model mis- specification. Under model misspecification, ordinary regression may suffer from (unnecessarily) large bias when generalizing across designs. A common way to alleviate this is importance sam- pling, see Section 2. This idea is used in propensity-score methods (Austin, 2011), that use treatment assignment probabilities (propensities) to re-weight samples for causal effect estimation, and more generally in re-weighted regression, see e.g. (Swaminathan & Joachims, 2015). A major drawback of these methods is the assumption that the design density is known. To address this, others (Gretton et al., 2009; Kallus, 2016), have proposed learning weights to minimize the distributional distance between samples under pπ and pµ, but rely on specifying the data representation a priori. Johansson et al. (2016); Shalit et al. (2017) proposed learning representations for counterfactual inference, inspired by work in unsupervised domain adaptation (Mansour et al., 2009). However, the generalization bounds of Johansson et al. (2016) do not apply to general hypotheses and the bounds of Shalit et al. (2017) and Long et al. (2015) are loose—even if infinite samples are available, they are not guaranteed to converge to the lowest possible error. Moreover, these approaches do not make use of important information that is can be estimated from data: the treatment assignment probabilities. bines distribution matching and re-weighting, resulting in a tighter bound than the closest related work (Shalit et al., 2017). The predictors we consider are compositions f (x, t) = h(Φ(x), t) where 1Notions of causal effects exist also for the non-binary case, but these are not considered here. Φ is a representation of x and h an hypothesis. We first state a result for the general design shift setting, then show how this result can be used to bound the error in prediction of treatment effects. In Section 5 we give a result about the asymptotic properties of the minimizers of the upper bound. H Our bounds build on the intuition that if either a) π is close to a re-weighting of µ, or b) the true outcome is a simple function of x and t, the gap between the target error and the re-weighted source error is small. This can be formalized using integral probability metrics (IPM) (Sriperumbudur et al., 2009) that measure distance between distributions w.r.t. a normed vector space of functions . Eq[h] ', 'after_paragraph_idx': None, 'before_paragraph_idx': 12}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'We follow the setup of Shalit et al. (2017), and consider learning twice-differentiable, invertible representations Φ : representation, such that Ψ(Φ(x)) = x for all x. Let denote space of such representation functions. π,Φ(z, t) := Z × T pπ,Φ(z, t)w(Ψ(z), t) its re-weighted form and ˆpw π,Φ its re-weighted empirical form, following our , Φ Theorem 1. 
Given is a labeled sample (x1, t1, y1), ..., (xn, tn, yn) from pµ, and an unlabeled sample (x(cid:48) m) from pπ, with corresponding empirical measures ˆpµ and ˆpπ. Sup- pose that Φ is a twice-differentiable, invertible representation, that h(Φ, t) is an hypothesis, and X = x, T = t], let (cid:96)h,Φ(Ψ(z), t) := f = h(Φ(x), t) L(h(z, t), mt(Ψ(z))) where L is the squared loss, L(y, y(cid:48)) = (y − exists a constant BΦ > 0 such that (cid:96)h,Φ/BΦ Z × T → Y} H . Finally, let w be a valid , where ∞ F n,δ n3/8 Φ,H δ D 1 √n + σ2 Y (6) F where F D Vµ(w, (cid:96)f ) = max (cid:16)(cid:113) (cid:113) (cid:17) f (x, t)] . (cid:96)f (cid:107) Theorem 1 is immediately applicable to the case of unsupervised domain adaptation in which there = is only a single potential outcome of interest, . In this case, pµ(T X) = pπ(T 0 } ', 'paragraph_idx': 26, 'before_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_before': 'is the representation space, and Ψ : ', 'modified_lines': 'is the inverse For a design π, we let pπ,Φ(z, t) be the distribution induced by Φ over denote a set of hypotheses h(Φ, t) previous notation. Finally, we let operating on the representation Φ and let f = h(Φ(x), t) : { h . We can now relate the expected target risk Rπ(f ) to the re-weighted empirical ∈ E} source risk ˆRw . Define mt(x) = EY [Y the space of all compositions, G ⊆ { F Z × T → Y} 1), ..., (x(cid:48) , with pw µ (f ). m, t(cid:48) 1, t(cid:48) ∈ G h : = F ∈ F | 5 Under review as a conference paper at ICLR 2018 ducing kernel Hilbert space of a kernel, k such that k((z, t), (z, t)) < re-weighting of pµ,Φ. Then with probability at least 1 ∈ H ⊆ { h : 2δ, y(cid:48))2, and assume that there is a repro- − (cid:18) 1 √m H m,n,δ a function of the kernel norm of + + (cid:19) , H Rπ(f ) ≤ ˆRw µ (f ) + BΦIPMH(ˆpπ,Φ, ˆpw µ,Φ) + Vµ(w, (cid:96)f ) C n,δ is a function of the pseudo-dimension of C , both only with logarithmic dependence on n and m, σ2 Y is the expected variance in Y , and Epµ[w2(x, t)(cid:96)2 f (x, t)], E ˆpµ[w2(x, t)(cid:96)2 A similar bound exists where H the Wasserstein distance, but with worse sample complexity. is the family of functions Lipschitz constant at most 1, and IPMH See Appendix A.2 for a proof of Theorem 1 that involves applying finite-sample generalization bounds to Lemma 1, as well as moving to the space induced by the representation Φ. Theorem 1 has several implications: non-identity feature representations, non-uniform sample weights, and variance control of these weights can all contribute to a lower bound. Using uni- form weights w(x, t) = 1 in (6), results in a bound similar to that of Shalit et al. (2017) and Long et al. (2015). When π = µ, minimizing uniform-weight bounds results in biased hypotheses, even in the asymptotical limit, as the IPM term does not vanish when the sample size increases. This is an undesirable property, as even k-nearest-neighbor classifiers are consistent in the limit of infinite samples. We consider minimizing (6) with respect to w, improving the tightness of the bound. Theorem 1 indicates that even though importance sampling weights w∗ yield estimators with small bias, they can suffer from high variance, as captured by the factor Vµ(w, (cid:96)f ). The factor BΦ is H as well as not known in general as it depends on the true outcome, and is determined by (cid:107) the determinant of the Jacobian of Ψ, see Appendix A.2. Qualitatively, BΦ measures the joint complexity of Φ and (cid:96)f and is sensitive to the scale of Φ—as the scale of Φ vanishes, BΦ blows up. 
To prevent this in practice, we normalize Φ. As BΦ is unknown, Shalit et al. (2017) substituted a hyperparameter α for BΦ, but discussed the difficulties of selecting its value without access to counterfactual labels. In our experiments, we explore a heuristic for adaptively choosing α, based on measures of complexity of the observed held-out loss as a function of the input. Finally, the Φ,H term from δ D concentration results for estimating IPMs (Sriperumbudur et al., 2012), see Appendix A.2. F n,δ follows from standard learning theory results (Cortes et al., 2010) and , and F C X) = 1. ', 'original_lines': 'The idea of learning representations to minimize distributional shift in representation space, and thus the source-target error gap, has been applied to domain adaptation (Ajakan et al., 2014), algorithmic fairness (Zemel et al., 2013) and counterfactual prediction (Shalit et al., 2017), to name a few. is the inverse For a design π, we let pπ,Φ(z, t) be the distribution induced by Φ over denote a set of hypotheses h(Φ, t) operating on the previous notation. Let representation Φ and let . ∈ C} We can now relate the expected target risk Rπ(f ) to the re-weighted empirical source risk ˆRw µ (f ). y(cid:48))2, and assume that there is a repro- ducing kernel Hilbert space of a kernel, k such that k((z, t), (z, t)) < re-weighting of pµ,Φ. Then with probability at least 1 f = h(Φ(x), t) : h { . Define mt(x) = EY [Y the space of all compositions, Z × T → Y} G ⊆ { F 1), ..., (x(cid:48) , with pw ∈ H ⊆ { m, t(cid:48) ∈ F 1, t(cid:48) ∈ G h : 2δ, h : = F | − Rπ(f ) ≤ ˆRw µ (f ) + BΦIPMH(ˆpπ,Φ, ˆpw µ,Φ) + Vµ(w, (cid:96)f ) C + (cid:18) 1 √m + (cid:19) n,δ measures the capacity of C the capacity of , σ2 H Y is the expected variance in potential outcomes, and and has only logarithmic dependence on n, H m,n,δ measures A similar bound exists where sample complexity. H Epµ[w2(x, t)(cid:96)2 f (x, t)], E ˆpµ[w2(x, t)(cid:96)2 is the family of functions Lipschitz constant at most 1, but with worse 5 Under review as a conference paper at ICLR 2018 See Appendix A.2 for a proof that involves applying finite-sample generalization bounds to the first inequality in Lemma 1, as well as moving to the space induced by the representation Φ. Using uniform weights w(x, t) = 1 in (6), results in a bound similar to that of Shalit et al. (2017) = µ, minimizing the resulting bound results biased hypotheses, even and Long et al. (2015). For π in the asymptotical limit, as the IPM term does not vanish when the sample size increases. This is a rather undesirable property, as even k-nearest-neighbor classifiers are consistent in the limit of infinite samples. Instead, we consider minimizing (6) with respect to w, improving the tightness of the bound. Recall from Lemma 1 that there exist weights w for which Rπ(f ) = Rw µ (f ). Theorem 1 indicates, as noted by for example Cortes et al. (2010), that even though importance sampling weights w∗ yield estimators with small bias, they can suffer from high variance due to the factor Vµ(w, (cid:96)f ). The factor BΦ is not known in general as it depends on the true outcome, H as well as the determinant of the Jacobian of Ψ, see the appendix. and is determined by (cid:107) Qualitatively, BΦ measures the joint complexity of Φ and (cid:96)(r, t). In practice, Shalit et al. (2017) substituted a hyperparameter α for BΦ, but discussed the difficulties of selecting its value without access to counterfactual labels. 
In our experiments, we explore a heuristic for adaptively choosing α, based on measures of complexity of the observed held-out loss as a function of the input. X). ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 26}, {'section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'after_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_after': 'the conditional average treatment effect, MSE(ˆτ ) can be bounded by the sum of risks under the constant treat-all and treat-none policies. As in Section 2.2, we consider the case of a fixed domain = ', 'paragraph_idx': 31, 'before_section': '4 GENERALIZATION UNDER DESIGN SHIFT', 'context_before': '| ', 'modified_lines': 'Conditional average treatment effects A simple argument shows that the error in predicting ', 'original_lines': 'Conditional Average Treatment Effects A simple argument shows that the error in predicting ', 'after_paragraph_idx': 31, 'before_paragraph_idx': 31}, {'section': '6 (cid:54)', 'after_section': None, 'context_after': 'min f ∈F ', 'paragraph_idx': 34, 'before_section': None, 'context_before': 'min h,Φ,w L ', 'modified_lines': 'π(h, Φ, w; β) ', 'original_lines': 'π(h, Φ, w; β)] ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 (cid:54)', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 OLS OLS-IPW ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'Rπ(f ) + Op(1/√n + 1/√m) . ', 'modified_lines': 'Consequently, under the assumptions of Thm. 1, for sufficiently large α and λw, min f ∈F Rπ(f ) + Op(1/n3/8 + 1/√m). Rπ( ˆfn) ≤ In words, the minimizers of (8) converge to the representation and hypothesis that minimize the counterfactual risk, in the limit of infinite samples. 5.1 IMPLEMENTATION L π(h, Φ, w; β) over h, Φ and w is, while motivated by Theorem 2, a difficult op- Minimization of timization problem to solve in practice. For example, adjusting w to minimize the empirical risk term may result in overemphasizing “easy” training examples, resulting in a poor local minimum. Perhaps more importantly, ensuring invertibility of Φ is challenging for many representation learn- ing frameworks, such as deep neural networks. In our implementation, we deviate from theory on these points, by fitting the re-weighting w based only on imbalance and variance terms, and don’t explicitly enforce invertibility. As a heuristic, we split the objective, see (8), in two and use only the IPM term and regularizer to learn w. In short, we adopt the following alternating procedure. π (Φk, w; D, α, λw) w h π(h, Φ, w; D, α, λh), wk+1 = arg min hk, Φk = arg min (9) h,Φ L w L The re-weighting function w(x, t) could be represented by one free parameter per training point, as it is only used to learn the model, not for prediction. However, we propose to let w be a parametric function of Φ(x). Doing so ensures that information predictive of the outcome is used for balancing, and lets us compute weights on a hold-out set, to perform early stopping or select hyperparameters. This is not possible with existing re-weighting methods such as Gretton et al. (2009); Kallus (2016). An example architecture for the treatment effect estimation setting is presented in Figure 1. By Proposition 1, estimating treatment effects involves predicting under the two constant policies— treat-everyone and treat-no-one. In Section 6, we evaluate our method in this task. As noted by Shalit et al. 
(2017), choosing hyperparameters for counterfactual prediction is fun- damentally difficult, as we cannot observe ground truth for counterfactuals. In this work, we ex- plore setting the balance parameter α adaptively. α is used in (8) in place of BΦ, a factor mea- suring the complexity of the loss and representation function as functions of the input, a quantity that changes during training. As a heuristic, we use an approximation of the Lipschitz constant of (cid:96)f , with f = h(Φ(x), t), based on observed examples: αh,Φ = maxi,j∈[n] | (cid:96)f (xj, tj, yj) 2. We use a moving average over batches to improve stability. (cid:107) (cid:96)f (xi, ti, yi) / (cid:107) | xj xi − − 6 EXPERIMENTS 6.1 SYNTHETIC EXPERIMENTS FOR DOMAIN ADAPTATION We create a synthetic domain adaptation experiment to highlight the benefit of using a learned re- weighting function to minimize weighted risk over using importance sampling weights w∗(x) = 7 Table 1: Causal effect estimation on IHDP. CATE error RMSE(ˆτ ), target prediction error ˆRπ(f ) and std errors. Lower is better. ', 'original_lines': '6 (cid:54) Table 1: Causal effect estimation on IHDP. CATE error, target prediction error and std er- rors. Lower is better. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.1 SYNTHETIC EXPERIMENTS FOR DOMAIN ADAPTATION', 'after_section': None, 'context_after': 'the expected difference between potential outcomes conditioned on pre-treatment variables, for a held-out sample of the population. We compare our results to ordinary least squares (OLS) (with one regressor per outcome), OLS-IPW (re-weighted OLS according to a logistic regression estimate of propensities), Random Forests, Causal Forests (Wager & Athey, 2017), BART (Chipman et al., trained by minimizing (8). We use the RBF-kernel maximum mean discrepancy as the IPM (Gret- ton et al., 2012). For a description of the architecture, training procedure and hyperparameters, see Appendix B. We compare results using uniform w = 1 and learned weights, setting the balance parameter α either fixed, by an oracle (test-set error), or adaptively using the heuristic described in Section 5. To pick other hyperparameters, we split training sets into one part used for function fitting and one used for early stopping and hyperparameter selection. Hyperparameters for regularization ∈ ', 'paragraph_idx': 41, 'before_section': None, 'context_before': '± ± ', 'modified_lines': 'Figure 1: Architecture for predicting out- comes under design shift. A re-weighting function w is fit jointly with a representa- tion Φ and hypothesis h of the potential out- comes, to minimize a bound on the target risk. Dashed lines are not back-propagated through. Regularization penalties not shown. N ∼ N (0, 1) and let y = σ(β(cid:62)x + c) where σ(z) = 1/(1 + e−z). (x; mπ, Id) where Id is the d-dimensional identity matrix, mµ = b1d/2, mπ = pπ(x)/pµ(x) for small sample sizes. We observe n labeled source samples, distributed according (x; mµ, Id) and predict for n unlabeled target samples drawn according to pπ(x) = to pµ(x) = b1d/2 and − N (0d, 1.5Id) 1d is the d-dimensional vector of all 1:s, here with b = 1, d = 10. We let β ∼ N and c Importance sampling weights, w∗(x) = pπ(x)/pµ(x), are known. In experiments, we vary n from 10 to 600. 
We fit (misspecified) linear models4 f (x) = β(cid:62)x + γ to the logistic outcome, and compare minimizing a weighted source risk by a) parameterizing sample weights as a small feed-forward neural network to minimize (8) (ours) b) using importance sampling weights (baseline), both using gradient descent. For our method, we add a small variance penalty, λw = 10−3, to the learned weights, use MMD with an RBF-kernel of σ = 1.0 as IPM, and let α = 10. We compare to exact importance sampling weights (IS) as well as clipped IS weights (ISC), wM (x) = min(w(x), M ) for M 5, 10 , a } common way of reducing variance of re-weighting methods (Swaminathan & Joachims, 2015). ∈ { In Figure 2a, we see that our proposed method behaves well at small sample sizes compared to importance sampling methods. The poor performance of exact IS weights is expected at smaller samples, as single samples are given very large weight, resulting in hypotheses that are highly sen- sitive to the training set. While clipped weights alleviates this issue, they do not preserve relevance ordering of high-weight samples, as many are given the truncation value M , in contrast to the re- weighting learned by our method. Recall also that true domain densities are known only to IS methods, but not to ours. 6.2 CONDITIONAL AVERAGE TREATMENT EFFECTS — IHDP We evaluate our framework in the CATE estimation setting, see Section 2.2. Our task is to predict 2010), and CFRW (Shalit et al., 2017) (with Wasserstein penalty). Finally, we use as baseline (IPM- WNN): first weights are found by IPM minimization in the input space (Gretton et al., 2009; Kallus, 2016), then used in a re-weighted neural net regression, with the same architecture as our method. Our implementation, dubbed RCFR for Re-weighted CounterFactual Regression, parameterizes representations Φ(x), weighting functions w(Φ, t) and hypotheses h(Φ, t) using neural networks, 4The identity representation Φ(x) = x is used for both approaches. 8 𝑥Φℎ𝑤IPM(𝑝+,-,𝑝.,-/)𝑤ℓ𝑡ContextRepres.HypothesisWeightingImbalanceWeightedriskTreatmentDNN Under review as a conference paper at ICLR 2018 (a) Target prediction error on synthetic domain adaptation experiment, comparing learned re- weighting (RCFR) and exact/clipped importance sampling weights (IS/ISC). The high variance of IS hurts performance for small sample sizes. (b) For small imbalance penalties α, re-weighting (low λw) has no effect. For moderate α, less uni- form re-weighting (smaller λw) improves the error, c) for large α, weighting helps, but overall error in- creases. Best viewed in color. are chosen based on the empirical loss on a held-out source (factual) sample. The Infant Health and Development Program (IHDP) dataset is a semi-synthetic binary-treatment benchmark (Hill, 2011), split into training and test sets by Shalit et al. (2017). IHDP has a set of d = 25 real-world continuous and binary features describing n = 747 children and their mothers, a real-world binary treatment made non-randomized through biased subsampling by Hill (2011), and a synthesized continuous outcome that can be used to compute the ground-truth CATE error. Average results over 100 different realizations/settings of the outcome are presented in Table 1. We see that our proposed method achieves state-of-the-art results, and that adaptively choosing α does not hurt performance much. Furthermore, we see a substantial improvement from using non- uniform sample weights. 
In Figure 2b we take a closer look at the behavior of our model as we vary its hyperparameters on the IHDP dataset. Between the two plots we can draw the following [10, 100], we observe a marginal gain from using the IPM conclusions: a) For moderate to large α penalty. This is consistent with the observations of Shalit et al. (2017). b) For large α [10, 1000], we see a large gain from using a non-uniform re-weighting (small λw). c) While large α makes the factual error more representative of the counterfactual error, using it without re-weighting results in higher absolute error. We believe that the moderate sample size of this dataset is one of the reasons for the usefulness of our method. See Appendix C.2 for a complementary view of these results. ', 'original_lines': 'Figure 1: Architecture for predicting outcomes under design shift. A re-weighting function w is fit jointly with a representation Φ and hypoth- esis h of the potential outcomes, to minimize a bound on the target risk. Dashed lines are not back-propagated through. Regularization penal- ties not shown. Consequently, under the assumptions of Thm. 1, for sufficiently large α and λw, Rπ( ˆfn) min f ∈F ≤ Rπ(f ) + Op(1/n3/8 + 1/√m). In words, the minimizers of (8) converge to the representation and hypothesis that minimize the counterfactual risk, in the limit of infinite samples. L π(h, Φ, w; β) directly over h, Φ and w is justified by Theorem 2, we note that While minimizing adjusting w to minimize the empirical risk term serves little purpose, as it may result in overem- phasizing “easy” training examples, especially if α is small. Instead, as a heuristic, we split the objective in two, see (8), and use only the IPM term and regularizer to learn w. In short, we solve the following alternating minimization problem. hk, Φk = arg min h,Φ L π(h, Φ, w; D, α, λh), wk+1 = arg min h w π (Φk, w; D, α, λw) w L (9) In principle, the weighting function w(x, t) could be represented as one free parameter per training point, as it is only used to learn the model, not for prediction. However, we propose to let w be a parametric function of Φ(x). Doing so enables us to both control the smoothness of the weights more flexibly, and lets us compute weights on a hold-out set, for example to perform early stopping. An example architecture for this framework is presented in Figure 1. As noted previously, estimating treatment effects involves predicting under the two constant policies—treat-everyone and treat-no- one. In Section 6, we evaluate our framework in this task. As noted by Shalit et al. (2017), choosing hyperparameters for prediction outcomes is fundamentally difficult, as we cannot observe ground truth for counterfactuals. In this work, we explore setting the balance parameter α adaptively. α is used in (8) in place of a factor measuring the complexity of the loss and representation function as functions of the input, a quantity that evolves during training. As a heuristic, we use an approximation of the Lipschitz constant of (cid:96)f based on observed examples, with f = h(Φ(x), t), αh,Φ = maxi,j∈[n] | 2. We use a moving (cid:107) average over batches to encourage stability. (cid:96)f (xj, tj) | (cid:96)f (xi, ti) / (cid:107) xj xi − − 6 EXPERIMENTS We evaluate our framework in the CATE estimation setting, see Section 2.2—our task is to predict 2010), and CFRW (Shalit et al., 2017) (with Wasserstein penalty). 
Finally, we use as baseline (IPM-WNN): first weights are found by IPM minimization in the input space (Gretton et al., 2009; Kallus, 2016), then used in a re-weighted neural net regression, with the same architecture as our 7 𝑥Φℎ𝑤IPM(𝑝+,-,𝑝.,-/)𝑤ℓ𝑡ContextRepres.HypothesisWeightingImbalanceWeightedriskTreatmentDNN Under review as a conference paper at ICLR 2018 Figure 2: Error in CATE estimation on IHDP as a function of re-weighting regularization strength λw (left) and source prediction error (right). We see in the left-hand plot that a) for small imbalance penalties α, re-weighting (low λw) has no effect, b) for moderate α, less uniform re-weighting (smaller λw) improves the error, c) for large α, weighting helps, but overall error increases. In the right-hand plot, we compare the ratio of CATE error to source error. Color represents α (see left) and size λw. We see that for large α, the source error is more representative of CATE error, but does not improve in absolute value without weighting. Here, α was fixed. Best viewed in color. method. We use IHDP as benchmark, a semi-synthetic binary-treatment dataset (Hill, 2011), split into training and test sets by Shalit et al. (2017). IHDP has synthesized continuous outcomes that can be used to compute the ground-truth CATE error. Our implementation, dubbed RCFR for re-weighted Counterfactual Regression, parameterizes rep- resentations Φ(x), weighting functions w(Φ, t) and hypotheses h(Φ, t) using neural networks, are chosen based on the empirical risk on a held-out source (factual) sample. We present the results of our evaluation on IHDP in Table 1. We see that our proposed method achieves state-of-the-art results, and that adaptively choosing α does not hurt performance much. Furthermore, we see a substantial improvement from using non-uniform sample weights. In Figure 2 we take a closer look at the behavior of our model as we vary its hyperparameters on the IHDP dataset. Between the two plots we can draw the following conclusions: a) For moderate to large α [10, 100], we observe a marginal gain from using the IPM penalty. This is consistent with the [10, 1000], we see a large gain from using a observations of Shalit et al. (2017). b) For large α non-uniform re-weighting (small λw). c) While large α makes the factual error more representative of the counterfactual error, using it without re-weighting results in higher absolute error. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'weighting methods either use pre-defined weights or learn weights based on a measure of distribu- tional distance in the input space. These approaches are highly sensitive to the choice of metric used to measure balance, as the input may be high-dimensional and contain information that is not pre- ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'We have proposed a theory and an algorithmic framework for learning to predict outcomes of in- terventions under shifts in design—changes in both intervention policy and feature domain. The framework combines representation learning and sample re-weighting to balance source and tar- ', 'modified_lines': 'get designs, emphasizing information from the source sample relevant for the target. Existing re- ', 'original_lines': 'get designs, emphasizing information from the source sample relevant to the target. 
Existing re- ', 'after_paragraph_idx': 5, 'before_paragraph_idx': None}, {'section': '6.1 SYNTHETIC EXPERIMENTS FOR DOMAIN ADAPTATION', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'studies for causal effects. Biometrika, 70(1):41–55, 1983. ', 'modified_lines': '10 ', 'original_lines': '9 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 CONDITIONAL TREATMENT EFFECT ESTIMATION', 'after_section': None, 'context_after': 'the distribution induced by Φ over π,Φ(z, t) := pπ,Φ(z, t)w(Ψ(z), t) its re-weighted form and ˆpw ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'is the inverse representation, such that Ψ(Φ(x)) = x for all x. Let denote space of such representation functions. For a design π, we let pπ,Φ(z, t) be ', 'modified_lines': 'E ', 'original_lines': 'C ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'm) } ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1), ..., (x(cid:48) ', 'modified_lines': '', 'original_lines': '(x(cid:48) { ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'F n,δ measures the capacity of ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Y (11) ', 'modified_lines': '', 'original_lines': '2δ, − ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Rπ(f ) − ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Proof. We have by definition ', 'modified_lines': '', 'original_lines': 'is the family of functions Lipschitz constant at most 1, but with worse ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 δ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '≤ 18ν2 log ', 'modified_lines': '', 'original_lines': ' . − δ, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'h(Φ∗(x(cid:48) i), t(cid:48) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'm (cid:88) ', 'modified_lines': '', 'original_lines': 'i=1 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2018-01-05 21:46:17 |
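The revision recorded above describes a heuristic for adaptively choosing the balance parameter α: a batch approximation of the Lipschitz constant of the per-example loss with respect to the input, alpha = max over i,j of |l_i - l_j| / ||x_i - x_j||_2, smoothed with a moving average over batches. A minimal sketch of that heuristic follows; the function names, the epsilon guard, and the decay value are our assumptions, not the paper's implementation.

```python
import numpy as np

def adaptive_alpha(per_example_loss, X, eps=1e-12):
    """Batch approximation of the Lipschitz constant of the loss w.r.t.
    the input: alpha = max_{i,j} |l_i - l_j| / ||x_i - x_j||_2."""
    l = np.asarray(per_example_loss, dtype=float)
    X = np.asarray(X, dtype=float)
    loss_diffs = np.abs(l[:, None] - l[None, :])                  # |l_i - l_j|
    input_dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    ratios = loss_diffs / (input_dists + eps)                     # eps guards /0
    np.fill_diagonal(ratios, 0.0)                                 # ignore i == j
    return float(ratios.max())

class MovingAverage:
    """Smooths the per-batch alpha estimates, as the revision suggests."""
    def __init__(self, decay=0.9):
        self.decay, self.value = decay, None
    def update(self, x):
        self.value = x if self.value is None else (
            self.decay * self.value + (1.0 - self.decay) * x)
        return self.value
```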
| ICLR.cc/2018/Conference | H1-Ezd6mG | B1AU4Oamz |
[{'section': '6 (cid:54)', 'after_section': None, 'context_after': 'min f ∈F Rπ(f ) + Op(1/n3/8 + 1/√m). In words, the minimizers of (8) converge to the representation and hypothesis that minimize the counterfactual risk, in the limit of infinite samples. L π (Φk, w; D, α, λw) w (9) The re-weighting function w(x, t) could be represented by one free parameter per training point, as it is only used to learn the model, not for prediction. However, we propose to let w be a parametric ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'Rπ(f ) + Op(1/√n + 1/√m) . Consequently, under the assumptions of Thm. 1, for sufficiently large α and λw, ', 'modified_lines': ' Rπ( ˆfn) ≤ π(h, Φ, w; β) over h, Φ and w is, while motivated by Theo- Implementation Minimization of rem 2, a difficult optimization problem to solve in practice. For example, adjusting w to minimize the empirical risk term may result in overemphasizing “easy” training examples, resulting in a poor local minimum. Perhaps more importantly, ensuring invertibility of Φ is challenging for many rep- resentation learning frameworks, such as deep neural networks. In our implementation, we deviate from theory on these points, by fitting the re-weighting w based only on imbalance and variance terms, and don’t explicitly enforce invertibility. As a heuristic, we split the objective, see (8), in two and use only the IPM term and regularizer to learn w. In short, we adopt the following alternating procedure. hk, Φk = arg min h,Φ L π(h, Φ, w; D, α, λh), wk+1 = arg min h w L ', 'original_lines': ' Rπ( ˆfn) ≤ 5.1 IMPLEMENTATION π(h, Φ, w; β) over h, Φ and w is, while motivated by Theorem 2, a difficult op- Minimization of timization problem to solve in practice. For example, adjusting w to minimize the empirical risk term may result in overemphasizing “easy” training examples, resulting in a poor local minimum. Perhaps more importantly, ensuring invertibility of Φ is challenging for many representation learn- ing frameworks, such as deep neural networks. In our implementation, we deviate from theory on these points, by fitting the re-weighting w based only on imbalance and variance terms, and don’t explicitly enforce invertibility. As a heuristic, we split the objective, see (8), in two and use only the IPM term and regularizer to learn w. In short, we adopt the following alternating procedure. h π(h, Φ, w; D, α, λh), wk+1 = arg min hk, Φk = arg min h,Φ L w L ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '6.2 CONDITIONAL AVERAGE TREATMENT EFFECTS — IHDP ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'samples, as single samples are given very large weight, resulting in hypotheses that are highly sen- sitive to the training set. While clipped weights alleviates this issue, they do not preserve relevance ordering of high-weight samples, as many are given the truncation value M , in contrast to the re- ', 'modified_lines': 'weighting learned by our method. True domain densities are known only to IS methods. ', 'original_lines': 'weighting learned by our method. Recall also that true domain densities are known only to IS methods, but not to ours. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6.2 CONDITIONAL AVERAGE TREATMENT EFFECTS — IHDP', 'after_section': '6.2 CONDITIONAL AVERAGE TREATMENT EFFECTS — IHDP', 'context_after': '4The identity representation Φ(x) = x is used for both approaches. 
', 'paragraph_idx': 45, 'before_section': '6.2 CONDITIONAL AVERAGE TREATMENT EFFECTS — IHDP', 'context_before': 'Appendix B. We compare results using uniform w = 1 and learned weights, setting the balance parameter α either fixed, by an oracle (test-set error), or adaptively using the heuristic described in Section 5. To pick other hyperparameters, we split training sets into one part used for function fitting ', 'modified_lines': 'and one used for early stopping and hyperparameter selection. Hyperparameters for regularization are chosen based on the empirical loss on a held-out source (factual) sample. ', 'original_lines': '', 'after_paragraph_idx': 46, 'before_paragraph_idx': 45}, {'section': '6.2 CONDITIONAL AVERAGE TREATMENT EFFECTS — IHDP', 'after_section': None, 'context_after': 'The Infant Health and Development Program (IHDP) dataset is a semi-synthetic binary-treatment benchmark (Hill, 2011), split into training and test sets by Shalit et al. (2017). IHDP has a set of ', 'paragraph_idx': 48, 'before_section': None, 'context_before': 'c) for large α, weighting helps, but overall error in- creases. Best viewed in color. ', 'modified_lines': 'Figure 2: Results for domain adaptation and causal effect estimation experiments. ', 'original_lines': 'and one used for early stopping and hyperparameter selection. Hyperparameters for regularization are chosen based on the empirical loss on a held-out source (factual) sample. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2018-01-05 21:55:34 |
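The same paper's revisions compare exact importance-sampling weights w*(x) = p_target(x)/p_source(x) against a clipped variant w_M(x) = min(w(x), M) for M in {5, 10} on a synthetic Gaussian shift (unit-covariance Gaussians with means at plus and minus b·1_d/2, b = 1, d = 10; the exact source/target assignment is garbled in the extracted diff, so take it as illustrative). A minimal sketch under that assumed setup:

```python
import numpy as np
from scipy.stats import multivariate_normal

d, b = 10, 1.0
mu_source = -b / 2 * np.ones(d)   # assumed assignment; see caveat above
mu_target = b / 2 * np.ones(d)

def is_weights(X, clip=None):
    """Exact IS weights w*(x) = p_target(x) / p_source(x); pass clip=M for
    the truncated variant w_M(x) = min(w*(x), M) used to reduce variance."""
    w = (multivariate_normal.pdf(X, mean=mu_target)
         / multivariate_normal.pdf(X, mean=mu_source))   # default cov = I_d
    return np.minimum(w, clip) if clip is not None else w

# Usage: w = is_weights(X_source, clip=5)   # the ISC baseline with M = 5
```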
| ICLR.cc/2018/Conference | B1AU4Oamz | ryz4DWWRW | [] | 2018-01-25 15:40:43 |
| ICLR.cc/2018/Conference | BJd5iM-0Z | Skdg2f-0- | [] | 2017-10-27 20:49:51 |
| ICLR.cc/2018/Conference | Skdg2f-0- | SJPwTz-0- |
[{'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_after': 'that we are trying to learn. We use the weak annotator to generate labels for the unlabeled samples. Generated labels are noisy due to the limited accuracy of the weak annotator. This gives us the weak dataset consisting of tuples of training samples xi and their weak labels ˜yi, i.e. Dw = {(xi,˜yi)}. Note ', 'paragraph_idx': 9, 'before_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_before': 'We now describe FWL, our proposed approach for semi-supervised learning when we have access to weak supervision (e.g. heuristics or weak annotators). We assume we are given a large set of unlabeled data samples, a heuristic labeling function called the weak annotator, and a small set of high-quality ', 'modified_lines': 'samples labeled by experts, called the strong dataset, consisting of tuples of training samples xi and their true labels yi, i.e. Ds = {(xi,yi)}. We consider the latter to be observations from the true target function ', 'original_lines': 'samples labeled by experts, called the strong dataset, consisting of tuples of training samples xj and their true labels yj, i.e. Ds = {(xj,yj)}. We consider the latter to be observations from the true target function ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_after': 'T (xt) = g(mpost(xt)) Σ(xt) = h(Kpost(xt,xt)) ', 'paragraph_idx': 17, 'before_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_before': 'dent representation ψ(.) and then use the teacher to generate a soft dataset Dsw consisting of (cid:104)sample,predicted label,confidence(cid:105) pairs for all data samples. ', 'modified_lines': 'We use a Gaussian process as teacher to capture the label uncertainty in terms of the student represen- tation, estimated w.r.t the strong data. We explain the finer details of the GP in Appendix D, and just present the overall description here. A prior mean and co-variance function is chosen for GP. The learned embedding function ψ(·) in Step 1 is then used to map the data samples to dense vectors as input to the GP. The GP is trained on this representation of the strong dataset to learn the posterior mean mpost (used to generate soft labels) and posterior co-variance Kpost(.,.) (which represents label uncer- tainty). We then create the soft dataset Dsw = {(xt,¯yt)} using the posterior GP, input samples xt from Dw ∪Ds, and predicted labels ¯yt with their associated uncertainties as computed by T (xt) and Σ(xt): ', 'original_lines': 'We use a Gaussian process as teacher to capture the label uncertainty in terms of the student representation, estimated w.r.t the strong data. We explain the finer details of the GP in Appendix D, and just present the overall description here. A prior mean and co-variance function is chosen for GP. The learned embedding function ψ(·) in Step 1 is then used to map the data samples to dense vectors as input to the GP. The GP is trained on this representation of the strong dataset to learn the posterior mean mpost (used to generate soft labels) and posterior co-variance Kpost(.,.) (which we use to generate confidence). 
We then create a new dataset Dsw = {(xt, ¯yt)} (the soft dataset) using the posterior GP, input samples xt from Dw ∪ Ds, and labels ¯yt with their associated uncertainties as computed by T (xt) and Σ(xt): ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_after': 'on the strong dataset Ds but then use it to generate soft labels ¯yt = T (xt) and uncertainty Σ(xt) for samples belonging to Dsw = Dw ∪Ds. ', 'paragraph_idx': 17, 'before_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_before': 'The generated labels are called soft labels. Therefore, we refer to Dsw as a soft dataset. g(.) transforms the output of GP to the suitable output space. For example in classification tasks, g(.) would be the softmax function to produce probabilities that sum up to one. For multidimensional-output tasks where ', 'modified_lines': 'a vector of variances is provided by the GP, the vector Kpost(xt,xt) is passed through an aggregating function h(.) to generate a scalar value for the uncertainty of each sample. Note that we train GP only ', 'original_lines': 'a vector of variances is provided by the GP, the vector Kpost(xt,xt) passes through an aggregating function h(.) to generate a scalar value for the confidence of each sample. Note that we train GP only ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': '3.2 DOCUMENT RANKING', 'after_section': None, 'context_after': '5 RankerWeightsCompositionalityEmbedding ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'Figure 3: The student for the docu- ment ranking task. ', 'modified_lines': 'Description of the data with weak labels and data with true labels as well as setups of the document- ranking experiments is presented in Appendix F.2 in more details. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Results and Discussions We conducted k-fold cross validation on Ds (the strong data) and report two standard evaluation metrics for ranking: mean average precision (MAP) of the top-ranked 1000 documents and normalized discounted cumulative gain calculated for the top 20 retrieved documents ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.2340IJ12345 0.2453IJ1234567 ', 'modified_lines': '', 'original_lines': 'Description of the data with weak labels and data with true labels as well as setups of the document- ranking experiments is presented in Appendix F.2 in more details. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '78 Robust04', 'after_section': None, 'context_after': '6 ', 'paragraph_idx': 46, 'before_section': '78 Robust04', 'context_before': 'For this task, since the amount of data with true labels are larger compared to the ranking task, the performance of NNS is acceptable. Alternately sampling from weak and strong data gives better ', 'modified_lines': 'results. Pretraining on weak labels then fine-tuning the network on true labels, further improves the performance. Weighting the gradient updates from weak labels during pretraining and fine-tuning the network with true labels, i.e. NNWω→S seems to work pretty good in this task. 
Similar to the ranking ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 46}, {'section': 'Abstract', 'after_section': None, 'context_after': 'task, fine-tuning NNS based on labels generated by GP instead of data with true labels, regardless of the confidence score, works better than standard fine-tuning. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Figure 4: The student for the senti- ment classification task. ', 'modified_lines': '', 'original_lines': 'results. Pretraining on weak labels then fine-tuning the network on true labels, further improves the performance. Weighting the gradient updates from weak labels during pretraining and fine-tuning the network with true labels, i.e. NNWω→S seems to work pretty good in this task. Similar to the ranking ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2017-10-27 20:55:59 |
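The FWL revisions above describe Step 2: fitting a Gaussian process teacher on the strong data, represented through the student's learned embedding ψ(.), and using its posterior mean and variance to build the soft dataset of (sample, soft label, uncertainty) triples. A minimal single-GP sketch follows; scikit-learn stands in for the clustered GPflow teacher the paper actually uses, and the kernel and noise settings are our assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_teacher(psi_strong, y_strong):
    """Fit the teacher GP on the small strong set, with inputs mapped
    through the student's embedding psi(.)."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
    return gp.fit(psi_strong, y_strong)

def soft_dataset(teacher, psi_all):
    """Soft labels y_bar = T(x) and a per-sample uncertainty Sigma(x),
    here the GP's predictive variance."""
    y_bar, std = teacher.predict(psi_all, return_std=True)
    return y_bar, std ** 2
```

For multidimensional outputs the revisions note that the vector of variances is passed through an aggregating function h(.) to obtain one scalar per sample; a mean over output dimensions would be one simple choice.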
| ICLR.cc/2018/Conference | SJPwTz-0- | B1Ujvm-AZ |
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'data and is thereby able to learn a better and more meaningful task-specific representation of the data. 2 FIDELITY-WEIGHTED LEARNING (FWL) ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'We introduce the proposed FWL approach in more detail in Section 2. We then present our experimental setup in Section 3 where we evaluate FWL on a toy task and two real-world tasks, namely document ', 'modified_lines': 'ranking and sentence sentiment classification. In both latter cases, FWL outperforms competitive base- lines and yields state-of-the-art results, indicating that FWL makes better use of the limited true labeled One may also view FWL from the perspective of Vapnik’s learning with privileged information (LUPI) framework (Vapnik & Izmailov, 2015), for which we provide empirical support in Section 4.2. We discuss this connection in more detail in Appendix A. ', 'original_lines': 'ranking and sentence sentiment classification. In both cases, FWL outperforms competitive baselines and yields state-of-the-art results, indicating that FWL makes better use of the limited true labeled ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': '3 EXPERIMENTS', 'after_section': None, 'context_after': 'Figure 3: The student for the docu- ment ranking task. 5 ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'The teacher is implemented by clustered GP algorithm. See Ap- pendix D for more details. ', 'modified_lines': 'The weak annotator is BM25 (Robertson & Zaragoza, 2009), a well-known unsupervised method for scoring query-document pairs based on statistics of the matched terms. More details are provided in Appendix E.1. ', 'original_lines': 'The weak annotator is BM25 (Robertson et al., 2009), a well-known unsupervised method for scoring query-document pairs based on statistics of the matched terms. More details are provided in Appendix E.1. Description of the data with weak labels and data with true labels as well as setups of the document- ranking experiments is presented in Appendix F.2 in more details. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '78 Robust04', 'after_section': '78 Robust04', 'context_after': 'Results and Discussions We conducted k-fold cross validation on Ds (the strong data) and report two standard evaluation metrics for ranking: mean average precision (MAP) of the top-ranked 1000 documents and normalized discounted cumulative gain calculated for the top 20 retrieved documents ', 'paragraph_idx': 38, 'before_section': None, 'context_before': '0.2340IJ12345 0.2453IJ1234567 ', 'modified_lines': 'Description of the data with weak labels and data with true labels as well as setups of the document- ranking experiments is presented in Appendix F.2 in more details. ', 'original_lines': '', 'after_paragraph_idx': 39, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '6 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'For this task, since the amount of data with true labels are larger compared to the ranking task, the performance of NNS is acceptable. Alternately sampling from weak and strong data gives better ', 'modified_lines': '', 'original_lines': 'results. Pretraining on weak labels then fine-tuning the network on true labels, further improves the performance. 
Weighting the gradient updates from weak labels during pretraining and fine-tuning the network with true labels, i.e. NNWω→S seems to work pretty good in this task. Similar to the ranking ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '78 9', 'after_section': '78 9', 'context_after': 'task, fine-tuning NNS based on labels generated by GP instead of data with true labels, regardless of the confidence score, works better than standard fine-tuning. ', 'paragraph_idx': 47, 'before_section': None, 'context_before': 'Figure 4: The student for the senti- ment classification task. ', 'modified_lines': 'results. Pretraining on weak labels then fine-tuning the network on true labels, further improves the performance. Weighting the gradient updates from weak labels during pretraining and fine-tuning the network with true labels, i.e. NNWω→S seems to work quite well in this task. Similar to the ranking ', 'original_lines': '', 'after_paragraph_idx': 47, 'before_paragraph_idx': None}]
| 2017-10-27 21:39:41 |
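Step 3 of FWL, referenced throughout the revisions above, fine-tunes the student with SGD while modulating the per-sample step size by the teacher's confidence (the paper's Equation 1, which is not reproduced in these diffs). A sketch under an assumed exponential form, exp(-beta * Sigma(x)), so that near-zero teacher variance keeps the full step and high variance damps it:

```python
import numpy as np

def per_sample_lr(base_lr, sigma2, beta=1.0):
    """Confidence-modulated step size. The exponential damping is our
    illustrative assumption, not the paper's exact Equation 1."""
    return base_lr * np.exp(-beta * np.asarray(sigma2, dtype=float))

# SGD sketch: w <- w - per_sample_lr(eta, Sigma(x)) * grad(loss(w, x, y_bar))
```

Here beta plays the role of the hyper-parameter the revisions analyze in Section 4.1, trading off the contribution of weak and strong data to training.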
| ICLR.cc/2018/Conference | B1Ujvm-AZ | H1fH9Xb0b | [] | 2017-10-27 21:50:50 |
| ICLR.cc/2018/Conference | H1fH9Xb0b | ByCIPkJXM |
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Assuming we can obtain a large set of weakly-labeled data in addition to a much smaller training set of “strong” labels, the simplest approach is to expand the training set by including the weakly-supervised ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'have to deal with data samples of variable quality. For example, in a large dataset of images only a small fraction of samples may be labeled by experts and the rest may be crowd-sourced using e.g. Amazon Mechanical Turk (Veit et al., 2017). In addition, in some applications, labels are intentionally ', 'modified_lines': 'perturbed due to privacy issues (Wainwright et al., 2012; Papernot et al., 2017). ', 'original_lines': 'perturbed due to privacy issues (Wainwright et al., 2012; Papernot et al., 2016). ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'In this paper, we argue that treating weakly-labeled samples uniformly (i.e. each weak sample contributes equally to the final classifier) ignores potentially valuable information of the label quality. Instead, we propose Fidelity-Weighted Learning (FWL), a Bayesian semi-supervised approach that leverages a small amount of data with true labels to generate a larger training set with 1 ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'observations from the true function or distribution (which we call strong data). Indeed, it has recently been shown that a small amount of expert-labeled data can be augmented in such a way by a large set of raw data, with labels coming from a heuristic function, to train a more accurate neural ranking ', 'modified_lines': 'model (Dehghani et al., 2017c). The downside is that such approaches are oblivious to the amount or source of noise in the labels. ', 'original_lines': 'model (Dehghani et al., 2017). The downside is that such approaches are oblivious to the amount or source of noise in the labels. confidence-weighted weakly-labeled samples, which can then be used to modulate the fine-tuning process based on the fidelity (quality) of each weak sample. By directly modeling the inaccuracies ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'introduced by the weak annotator in this way, we can control the extent to which we make use of this additional source of weak supervision: more for confidently-labeled weak samples close to the true observed data, and less for uncertain samples further away from the observed data. ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'from the true function, and Step 3: Fine-tune student on labels generated by teacher, taking the confidence into account. Red dotted borders and blue solid borders depict components with trainable and non-trainable parameters, respectively. ', 'modified_lines': 'confidence-weighted weakly-labeled samples, which can then be used to modulate the fine-tuning process based on the fidelity (or quality) of each weak sample. By directly modeling the inaccuracies ', 'original_lines': '', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'data and is thereby able to learn a better and more meaningful task-specific representation of the data. 
2 FIDELITY-WEIGHTED LEARNING (FWL) dataset consisting of tuples of training samples xi and their weak labels ˜yi, i.e. Dw = {(xi,˜yi)}. Note that we can generate a large amount of weak training data Dw at almost no cost using the weak annotator. In contrast, we have only a limited amount of observations from the true function, i.e. |Ds| (cid:28) |Dw|. ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'We introduce the proposed FWL approach in more detail in Section 2. We then present our experimental setup in Section 3 where we evaluate FWL on a toy task and two real-world tasks, namely document ', 'modified_lines': 'ranking and sentence sentiment classification. In all cases, FWL outperforms competitive baselines and yields state-of-the-art results, indicating that FWL makes better use of the limited true labeled Section 4 provides analysis of the bias-variance trade-off and the learning rate, suggesting also to view FWL from the perspective of Vapnik’s learning with privileged information (LUPI) framework (Vapnik & Izmailov, 2015). Section 5 situates FWL relative to related work, and we end the paper by drawing the main conclusions in Section 6. In this section, we describe our proposed FWL approach for semi-supervised learning when we have access to weak supervision (e.g. heuristics or weak annotators). We assume we are given a large set of unlabeled data samples, a heuristic labeling function called the weak annotator, and a small set of high- quality samples labeled by experts, called the strong dataset, consisting of tuples of training samples xi and their true labels yi, i.e. Ds = {(xi,yi)}. We consider the latter to be observations from the true target function that we are trying to learn. We use the weak annotator to generate labels for the unlabeled sam- ples. Generated labels are noisy due to the limited accuracy of the weak annotator. This gives us the weak ', 'original_lines': 'ranking and sentence sentiment classification. In both latter cases, FWL outperforms competitive base- lines and yields state-of-the-art results, indicating that FWL makes better use of the limited true labeled One may also view FWL from the perspective of Vapnik’s learning with privileged information (LUPI) framework (Vapnik & Izmailov, 2015), for which we provide empirical support in Section 4.2. We discuss this connection in more detail in Appendix A. We now describe FWL, our proposed approach for semi-supervised learning when we have access to weak supervision (e.g. heuristics or weak annotators). We assume we are given a large set of unlabeled data samples, a heuristic labeling function called the weak annotator, and a small set of high-quality samples labeled by experts, called the strong dataset, consisting of tuples of training samples xi and their true labels yi, i.e. Ds = {(xi,yi)}. We consider the latter to be observations from the true target function that we are trying to learn. We use the weak annotator to generate labels for the unlabeled samples. Generated labels are noisy due to the limited accuracy of the weak annotator. This gives us the weak ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': 'Abstract', 'after_section': None, 'context_after': '2 Representation LearningWeak AnnotatorPrediction losswrt. the weak labelsStudentLearned RepresentationsTeacherTraining the teacher on the observations from true function(strong data)Representation LearningPrediction losswrt. 
the labels generated by the teacherLearned RepresentationsTeacherStudent ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'approximator called the teacher. The training process consists of three phases which we summarize in Algorithm 1 and Figure 1. ', 'modified_lines': '', 'original_lines': 'Step 1 Pre-train the student on Dw using weak labels generated with the weak annotator. The goal of this stage is to learn a reasonably good representation of the data for the given task. The student function is a neural network consisting of two parts. The first part ψ(.) learns the data representation and the second part φ(.) performs the prediction task (e.g. classification). Therefore ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_after': '3: Train the student on samples from Dsw with SGD and modulate the step-size ηt according to the per-sample quality estimated using the teacher (Equation 1). Dw = {(xi,˜yi)}. For brevity, in the following, we will refer to both data sample xi and its representation Step 2 Train the teacher on the strong data (ψ(xj), yj) ∈ Ds represented in terms of the stu- dent representation ψ(.) and then use the teacher to generate a soft dataset Dsw consisting of T (xt) = g(mpost(xt)) Σ(xt) = h(Kpost(xt,xt)) ', 'paragraph_idx': 12, 'before_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_before': '1: Train the student on samples from the weakly-annotated data Dw. 2: Freeze the representation-learning component ψ(.) of the student and train teacher on the strong data ', 'modified_lines': 'Ds = (ψ(xj),yj). Apply teacher to unlabeled samples xt to obtain soft dataset Dsw = {(xt, ¯yt)} where ¯yt = T (xt) is the soft label and for each instance xt, the uncertainty of its label, Σ(xt), is provided by the teacher. Step 1 Pre-train the student on Dw using weak labels generated with the weak annotator. The main goal of this step is to learn a task dependent representation of the data as well as pretraining the student. The student function is a neural network consisting of two parts. The first part ψ(.) learns the data representation and the second part φ(.) performs the prediction task (e.g. classification). Therefore the overall function is ˆy = φ(ψ(xi)). The student is trained on all samples of the weak dataset ψ(xi) by xi when it is obvious from the context. From the self-supervised feature learning point of view, we can say that representation learning in this step is solving a surrogate task of approximating the expert knowledge, for which a noisy supervision signal is provided by the weak annotator. (cid:104)sample,predicted label, confidence(cid:105) for all data samples. We use a Gaussian process as teacher to capture the label uncertainty in terms of the student representation, estimated w.r.t the strong data. We explain the finer details of the GP in Appendix C, and just present the overall description here. A prior mean and co-variance function is chosen for GP. The learned embedding function ψ(·) in Step 1 is then used to map the data samples to dense vectors as input to the GP. We use the learned representation by the student in the previous step to compensate lack of data in Ds and the teacher can enjoy the learned knowledge from the large quantity of the weakly annotated data. This way, we also let the teacher to see the data through the lens of the student. 
The GP is trained on the samples from Ds to learn the posterior mean mpost (used to generate soft labels) and posterior co-variance Kpost(.,.) (which represents label uncertainty). We then create the soft dataset Dsw = {(xt,¯yt)} using the posterior GP, input samples xt from Dw ∪Ds, and predicted labels ¯yt with their associated uncertainties as computed by T (xt) and Σ(xt): ', 'original_lines': 't ) where Ds = (ψ(xj),yj). Apply teacher to unlabeled samples xt to obtain soft dataset Dsw = (xt, ¯yt,x∗ ¯yt = T (xt) is the soft label and Σ(xt) is the uncertainty provided by the teacher. the overall function is ˆy = φ(ψ(xi)). The student is trained on all samples of the weak dataset ψ(xi) by xi when it is obvious from the context. (cid:104)sample,predicted label,confidence(cid:105) pairs for all data samples. We use a Gaussian process as teacher to capture the label uncertainty in terms of the student represen- tation, estimated w.r.t the strong data. We explain the finer details of the GP in Appendix D, and just present the overall description here. A prior mean and co-variance function is chosen for GP. The learned embedding function ψ(·) in Step 1 is then used to map the data samples to dense vectors as input to the GP. The GP is trained on this representation of the strong dataset to learn the posterior mean mpost (used to generate soft labels) and posterior co-variance Kpost(.,.) (which represents label uncer- tainty). We then create the soft dataset Dsw = {(xt,¯yt)} using the posterior GP, input samples xt from Dw ∪Ds, and predicted labels ¯yt with their associated uncertainties as computed by T (xt) and Σ(xt): ', 'after_paragraph_idx': 13, 'before_paragraph_idx': 12}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_after': 'Step 3 Fine-tune the weights of the student network on the soft dataset, while modulating the magnitude of each parameter update by the corresponding teacher-confidence in its label. The student network of Step 1 is fine-tuned using samples from the soft dataset Dsw = {(xt,¯yt)} where ¯yt = T (xt). The corresponding uncertainty Σ(xt) of each sample is mapped to a confidence value www∗ = argmin ', 'paragraph_idx': 19, 'before_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_before': 'samples belonging to Dsw = Dw ∪Ds. In practice, we furthermore divide the space of data into several regions and assign each region a ', 'modified_lines': 'separate GP trained on samples from that region. This leads to a better exploration of the data space and makes use of the inherent structure of data. The algorithm called clustered GP gave better results compared to a single GP. See Appendix A for the detailed description and empirical observations which makes the use of multiple GPs reasonable. according to Equation 1 below, and this is then used to determine the step size for each iteration of the stochastic gradient descent (SGD). So, intuitively, for data points where we have true labels, the uncertainty of the teacher is almost zero, which means we have high confidence and a large step-size for 3 Under review as a conference paper at ICLR 2018 updating the parameters. However, for data points where the teacher is not confident, we down-weight the training steps of the student. This means that at these points, we keep the student function as it was trained on the weak data in Step 1. 
More specifically, we update the parameters of the student by training on Dsw using SGD: ', 'original_lines': 'separate GP trained on samples from that region. By this division of space, we take advantage of the knowledge learned by several teachers, each an expert on its specific region of data space. As a nice side-effect, this also solves the scalability issues of GPs in that we can increase the number of regions until the number of points in each region is tractable with a single GP, and train these models in parallel. See Algorithm 2 in Appendix B for the detailed description. according to Equation 1 below, and this is then used to determine the step size for each iteration of SGD. So, intuitively, for data points where we have true labels, the uncertainty of the teacher is almost zero, which means we have high confidence and a large step-size for updating the parameters. However, for data points where the teacher is not confident, we down-weight the training steps of the student. This means that at these points, we keep the student function as it was trained on the weak data in Step 1. More specifically, we update the parameters of the student by training on Dsw using stochastic gradient descent (SGD): ', 'after_paragraph_idx': 20, 'before_paragraph_idx': 18}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where l(·) is the per-example loss, ηt is the total learning rate, N is the size of the soft dataset Dsw, www is the parameters of the student network, and R(.) is the regularization term. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(xt,¯yt)∈Dsw wwwt+1 = wwwt −ηt(∇l(www,xt,¯yt)+∇R(www)) ', 'modified_lines': '', 'original_lines': ' 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': None, 'context_after': '1. WA. The weak annotator, i.e. the unsupervised method used for annotating the unlabeled data. 2. NNW. The student trained only on weak data. ', 'paragraph_idx': 9, 'before_section': None, 'context_before': '3 EXPERIMENTS ', 'modified_lines': 'In this section, we apply FWL first to a toy problem and then to two different real tasks: document ranking and sentiment classification. The neural networks are implemented in TensorFlow (Abadi et al., 2015; Tang, 2016). GPflow (Matthews et al., 2017) is employed for developing the GP modules. For both tasks, we evaluate the performance of our method compared to the following baselines: ', 'original_lines': 'The neural networks in the proposed method are implemented in TensorFlow (Abadi et al., 2015; Tang, 2016). GPflow (Matthews et al., 2017) is employed for developing the GP modules. For both tasks, we evaluate the performance of our method compared to the following baselines: ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EXPERIMENTS', 'after_section': '3 EXPERIMENTS', 'context_after': 'teacher without taking the confidence into account. This baseline is similar to (Veit et al., 2017). examples labeled by teacher using the confidence scores. 3.1 TOY PROBLEM ', 'paragraph_idx': 24, 'before_section': '3 EXPERIMENTS', 'context_before': 'by a fixed value 0 ≤ ω ≤ 1, and fine-tuned on strong data. As an approximation for the optimal value for ω, we have used the mean of η2 of our model (below). ', 'modified_lines': '7. FWL unsuprep. 
The representation in the first step is trained in an unsupervised way1 and the student is trained on examples labeled by the teacher using the confidence scores. 8. FWL \\Σ. The student trained on the weakly labeled data and fine-tuned on examples labeled by 9. FWL. Our FWL model, i.e. the student trained on the weakly labeled data and fine-tuned on In the following, we introduce each task and the results produced for it, more detail about the exact student network and teacher GP for each task are in the appendix. 4 Under review as a conference paper at ICLR 2018 (a) Training student on 100 examples from the weak function. (b) Fitting teacher based on 10 observations from the true function. (c) Fine-tuning the student based on observations from the true function. (d) Fine-tuning the student based on label/confidence from teacher. Figure 2: Toy example: The true function we want to learn is y = sin(x) and the weak function is y = 2sinc(x). ', 'original_lines': '7. FWL \\Σ. The student trained on the weakly labeled data and fine-tuned on examples labeled by 8. FWL. Our FWL model, i.e. the student trained on the weakly labeled data and fine-tuned on In the following, we apply FWL first to a toy problem and then to two different real tasks: document ranking and sentiment classification. We introduce each task and the results produced for it. The student network and teacher GP for each task is given in more detail in the appendix. ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 24}, {'section': 'Abstract', 'after_section': None, 'context_after': 'As can be seen in Figure 2d, FWL by taking into account label confidence, gives a better approximation of the true hidden function. We repeated the above experiment 10 times. The average RMSE with respect to the true function on a set of test points over those 10 experiments for the student, were as follows: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'which is the most common semi-supervised approach (Figure 2c). 2. A teacher-student framework working by the proposed FWL approach. ', 'modified_lines': '', 'original_lines': '4 Under review as a conference paper at ICLR 2018 (a) Training student on 100 examples from the weak function. (b) Fitting teacher based on 10 observations from the true function. (c) Fine-tuning the student based on observations from the true function. (d) Fine-tuning the student based on label/confidence from teacher. Figure 2: Toy example: The true function we want to learn is y = sin(x) and the weak function is y = 2sinc(x). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '3.2 DOCUMENT RANKING ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'by the teacher (blue line in Figure 2d): 0.4143 (best). More details of the neural network and GP along with the specification of the data used in the above ', 'modified_lines': 'experiment are presented in Appendix C and E.1. ', 'original_lines': 'experiment are presented in Appendix D and F.1. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 DOCUMENT RANKING', 'after_section': '3.2 DOCUMENT RANKING', 'context_after': 'task. Given each training sample x as a triple of query q, and two documents d+ and d−, the goal is to learn a function F : {< q,d+,d− >} → R, which maps each data sample x to a scalar output value y indicating the probability of d+ being ranked higher than d− with respect to q. 
The student follows the architecture proposed in (Dehghani et al., layer ψ : {< q, d+, d− >} → Rm maps each input sample to an m- dimensional real-valued vector. In general, besides learning em- beddings for words, function ψ learns to compose word embedding ', 'paragraph_idx': 32, 'before_section': '3.2 DOCUMENT RANKING', 'context_before': 'to learn a representation for long documents and capture the notion of relevance between queries and documents. Furthermore, the size of publicly available datasets with query-document relevance judgments is unfortunately quite small (∼ 250 queries). We employ a state-of-the-art pairwise neural ', 'modified_lines': 'ranker architecture as the student (Dehghani et al., 2017c). In this model, ranking is cast as a regression 1In the document ranking task, as the representation of documents and queries, we use weighted averaging over pretrained embeddings of their words based on their inverse document frequency (Dehghani et al., 2017c). In the sentiment analysis task, we use skip-thoughts vectors(Kiros et al., 2015) 5 Under review as a conference paper at ICLR 2018 Table 1: Performance of FWL approach and baseline methods for ranking task. IJi indicates that the improvements with respect to the baseline i are statistically significant at the 0.05 level using the paired two-tailed t-test with Bonferroni correction. Robust04 ClueWeb nDCG@20 MAP nDCG@20 Method 1 WABM25 2 NNW (Dehghani et al., 2017c) 3 NNS 4 NNS+/W 5 NNW→S 6 NNWω→S MAP 0.2503IJ37 0.2702IJ137 0.1790 0.4102IJ37 0.4290IJ137 0.3519 0.2763IJ1237 0.2810IJ1237 0.2899IJ123457 0.4330IJ1237 0.4372IJ1237 0.4431IJ123457 0.1021IJ37 0.1297IJ137 0.0782 0.1354IJ1237 0.1346IJ1237 0.1320IJ12347 0.2070IJ37 0.2201IJ137 0.1730 0.2319IJ1237 0.2317IJ1237 0.2309IJ12347 7 8 9 FWLunsuprep FWL \\Σ FWL 0.2211IJ37 0.2980IJ123457 0.3124IJ12345678 0.3700IJ37 0.4516IJ123457 0.4607IJ12345678 0.0831IJ37 0.1386IJ123457 0.1472IJ12345678 0.1964IJ37 0.2340IJ123457 0.2453IJ12345678 2017c). The first layer of the network, i.e. representation learning ', 'original_lines': 'ranker architecture as the student (Dehghani et al., 2017). In this model, ranking is cast as a regression 2017). The first layer of the network, i.e. representation learning ', 'after_paragraph_idx': 32, 'before_paragraph_idx': 32}, {'section': '1 WABM252 NNW (Dehghani et al., 2017c)3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'after_section': '1 WABM252 NNW (Dehghani et al., 2017c)3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'context_after': 'The teacher is implemented by clustered GP algorithm. See Ap- The weak annotator is BM25 (Robertson & Zaragoza, 2009), a well-known unsupervised method for scoring query-document pairs based on statistics of the matched RankerWeightsCompositionalityEmbedding Under review as a conference paper at ICLR 2018 Method 3 NNS 4 NNS+/W 5 NNW→S 6 NNWω→S FWL \\Σ FWL The student for the sentiment classification task is a convolutional model which has been shown to perform best on the dataset we used (Deriu et al., 2017; Severyn & Moschitti, 2015a;b; Deriu et al., ', 'paragraph_idx': 34, 'before_section': '1 WABM252 NNW (Dehghani et al., 2017c)3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'context_before': 'connected feed-forward network with a sigmoidal output unit to predict the probability of ranking d+ higher than d−. The general schema of the student is illustrated in Figure 3. More details are ', 'modified_lines': 'provided in Appendix B.1. pendix C for more details. Figure 3: The student for the docu- ment ranking task. terms. More details are provided in Appendix D.1. 
Description of the data with weak labels and data with true labels as well as the setup of the document- ranking experiments is presented in Appendix E.2 in more details. Results and Discussions We conducted k-fold cross validation on Ds (the strong data) and report two standard evaluation metrics for ranking: mean average precision (MAP) of the top-ranked 1,000 documents and normalized discounted cumulative gain calculated for the top 20 retrieved documents (nDCG@20). Table 1 shows the performance on both datasets. As can be seen, FWL provides a significant boost on the performance over all datasets. In the ranking task, the student is designed in particular to be trained on weak annotations (Dehghani et al., 2017c), hence training the network only on weak supervision, i.e. NNW performs better than NNS. This can be due to the fact that ranking is a complex task requiring many training samples, while relatively few data with true labels are available. Alternating between strong and weak data during training, i.e. NNS+/W seems to bring little (but statistically significant) improvement. However, we can gain better results by the typical fine-tuning strategy, NNW→S. Comparing the performance of FWLunsuprep to FWL indicates that, first of all learning the representation of the input data downstream of the main task leads to better results compared to a task-independent unsupervised or self-supervised way. Also the dramatic drop in the performance compared to the FWL, emphasizes on the importance of the preretraining the student on weakly labeled data. We can gain improvement by fine-tuning the NNW using labels generated by teacher without considering their confidence score, i.e. FWL \\Σ. This means we just augmented the fine-tuning process by generating a fine-tuning set using teacher which is better than Ds in terms of quantity and Dw in terms of quality. This baseline is equivalent to setting β = 0 in Equation 1. However, we see a big jump in performance when we use FWL to include the estimated label quality from the teacher, leading to the best overall results. 3.3 SENTIMENT CLASSIFICATION In sentiment classification, the goal is to predict the sentiment (e.g., positive, negative, or neutral) of a sentence. Each training sample x consists of a sentence s and its sentiment label ˜y. 6 Table 2: Performance of the proposed FWL approach and baseline methods for sentiment classification task. IJi indicates that the improvements with respect to the baseline#i are statistically significant, at the 0.05 level using the paired two-tailed t-test, with Bonferroni correction. SemEval-14 SemEval-15 1 WALexicon 2 NNW 7 8 9 10 FWLunsuprep SemEvalBest 0.5141 0.6719IJ137 0.6307IJ1 0.7032IJ1237 0.7080IJ1237 0.7166IJ12347 0.6588 IJ13 0.7202 IJ123457 0.7470 IJ12345678 0.4471 0.5606IJ1 0.5811IJ12 0.6319IJ1237 0.6441IJ1237 0.6603IJ123457 0.6954IJ123 0.6590IJ123457 0.6830IJ12345678 0.7162 (Rouvier & Favre, 2016) 0.6618 (Deriu et al., 2016) Figure 4: The student for the senti- ment classification task. ', 'original_lines': 'provided in Appendix C.1. pendix D for more details. terms. More details are provided in Appendix E.1. Figure 3: The student for the docu- ment ranking task. 5 Table 1: Performance of the proposed FWL approach and baseline methods for ranking task. IJi indicates that the improve- ments with respect to the baseline i are statistically significant at the 0.05 level using the paired two-tailed t-test with Bonferroni correction. 
1 WABM25 2 NNW (Dehghani et al., 2017) 7 8 Robust04 ClueWeb nDCG@20 MAP nDCG@20 MAP 0.2503IJ3 0.2702IJ13 0.1790 0.4102IJ3 0.4290IJ13 0.3519 0.2763IJ123 0.2810IJ123 0.2899IJ12345 0.4330IJ123 0.4372IJ123 0.4431IJ12345 0.1021IJ3 0.1297IJ13 0.0782 0.1354IJ123 0.1346IJ123 0.1320IJ1234 0.2070IJ3 0.2201IJ13 0.1730 0.2319IJ123 0.2317IJ123 0.2309IJ1234 0.2980IJ12345 0.3124IJ1234567 0.4516IJ12345 0.4607IJ1234567 0.1386IJ12345 0.1472IJ1234567 0.2340IJ12345 0.2453IJ1234567 Description of the data with weak labels and data with true labels as well as setups of the document- ranking experiments is presented in Appendix F.2 in more details. Results and Discussions We conducted k-fold cross validation on Ds (the strong data) and report two standard evaluation metrics for ranking: mean average precision (MAP) of the top-ranked 1000 documents and normalized discounted cumulative gain calculated for the top 20 retrieved documents (nDCG@20). Table 1 shows the performance on both datasets. As can be seen, FWL provides a significant boost on the performance over all datasets. In the ranking task, the student is designed in particular to be trained on weak annotations (Dehghani et al., 2017), hence training the network only on weak supervision, i.e. NNW performs better than NNS. This can be due to the fact that ranking is a complex task requiring many training samples, while relatively few data with true labels are available. Alternating between strong and weak data during training, i.e. NNS+/W seems to bring little (but statistically significant) improvement. However, we can gain better results by the typical fine-tuning strategy, NNW→S. We can gain improvement by fine-tuning the NNW using labels generated by teacher without considering their confidence score, i.e. FWL \\Σ. This means we just augmented the fine-tuning process by generating a fine-tuning set using teacher which is better than Ds in terms of quantity and Dw in terms of quality. This baseline is equivalent to setting β = 0 in Equation 1. However, we see a big jump in performance when we use FWL to include the estimated label quality from the teacher, leading to the best overall results. 3.3 SENTIMENT CLASSIFICATION In sentiment classification, the goal is to predict the sentiment (e.g., positive, negative, or neutral) of a sentence. Each training sample x consists of a sentence s and its sentiment label ˜y. ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 34}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': None, 'context_after': 'The weak annotator is a simple unsupervised lexicon-based method (Hamdan et al., 2013; Kiritchenko et al., 2014), which estimate a distribution over sentiments for each sentence, based on sentiment labels Specification of the data with weak labels and data with true labels along with the detailed experimental Results and Discussion We report Macro-F1, the official SemEval metric, in Table 2. We see that the For this task, since the amount of data with true labels are larger compared to the ranking task, the performance of NNS is acceptable. Alternately sampling from weak and strong data gives better results. Pretraining on weak labels then fine-tuning the network on true labels, further improves the performance. Weighting the gradient updates from weak labels during pretraining and fine-tuning the Besides the baselines, we also report the best performing systems which are also convolution-based models (Rouvier & Favre 2016 on SemEval-14; Deriu et al. 2016 on SemEval-15). 
Using FWL and ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'sentence to a matrix S ∈ Rm×|s|, followed by a series of 1d convolutional layers with max-pooling. The representation layer is followed by feed-forward layers and a softmax output layer which returns the probability distribution over all three classes. Figure 4 presents the general schema of the architecture ', 'modified_lines': 'of the student. See Appendix B.2 for more details. The teacher for this task is modeled by a GP. See Appendix C for more details. of its terms. More details are provided in Appendix D.2. setup are given in Appendix E.3. proposed FWL is the best performing approach. network with true labels, i.e. NNWω→S seems to work quite well in this task. For this task, like ranking task, learning the representation in an unsupervised task independent fashion, i.e. FWLunsuprep, does not lead to a good results compared to the FWL. Similar to the ranking task, fine-tuning NNS based on labels generated by GP instead of data with true labels, regardless of the confidence score, works better than standard fine-tuning. ', 'original_lines': 'of the student. See Appendix C.2 for more details. The teacher for this task is modeled by a GP. See Appendix D for more details. of its terms. More details are provided in Appendix E.2. setups are given in Appendix F.3. proposed FWL is the best performing among all the baselines. 6 Under review as a conference paper at ICLR 2018 Table 2: Performance of the proposed FWL approach and baseline methods for sentiment classification task. IJi indicates that the improvements with respect to the baseline#i are statistically significant, at the 0.05 level using the paired two-tailed t-test, with Bonferroni correction. Method SemEval-14 SemEval-15 1 WALexicon 2 NNW 3 NNS 4 NNS+/W 5 NNW→S 6 NNWω→S FWL \\Σ FWL SemEvalBest 7 8 9 0.5141 0.6719IJ13 0.6307IJ1 0.7032IJ123 0.7080IJ123 0.7166IJ1234 0.4471 0.5606IJ1 0.5811IJ12 0.6319IJ123 0.6441IJ123 0.6603IJ12345 0.7202 IJ12345 0.7470 IJ1234567 0.6590IJ12345 0.6830IJ1234567 0.7162 (Rouvier & Favre, 2016) 0.6618 (Deriu et al., 2016) Figure 4: The student for the senti- ment classification task. network with true labels, i.e. NNWω→S seems to work quite well in this task. Similar to the ranking task, fine-tuning NNS based on labels generated by GP instead of data with true labels, regardless of the confidence score, works better than standard fine-tuning. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': None, 'context_after': '4.1 HANDLING THE BIAS-VARIANCE TRADE-OFF As mentioned in Section 2, β is a hyper-parameter that controls the contribution of weak and strong data to the training procedure. In order to investigate its influence, we fixed everything in the model Figure 5 illustrates the performance on the rank- ing (on Robust04 dataset) and sentiment classi- fication tasks (on SemEval14 dataset). For both sentiment classification and ranking, β = 1 gives Figure 5: Effect of different values for β. 4.2 A GOOD TEACHER IS BETTER THAN MANY OBSERVATIONS Figure 6 presents the results of these experiments. In general, for all tasks and both setups, the student learns faster when there is a teacher. One caveat is in the case where we have a very small amount of ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'results on both datasets. 
4 ANALYSIS ', 'modified_lines': 'In this section, we provide further analysis of FWL by investigating the bias-variance trade-off and the learning rate. and ran the fine-tuning step with different values of β ∈ {0.0,0.1,1.0,2.0,5.0} in all the experiments. 7 EmbeddingClassifierEmbeddingConv.Feature MapPooled Repr. Under review as a conference paper at ICLR 2018 (a) Models trained on different amount weak data. (b) Models trained on different amount of strong data. Figure 6: Performance of FWL and the baseline model trained on different amount of data. the best results (higher scores are better). We also experimented on the toy problem with dif- ferent values of β in three cases: 1) having 10 observations from the true function (same setup as Section 3.1), marked as “Toy Data” in the plot, 2) having only 5 observations from the true func- tion, marked as “Toy Data *” in the plot, and 3) having f (x) = x+1 as the weak function, which is an extremely bad approximator of the true function, marked as “Toy Data **” in the plot. For the “Toy Data” experiment, β = 1 turned out to be optimal (here, lower scores are better). However, for “Toy Data *”, where we have an extremely small number of observations from the true function, setting β to a higher value acts as a regularizer by relying more on weak signals, and eventually leads to better generalization. On the other hand, for “Toy Data **”, where the quality of the weak annotator is extremely low, lower values of β put more focus on the true observations. Therefore, β lets us control the bias-variance trade-off in these extreme cases. We now look at the rate of learning for the student as the amount of training data is varied. We performed two types of experiments for all tasks: In the first experiment, we use all the available strong data but consider different percentages of the entire weak dataset. In the second experiment, we fix the amount of weak data and provide the model with varying amounts of strong data. We use standard fine-tuning with similar setups as for the baseline models. Details on the experiments for the toy problem are provided in Appendix E.1. ', 'original_lines': 'and ran the fine-tuning stage with different values of β ∈ {0.0,0.1,1.0,2.0,5.0} in all the experiments. the best results. We also experimented on the toy problem with different values of β in three cases: 1) having 10 observations from the true function (same setup as Section 3.1), marked as “Toy Data” in the plot, 2) having only 5 observations from the true function, marked as “Toy Data *” in the plot, and 3) having f (x) = x + 1 as the weak function, which is an extremely bad approximator of the true function, marked as “Toy Data **” in the plot. For the “Toy Data” experiment, β = 1 turned out to be optimal. However, for “Toy Data *”, where we have an extremely small number of observations from the true function, setting β to a higher value acts as a regularizer by relying more on weak signals, and eventually leads to better generalization. On the other hand, for “Toy Data **”, where the quality of the weak annotator is extremely low, lower values of β put more focus on the true observations. Therefore, β lets us control the bias-variance trade-off in these extreme cases. In this section, we study the rate of learning for the student as the amount of training data is varied. We performed two types of experiments for all tasks: In the first experiment, we use all the available strong data but consider different percentages of the entire weak dataset. 
In the second experiment, we fix 7 EmbeddingClassifierEmbeddingConv.Feature MapPooled Repr. Under review as a conference paper at ICLR 2018 (a) Models trained on different amount weak data. (b) Models trained on different amount of strong data. Figure 6: Performance of FWL and the baseline model trained on different amount of data. the amount of weak data and provide the model with varying amounts of strong data. We use standard fine-tuning with similar setups as for the baseline models1. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '5 RELATED WORK Learning from imperfect labels has been thoroughly studied in the literature (Fr´enay & Verleysen, samples (Brodley & Friedl, 1999). There are some studies showing that weak or noisy labels can be One direction of research focuses on modeling the pattern of the noise or weakness in the labels. For instance, methods that use a generative model to correct weak labels such that a discriminative 6 CONCLUSION ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'The empirical observation of Figure 6 that our model learns more with less data can also be seen as evidence in support of another perspective to FWL, called learning using privileged information (Vapnik ', 'modified_lines': '& Izmailov, 2015). We elaborate more on this connection in Appendix F. 4.3 SENSITIVITY OF THE FWL TO THE QUALITY OF THE WEAK ANNOTATOR Our proposed setup in FWL requires defining a so-called “weak annotator” to provide a source of weak supervision for unlabelled data. In Section 4.1 we discussed the role of parameter β for controlling the bias-variance trade-off by trying two weak annotators for the toy problem. Now, in this section, we study how the quality of the weak annotator may affect the performance of the FWL, for the task of document ranking as a real-world problem. To do so, besides BM25 (Robertson & Zaragoza, 2009), we use three other weak annotators: 8 Under review as a conference paper at ICLR 2018 vector space model (Salton & Yang, 1973) with binary term occurrence (BTO) weighting schema and vector space model with TF-IDF weighting schema, which are both weaker than BM25, and BM25+RM3 (Abdul-jaleel et al., 2004) that uses RM3 as the pseudo-relevance feedback method on top of BM25, leading to better labels. Figure 7 illustrates the performance of these four weak an- notators in terms of their mean average precision (MAP) on the test data, versus the performance of FWL given the corresponding weak annotator. As it is expected, the per- formance of FWL depends on the quality of the employed weak annotator. The percentage of improvement of FWL over its corresponding weak annotator on the test data is also presented in Figure 7. As can be seen, the better the performance of the weak annotator is, the less the improvement of the FWL would be. Figure 7: Performance of FWL versus perfor- mance of the corespondence weak annotator in the document ranking task, on Robust04 dataset. 4.4 FROM MODIFYING THE LEARNING RATE TO WEIGHTED SAMPLING FWL provides confidence score based on the certainty asso- ciated with each generated label ¯yt, given sample xt ∈ Dsw. 
We can translate the confidence score as how likely includ- ing (xt,¯yt) in the training set for the student model improves the performance, and rather than using this score as the mul- tiplicative factor in the learning rate, we can use it to bias sampling procedure of mini-batches so that the frequency of training samples are proportional to the confidence score of their labels. We design an experiment to try FWL with this setup (FWLs), in which we keep the architectures of the stu- dent and the teacher and the procedure of the first two steps of the FWL fixed, but we changed the step 3 as follows: Given the soft dataset Dsw, consisting of xt, its label ¯yt and the associated confidence score generated by the teacher, we normalize the confidence scores over all training samples and set the normalized score of each sample as its probability to be sampled. Afterward, we train the student model by mini-batches sampled from this set with respect to the probabilities associated with each sample, but without considering the original confidence scores in parameter updating. This means the more confident the teacher is about the generated label for each sample, the more chance that sample has to be seen by the student model. Figure 8: Performance of FWL and FWLs with respect to different batch of data for the task of doc- ument ranking (Robust04 dataset) and sentiment classification (SemEval14 dataset). Figure 8 illustrates the performance of both FWL and FWLs trained on different amount of data sampled from Dsw, in the document ranking and sentiment classification tasks. As can be seen, compared to FWL, the performance of FWLs increases rapidly in the beginning but it slows down afterward. We have looked into the sampling procedure and noticed that the confidence scores provided by the teacher form a rather skewed distribution and there is a strong bias in FWLs toward sampling from data points that are either in or closed to the points in Ds, as GP has less uncertainty around these points and the confidence scores are high. We observed that the performance of FWLs gets closer to the performance of FWL after many epochs, while FWL had already a log convergence. The skewness of the confidence distribution makes FWLs to have a tendency for more exploitation than exploration, however, FWL has more chance to explore the input space, while it controls the effect of updates on the parameters for samples based on their merit. In this section, we position our FWL approach relative to related work. 2014). The imperfect (weak) signal can come from non-expert crowd workers, be the output of other models that are weaker (for instance with low accuracy or coverage), biased, or models trained on data from different related domains. Among these forms, in the distant supervision setup, a heuristic labeling rule (Deriu et al., 2016; Severyn & Moschitti, 2015b) or function (Dehghani et al., 2017c) 9 Under review as a conference paper at ICLR 2018 which can be relying on a knowledge base (Mintz et al., 2009; Min et al., 2013; Han & Sun, 2016) is employed to devise noisy labels. Learning from weak data sometimes aims at encoding various forms of domain expertise or cheaper supervision from lay annotators. For instance, in the structured learning, the label space is pretty complex and obtaining a training set with strong labels is extremely expensive, hence this class of problems leads to a wide range of works on learning from weak labels (Roth, 2017). 
Indirect supervision is considered as a from of learning from weak labels that is employed in particular in the structured learning, in which a companion binary task is defined for which obtaining training data is easier (Chang et al., 2010; Raghunathan et al., 2016). In the response-based supervision, the model receives feedback from interacting with an environment in a task, and converts this feedback into a supervision signal to update its parameters (Roth, 2017; Clarke et al., 2010; Riezler et al., 2014). Constraint-based supervision is another form of weak supervision in which constraints that are represented as weak label distributions are taken as signals for updating the model parameters. For instance, physics-based constraints on the output (Stewart & Ermon, 2017) or output constraints on execution of logical forms (Clarke et al., 2010). In the proposed FWL model, we can employ these approaches as the weak annotator to provide imperfect labels for the unlabeled data, however, a small amount of data with strong labels is also needed, which put our model in the class of semi-supervised models. In the semi-supervised setup, some ideas were developed to utilize weakly or even unlabeled data. For instance, the idea of self(incremental)-training (Rosenberg et al., 2005), pseudo-labeling (Lee, 2013; Hinton et al., 2014), and Co-training (Blum & Mitchell, 1998) are introduced for augmenting the training set by unlabeled data with predicted labels. Some research used the idea of self-supervised (or unsupervised) feature learning (Noroozi & Favaro, 2016; Dosovitskiy et al., 2016; Donahue et al., 2017) to exploit different labeling that are freely available besides or within the data, and to use them as intrinsic signals to learn general-purpose features. These features, that are learned using a proxy task, are then used in a supervised task like object classification/detection or description matching. As a common approach in semi-supervised learning, the unlabeled set can be used for learning the distribution of the data. In particular for neural networks, greedy layer-wise pre-training of weights using unlabeled data is followed by supervised fine-tuning (Hinton et al., 2006; Deriu et al., 2017; Severyn & Moschitti, 2015b;a; Go et al., 2009). Other methods learn unsupervised encoding at multiple levels of the architecture jointly with a supervised signal (Ororbia II et al., 2015; Weston et al., 2012). Alternatively, some noise cleansing methods have been proposed to remove or correct mislabeled leveraged by modifying the loss function (Reed et al., 2015; Patrini et al., 2017; 2016; Vahdat, 2017) or changing the update rule to avoid imperfections of the noisy data (Malach & Shalev-Shwartz, 2017; Dehghani et al., 2017a;b). model can be trained more effectively (Ratner et al., 2016; Rekatsinas et al., 2017; Varma et al., 2017). Furthermore, methods that aim at capturing the pattern of the noise by inserting an extra layer (Goldberger & Ben-Reuven, 2017) or a separate module tries to infer better labels from noisy ones and use them to supervise the training of the network (Sukhbaatar et al., 2015; Veit et al., 2017; Dehghani et al., 2017a). Our proposed FWL can be categorized in this class as teacher tries to infer better labels and provide certainty information which is incorporated as the update rule for the student model. ', 'original_lines': '& Izmailov, 2015). We elaborate more on this connection in Appendix A. 2014). 
In the semi-supervised setup, some ideas were developed to utilize weakly or even unlabeled data. For instance, the idea of self-training (Rosenberg et al., 2005), pseudo-labeling (Lee, 2013; Hinton et al., 2015), and Co-training (Blum & Mitchell, 1998) are introduced for augmenting the training set by unlabeled data with predicted labels. As a common approach in semi-supervised learning, the unlabeled set can be used for learning the distribution of the data. In particular for neural networks, greedy layer-wise pre-training of weights using unlabeled data is followed by supervised fine- tuning (Hinton et al., 2006; Deriu et al., 2017; Severyn & Moschitti, 2015b;a; Go et al., 2009). Other methods learn unsupervised encoding at multiple levels of the architecture jointly with a supervised signal (Ororbia II et al., 2015; Weston et al., 2012). On the other hand, some noise cleansing methods were proposed to remove or correct mislabeled leveraged by employing a particular architecture or defining a proper loss function to avoid over-fitting to imperfections of the training data (Dehghani et al., 2017; Patrini et al., 2016; Beigman & Klebanov, 2009; Zeng et al., 2015; Bunescu & Mooney, 2007). model can be trained more effectively (Ratner et al., 2016; Rekatsinas et al., 2017; Varma et al., 2017). Furthermore, methods that aim to capture the pattern of the noise by inserting an extra layer or a separate module (Sukhbaatar et al., 2015; Veit et al., 2017) tries to infer better labels from noisy ones and use them to supervise the training of the network. Our proposed method can be categorized in this class. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EXPERIMENTS', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 25, 'before_section': None, 'context_before': 'to document ranking and sentiment classification, and empirically verified that FWL speeds up the training process and improves over state-of-the-art semi-supervised alternatives. ', 'modified_lines': '10 ', 'original_lines': '1Details on the experiments for the toy problem is provided in Appendix F.1 8 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke. Fujii, Alexis Boukouvalas, Pablo Le´on-Villagr´a, Zoubin Ghahramani, and James Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(40):1–6, 2017. of Words and Phrases and their Compositionality. In NIPS ’13, pp. 3111–3119, 2013. Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807–814, 2010. ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'David Lopez-Paz, L´eon Bottou, Bernhard Sch¨olkopf, and Vladimir Vapnik. Unifying distillation and ', 'modified_lines': 'privileged information. In ICLR’16, 2016. arXiv preprint arXiv:1511.03643. Eran Malach and Shai Shalev-Shwartz. Decoupling” when to update” from” how to update”. In NIPS2017, 2017. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed Representations Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. Distant supervision for relation extraction with an incomplete knowledge base. In HLT-NAACL, pp. 777–782, 2013. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. Distant supervision for relation extraction without labeled data. In ACL, pp. 
1003–1011, 2009. Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In ', 'original_lines': 'privileged information. arXiv preprint arXiv:1511.03643, 2015. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed Representations Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2017-12-25 21:02:14
ICLR.cc/2018/Conference | ByCIPkJXM | SklvwyJQM | [] | 2017-12-25 21:02:16
ICLR.cc/2018/Conference | SklvwyJQM | BkRdDy17z | [] | 2017-12-25 21:02:45
ICLR.cc/2018/Conference | BkRdDy17z | H1m07GWAW | [] | 2018-01-25 15:40:18
ICLR.cc/2018/Conference | H1m07GWAW | r1cBrf0DM |
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'or source of noise in the labels. In this paper, we argue that treating weakly-labeled samples uniformly (i.e. each weak sample contributes equally to the final classifier) ignores potentially valuable information of the label quality. Instead, we propose Fidelity-Weighted Learning (FWL), a Bayesian semi-supervised approach that leverages a small amount of data with true labels to generate a larger training set with confidence-weighted weakly-labeled samples, which can then be used to modulate the fine-tuning process based on the fidelity (or quality) of each weak sample. By directly modeling the inaccuracies introduced by the weak annotator in this way, we can control the extent to which we make use of this ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'observations from the true function or distribution (which we call strong data). Indeed, it has recently been shown that a small amount of expert-labeled data can be augmented in such a way by a large set of raw data, with labels coming from a heuristic function, to train a more accurate neural ranking ', 'modified_lines': 'model (Dehghani et al., 2017d). The downside is that such approaches are oblivious to the amount 1 Published as a conference paper at ICLR 2018 (a) Step 1 (b) Step 2 (c) Step 3 Figure 1: Illustration of Fidelity-Weighted Learning: Step 1: Pre-train student on weak data, Step 2: Fit teacher to observations from the true function, and Step 3: Fine-tune student on labels generated by teacher, taking the confidence into account. Red dotted borders and blue solid borders depict components with trainable and non-trainable parameters, respectively. ', 'original_lines': 'model (Dehghani et al., 2017c). The downside is that such approaches are oblivious to the amount 1 Under review as a conference paper at ICLR 2018 (a) Step 1 (b) Step 2 (c) Step 3 Figure 1: Illustration of Fidelity-Weighted Learning: Step 1: Pre-train student on weak data, Step 2: Fit teacher to observations from the true function, and Step 3: Fine-tune student on labels generated by teacher, taking the confidence into account. Red dotted borders and blue solid borders depict components with trainable and non-trainable parameters, respectively. ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 4}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': None, 'context_after': 'The main goal of this step is to learn a task dependent representation of the data as well as pretraining the student. The student function is a neural network consisting of two parts. The first part ψ(.) learns ', 'paragraph_idx': 12, 'before_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_before': 'quality estimated using the teacher (Equation 1). ', 'modified_lines': 'Our proposed setup comprises a neural network called the student and a Bayesian function approximator called the teacher. The training process consists of three phases which we summarize in Algorithm 1 and Figure 1. Step 1 Pre-train the student on Dw using weak labels generated by the weak annotator. ', 'original_lines': 'Step 1 Pre-train the student on Dw using weak labels generated with the weak annotator. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 11}, {'section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'after_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_after': 'representation, estimated w.r.t the strong data. 
We explain the finer details of the GP in Appendix C, and just present the overall description here. A prior mean and co-variance function is chosen for GP. The learned embedding function ψ(·) in Step 1 is then used to map the data samples to dense vectors as input to the GP. We use the learned representation by the student in the previous step to compensate lack of data in Ds and the teacher can enjoy the learned knowledge from the large quantity of the The GP is trained on the samples from Ds to learn the posterior mean mpost (used to generate soft labels) and posterior co-variance Kpost(.,.) (which represents label uncertainty). We then create the ', 'paragraph_idx': 16, 'before_section': '2 FIDELITY-WEIGHTED LEARNING (FWL)', 'context_before': 'dent representation ψ(.) and then use the teacher to generate a soft dataset Dsw consisting of (cid:104)sample,predicted label, confidence(cid:105) for all data samples. ', 'modified_lines': 'We use a Gaussian process as the teacher to capture the label uncertainty in terms of the student weakly annotated data. This way, we also let the teacher see the data through the lens of the student. ', 'original_lines': 'We use a Gaussian process as teacher to capture the label uncertainty in terms of the student weakly annotated data. This way, we also let the teacher to see the data through the lens of the student. ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 15}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '9. FWL. Our FWL model, i.e. the student trained on the weakly labeled data and fine-tuned on In the following, we introduce each task and the results produced for it, more detail about the exact student network and teacher GP for each task are in the appendix. 4 (a) Training student on 100 examples from the weak function. ', 'paragraph_idx': 5, 'before_section': None, 'context_before': 'student is trained on examples labeled by the teacher using the confidence scores. 8. FWL \\Σ. The student trained on the weakly labeled data and fine-tuned on examples labeled by ', 'modified_lines': 'the teacher without taking the confidence into account. This baseline is similar to (Veit et al., 2017). examples labeled by the teacher using the confidence scores. 1In the document ranking task, as the representation of documents and queries, we use weighted averaging over pretrained embeddings of their words based on their inverse document frequency (Dehghani et al., 2017d). In the sentiment analysis task, we use skip-thoughts vectors(Kiros et al., 2015) Published as a conference paper at ICLR 2018 ', 'original_lines': 'teacher without taking the confidence into account. This baseline is similar to (Veit et al., 2017). examples labeled by teacher using the confidence scores. Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 DOCUMENT RANKING', 'after_section': '3.2 DOCUMENT RANKING', 'context_after': 'task. Given each training sample x as a triple of query q, and two documents d+ and d−, the goal is to learn a function F : {< q,d+,d− >} → R, which maps each data sample x to a scalar output value y indicating the probability of d+ being ranked higher than d− with respect to q. 5 Table 1: Performance of FWL approach and baseline methods for ranking task. IJi indicates that the improvements with respect to the baseline i are statistically significant at the 0.05 level using the paired two-tailed t-test with Bonferroni correction. 
', 'paragraph_idx': 32, 'before_section': '3.2 DOCUMENT RANKING', 'context_before': 'to learn a representation for long documents and capture the notion of relevance between queries and documents. Furthermore, the size of publicly available datasets with query-document relevance judgments is unfortunately quite small (∼ 250 queries). We employ a state-of-the-art pairwise neural ', 'modified_lines': 'ranker architecture as the student (Dehghani et al., 2017d). In this model, ranking is cast as a regression Published as a conference paper at ICLR 2018 ', 'original_lines': 'ranker architecture as the student (Dehghani et al., 2017c). In this model, ranking is cast as a regression 1In the document ranking task, as the representation of documents and queries, we use weighted averaging over pretrained embeddings of their words based on their inverse document frequency (Dehghani et al., 2017c). In the sentiment analysis task, we use skip-thoughts vectors(Kiros et al., 2015) Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 32, 'before_paragraph_idx': 32}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'layer ψ : {< q, d+, d− >} → Rm maps each input sample to an m- dimensional real-valued vector. In general, besides learning em- beddings for words, function ψ learns to compose word embedding ', 'paragraph_idx': 4, 'before_section': None, 'context_before': '0.2453IJ12345678 The student follows the architecture proposed in (Dehghani et al., ', 'modified_lines': '2017d). The first layer of the network, i.e. representation learning ', 'original_lines': '2017c). The first layer of the network, i.e. representation learning ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 WABM252 NNW (Dehghani et al., 2017d)3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'after_section': '1 WABM252 NNW (Dehghani et al., 2017d)3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'context_after': 'on weak supervision, i.e. NNW performs better than NNS. This can be due to the fact that ranking is a complex task requiring many training samples, while relatively few data with true labels are available. ', 'paragraph_idx': 42, 'before_section': '1 WABM252 NNW (Dehghani et al., 2017d)3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'context_before': 'documents and normalized discounted cumulative gain calculated for the top 20 retrieved documents (nDCG@20). Table 1 shows the performance on both datasets. As can be seen, FWL provides a significant boost on the performance over all datasets. In the ranking task, the student is designed in ', 'modified_lines': 'particular to be trained on weak annotations (Dehghani et al., 2017d), hence training the network only ', 'original_lines': 'particular to be trained on weak annotations (Dehghani et al., 2017c), hence training the network only ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 42}, {'section': '3 EXPERIMENTS', 'after_section': '3 EXPERIMENTS', 'context_after': 'teacher without considering their confidence score, i.e. FWL \\Σ. This means we just augmented the fine-tuning process by generating a fine-tuning set using teacher which is better than Ds in terms of quantity and Dw in terms of quality. This baseline is equivalent to setting β = 0 in Equation 1. However, ', 'paragraph_idx': 23, 'before_section': None, 'context_before': 'strategy, NNW→S. 
Comparing the performance of FWLunsuprep to FWL indicates that, first of all learning the representation of the input data downstream of the main task leads to better results compared to a task-independent unsupervised or self-supervised way. Also the dramatic drop in the ', 'modified_lines': 'performance compared to the FWL, emphasizes the importance of the preretraining the student on weakly labeled data. We can gain improvement by fine-tuning the NNW using labels generated by the ', 'original_lines': 'performance compared to the FWL, emphasizes on the importance of the preretraining the student on weakly labeled data. We can gain improvement by fine-tuning the NNW using labels generated by ', 'after_paragraph_idx': 23, 'before_paragraph_idx': None}, {'section': '1 WALexicon2 NNW3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'after_section': '1 WALexicon2 NNW3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'context_after': 'labels generated by GP instead of data with true labels, regardless of the confidence score, works better than standard fine-tuning. ', 'paragraph_idx': 52, 'before_section': '1 WALexicon2 NNW3 NNS4 NNS+/W5 NNW→S6 NNWω→S', 'context_before': 'performance. Weighting the gradient updates from weak labels during pretraining and fine-tuning the network with true labels, i.e. NNWω→S seems to work quite well in this task. For this task, like ranking task, learning the representation in an unsupervised task independent fashion, i.e. FWLunsuprep, does ', 'modified_lines': 'not lead to good results compared to the FWL. Similar to the ranking task, fine-tuning NNS based on ', 'original_lines': 'not lead to a good results compared to the FWL. Similar to the ranking task, fine-tuning NNS based on ', 'after_paragraph_idx': 52, 'before_paragraph_idx': 52}, {'section': '4 ANALYSISIn this section, we provide further analysis of FWL by investigating the bias-variance trade-off and thelearning rate.', 'after_section': None, 'context_after': '7 EmbeddingClassifierEmbeddingConv.Feature MapPooled Repr. (a) Models trained on different amount weak data. ', 'paragraph_idx': 54, 'before_section': '4 ANALYSISIn this section, we provide further analysis of FWL by investigating the bias-variance trade-off and thelearning rate.', 'context_before': '4.1 HANDLING THE BIAS-VARIANCE TRADE-OFF ', 'modified_lines': 'As mentioned in Section 2, β is a hyperparameter that controls the contribution of weak and strong data to the training procedure. In order to investigate its influence, we fixed everything in the model and ran the fine-tuning step with different values of β ∈ {0.0,0.1,1.0,2.0,5.0} in all the experiments. Published as a conference paper at ICLR 2018 ', 'original_lines': 'As mentioned in Section 2, β is a hyper-parameter that controls the contribution of weak and strong data to the training procedure. In order to investigate its influence, we fixed everything in the model and ran the fine-tuning step with different values of β ∈ {0.0,0.1,1.0,2.0,5.0} in all the experiments. Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 54}, {'section': '5 RELATED WORK', 'after_section': None, 'context_after': '9 which can be relying on a knowledge base (Mintz et al., 2009; Min et al., 2013; Han & Sun, 2016) is employed to devise noisy labels. ', 'paragraph_idx': 66, 'before_section': '5 RELATED WORK', 'context_before': '2014). 
The imperfect (weak) signal can come from non-expert crowd workers, be the output of other models that are weaker (for instance with low accuracy or coverage), biased, or models trained on data from different related domains. Among these forms, in the distant supervision setup, a heuristic ', 'modified_lines': 'labeling rule (Deriu et al., 2016; Severyn & Moschitti, 2015b) or function (Dehghani et al., 2017d) Published as a conference paper at ICLR 2018 ', 'original_lines': 'labeling rule (Deriu et al., 2016; Severyn & Moschitti, 2015b) or function (Dehghani et al., 2017c) Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 66}, {'section': '5 RELATED WORK', 'after_section': '5 RELATED WORK', 'context_after': 'learning, in which a companion binary task is defined for which obtaining training data is easier (Chang et al., 2010; Raghunathan et al., 2016). In the response-based supervision, the model receives feedback from interacting with an environment in a task, and converts this feedback into a supervision signal ', 'paragraph_idx': 70, 'before_section': '5 RELATED WORK', 'context_before': 'supervision from lay annotators. For instance, in the structured learning, the label space is pretty complex and obtaining a training set with strong labels is extremely expensive, hence this class of problems leads to a wide range of works on learning from weak labels (Roth, 2017). Indirect supervision ', 'modified_lines': 'is considered as a form of learning from weak labels that is employed in particular in the structured ', 'original_lines': 'is considered as a from of learning from weak labels that is employed in particular in the structured ', 'after_paragraph_idx': 70, 'before_paragraph_idx': 70}, {'section': '5 RELATED WORK', 'after_section': '5 RELATED WORK', 'context_after': 'learn general-purpose features. These features, that are learned using a proxy task, are then used in a supervised task like object classification/detection or description matching. ', 'paragraph_idx': 71, 'before_section': '5 RELATED WORK', 'context_before': 'and Co-training (Blum & Mitchell, 1998) are introduced for augmenting the training set by unlabeled data with predicted labels. Some research used the idea of self-supervised (or unsupervised) feature learning (Noroozi & Favaro, 2016; Dosovitskiy et al., 2016; Donahue et al., 2017) to exploit different ', 'modified_lines': 'labelings that are freely available besides or within the data, and to use them as intrinsic signals to ', 'original_lines': 'labeling that are freely available besides or within the data, and to use them as intrinsic signals to ', 'after_paragraph_idx': 71, 'before_paragraph_idx': 71}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'One direction of research focuses on modeling the pattern of the noise or weakness in the labels. For instance, methods that use a generative model to correct weak labels such that a discriminative ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'samples (Brodley & Friedl, 1999). There are some studies showing that weak or noisy labels can be leveraged by modifying the loss function (Reed et al., 2015; Patrini et al., 2017; 2016; Vahdat, 2017) or changing the update rule to avoid imperfections of the noisy data (Malach & Shalev-Shwartz, 2017; ', 'modified_lines': 'Dehghani et al., 2017b;c). ', 'original_lines': 'Dehghani et al., 2017a;b). 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'better labels and provide certainty information which is incorporated as the update rule for the student model. ', 'paragraph_idx': 4, 'before_section': None, 'context_before': '2017). Furthermore, methods that aim at capturing the pattern of the noise by inserting an extra layer (Goldberger & Ben-Reuven, 2017) or a separate module tries to infer better labels from noisy ones and use them to supervise the training of the network (Sukhbaatar et al., 2015; Veit et al., 2017; ', 'modified_lines': 'Dehghani et al., 2017b). Our proposed FWL can be categorized in this class as the teacher tries to infer ', 'original_lines': 'Dehghani et al., 2017a). Our proposed FWL can be categorized in this class as teacher tries to infer ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 EXPERIMENTS', 'after_section': None, 'context_after': 'The compositionality function (cid:12) projects a set of n embedding-weighting pairs to an m- dimensional representa- tion, independent from the value of n: ', 'paragraph_idx': 25, 'before_section': None, 'context_before': 'i denote the ith term in query q respectively document d. The embedding function ε maps each term to a dense m- dimensional real value vector, which is learned during the training phase. The weighting function ω assigns a weight to each term in the vocabulary. It has been shown that ω simulates the effect of inverse document ', 'modified_lines': 'frequency (IDF), which is an important feature in information retrieval (Dehghani et al., 2017d). ', 'original_lines': 'frequency (IDF), which is an important feature in information retrieval (Dehghani et al., 2017c). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Setup For the evaluation of the whole model, we conducted a 3-fold cross-validation. However, for each dataset, we first tuned all the hyper-parameters of the student in the first step on the set with true labels using batched GP ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'documents judged as non-relevant and form pairwise combinations among them. Data with weak labels We create a query set Q using the unique queries appearing in the AOL query logs (Pass ', 'modified_lines': 'et al., 2006). This query set contains web queries initiated by real users in the AOL search engine that were sampled from a three-month period from March 2006 to May 2006. We applied standard pre-processing Dehghani et al. (2017d;a) on the queries: We filtered out a large volume of navigational queries containing URL substrings (“http”, “www.”, “.com”, “.net”, “.org”, “.edu”). We also removed all non-alphanumeric characters from the queries. For each dataset, we took queries that have at least ten hits in the target corpus using our weak annotator method. Applying all these steps, We collect 6.15 million queries to train on in Robust04 and 6.87 million queries for ClueWeb. To prepare the weakly labeled training set Dw, we take the top 1,000 retrieved documents using BM25 for each query from training query set Q, which in total leads to ∼ |Q|×106 training samples. ', 'original_lines': 'et al., 2006). This query set contains web queries initiated by real users in the AOL search engine that were sampled from a three-month period from March 2006 to May 2006. 
We filtered out a large volume of navigational queries containing URL substrings (“http”, “www.”, “.com”, “.net”, “.org”, “.edu”). We also removed all non- alphanumeric characters from the queries. For each dataset, we took queries that have at least ten hits in the target corpus using our weak annotator method. Applying all these steps, We collect 6.15 million queries to train on in Robust04 and 6.87 million queries for ClueWeb. To prepare the weakly labeled training set Dw, we take the top 1,000 retrieved documents using BM25 for each query from training query set Q, which in total leads to ∼ |Q|×106 training samples. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
| 2018-02-23 22:27:14
ICLR.cc/2018/Conference | H1kVqJtMf | HkU7olKzf |
[{'section': '4 RESULTS FOR DEEP LINEAR NEURAL NETWORKS', 'after_section': '4 RESULTS FOR DEEP LINEAR NEURAL NETWORKS', 'context_after': '4.3 UNIFORM CONVERGENCE, STABILITY AND GENERALIZATION OF EMPIRICAL RISK ', 'paragraph_idx': 28, 'before_section': '4 RESULTS FOR DEEP LINEAR NEURAL NETWORKS', 'context_before': 'gradients at these points of ˆJn(w) and J (w) are close. This implies that a degenerate stationary point of J (w) will also give a near-zero gradient for ˆJn(w), i.e., it is also a stationary point for ˆJn(w). ', 'modified_lines': 'In the proof, we consider the essential multi-layer architecture of the deep linear network, and do not transform it into a linear regression model and directly apply existing results (see Loh & Wainwright (2015) and Negahban et al. (2011)). This is because we care more about deep ReLU networks which cannot be reduced in this way. Our proof technique is more suitable for analyzing the multi-layer neural networks which paves a way for analyzing deep ReLU networks. Also such an analysis technique can reveal the role of network parameters (dimension, norm, etc.) of each weight matrix in the results which may benefit the design of networks. Besides, the obtained results are more consistent with those for deep nonlinear networks (see Sec. 5). ', 'original_lines': '', 'after_paragraph_idx': 28, 'before_paragraph_idx': 28}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Theorem 3. Suppose Assumption 1 on the input datum x holds and the activation functions in a deep neural network are linear. Then there exist two universal constants cf (cid:48) and cf such that if n ≥ cf (cid:48) max(l3r4 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'risk easily. In this subsection, we first give the uniform convergence rate of empirical risk for deep linear neural networks in Theorem 3, and then use this result to derive the stability and generalization bounds for DNNs in Corollary 1. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Based on VC-dimension techniques, Bartlett & Maass (2003) proved that for a feedforward neural network with polynomial activation functions and one-dimensional output, with probability at least ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'number of nonzero weight parameters s in a DNN model. √ ', 'modified_lines': '', 'original_lines': ' (cid:113) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '2 λ,', 'after_section': None, 'context_after': 'Neyshabur et al. (2015) proved that the Rademacher complexity of a fully-connected neural network model with ReLU activation functions and one-dimensional output is O (cid:0)rl/ n(cid:1) (see Corollary 2 ', 'paragraph_idx': 197, 'before_section': None, 'context_before': 'at the order of O(ld log(d)+l2d) (Bartlett & Maass, 2003). Note that Bartlett & Maass (2003) did not reveal the role of the magnitude of weight in their results. In contrast, our uniform convergence bound is supw∈Ω | ˆJn(w)−J (w)| ≤ O((cid:112)(s log(dn/l) + log(1/ε))/n). So our convergence rate is tighter. ', 'modified_lines': ' (cid:113) 6 Under review as a conference paper at ICLR 2018 √ ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 CONCLUSION', 'after_section': None, 'context_after': 'number n. 
Note that our convergence rate involves r2l since we use squared loss instead of the training error in (Neyshabur et al., 2015). The extra parameters s and d are involved since we consider the parameter space rather than the function hypothesis f in (Neyshabur et al., 2015), which helps people more transparently understand the roles of the network parameters. Besides, the Rademacher complexity cannot be applied to analyzing convergence properties of the empirical risk gradient and stationary points as our techniques. Based on Theorem 3, we proceed to analyze the stability property of the empirical risk and the convergence rate of the generalization error in expectation. Let S = {(x(1), y(1)), · · · , (x(n), y(n))} ', 'paragraph_idx': 94, 'before_section': None, 'context_before': 'the output of the l-th layer in the network model f (w; x, y). The convergence rate in our theorem is O(r2l(cid:112)(s log(d/l) + log(1/ε))/n) and has the same convergence speed O(1/ n) w.r.t. sample ', 'modified_lines': ' √ ', 'original_lines': ' √ √ 6 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'value of the target output y falls in [0, 1]. Similar to the analysis of deep linear neural networks, here we also aim to characterize the empirical risk gradient, stationary points and empirical risk for deep nonlinear neural networks. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the input x to have bounded magnitude. Such an assumption is common. For instance, Tian (2017) and Soudry & Hoffer (2017) both assumed that the entries in the input vector are from Gaussian distribution. We also assume w ∈ Ω as in (Xu & Mannor, 2012). Here we also assume that the entry ', 'modified_lines': '', 'original_lines': ' 7 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ≤ ctlr4where ct is a constant. Then we use the inequality (cid:13)(cid:13)∇2f (w, x)(cid:13)(cid:13)We first consider Qst (cid:44) ∇w(s)', 'after_section': None, 'context_after': 'n − w(k)(cid:13) (cid:13) (cid:13)2 2τ ζ , (k = 1, · · · , m) According to Theorem 5, there is one-to-one correspondence between the non-degenerate stationary points of ˆJn(w) and J (w). Also the corresponding pair has the same non-degenerate index, implying ', 'paragraph_idx': 160, 'before_section': None, 'context_before': 'n and w(k) have the same non-degenerate index and they obey ', 'modified_lines': '(cid:13) (cid:13)w(k) (cid:13) ≤ (cid:114) 512 729 cyl(l + 2) (lcr + 1) cdcr (cid:114) s log(dn/l) + log(4/ε) n with probability at least 1 − ε, where cy, cd and cr are the same parameters in Theorem 4. ', 'original_lines': '(cid:13) (cid:13)w(k) (cid:13) with probability at least 1 − ε, where cy, cd and cr are the same parameters in Theorem 4. s log(dn/l) + log(4/ε) n cyl(l + 2) (lcr + 1) cdcr ≤ (cid:114) 512 729 (cid:114) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 λ,', 'after_section': '2 λ,', 'context_after': '(cid:12) ≥ ζ}. In this way, D can be decomposed into countably components, with each component containing either exactly one or zero non-degenerate stationary point. For each component, the uniform convergence of gradient and the results in differential topology guarantee that if J (w) has no stationary points, then ˆJn(w) also has no stationary points and vise versa. 
', 'paragraph_idx': 196, 'before_section': None, 'context_before': 'To prove Theorems 2 and 5, we first prove the uniform convergence of the empirical Hessian to its population Hessian. Then, we define such a set D = {w ∈ Ω : (cid:107)∇J (w)(cid:107)2 < (cid:15) and ', 'modified_lines': 'inf i ', 'original_lines': 'inf i ', 'after_paragraph_idx': 196, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'remaining is to prove P(E2). We also prove that it has sub-exponential tail associated to the sample number n and the networks parameters and it obeys P(E2) ≤ ε/2 with proper conditions. Then we utilize the uniform convergence of ˆJn(w) to prove the stability and generalization bounds of ˆJn(w) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'supw∈Ω| ˆJn(w)−∇J (w)| > t into E1, E2 and E3 which have the same forms as their counterparts in the proofs of Theorem 1 with the gradient replaced by the loss function. To prove P(E1) ≤ ε/2 and P(E3) = 0, we can use the Lipschitz constant of the loss function and the (cid:15)-net properties. The ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7): ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 CONCLUSION', 'after_section': '7 CONCLUSION', 'context_after': 'S. Mei, Y. Bai, and A. Montanari. The landscape of empirical risk for non-convex losses. Annals of Statistics, 2017. ', 'paragraph_idx': 62, 'before_section': '7 CONCLUSION', 'context_before': 'K. Kawaguchi. Deep learning without poor local minima. In NIPS, pp. 1097–1105, 2016. ', 'modified_lines': 'P. Loh and M. J. Wainwright. Regularized m-estimators with nonconvexity: Statistical and algorithmic theory for local optima. JMLR, 16(Mar):559–616, 2015. ', 'original_lines': '', 'after_paragraph_idx': 63, 'before_paragraph_idx': 61}, {'section': '7 CONCLUSION', 'after_section': None, 'context_after': 'B. Neyshabur, R. Tomioka, and N. Srebro. Norm-based capacity control in neural networks. In COLT, pp. 1376–1401, 2015. ', 'paragraph_idx': 63, 'before_section': '7 CONCLUSION', 'context_before': 'M-estimators with decomposable regularizers. In NIPS, pp. 1348–1356, 2009. ', 'modified_lines': 'S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. In NIPS, 2011. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 63}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'Y. Zhang, P. Liang, and M. Wainwright. Convexified convolutional neural networks. ICML, 2017. 
', 'modified_lines': '12 ', 'original_lines': '11 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 CONCLUSION', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 100, 'before_section': None, 'context_before': 'where λ(cid:15) = {λ1, . . . , λkw } be an (cid:15)-covering net of Bd(1). ', 'modified_lines': '13 ', 'original_lines': '12 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 CONCLUSION', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 100, 'before_section': None, 'context_before': 'if s > t. ', 'modified_lines': '14 ', 'original_lines': '13 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '34 (cid:33)', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 234, 'before_section': None, 'context_before': 'i,j:i(cid:54)=j ', 'modified_lines': '15 ', 'original_lines': '14 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 RESULTS FOR DEEP NONLINEAR NEURAL NETWORKS', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 36, 'before_section': None, 'context_before': '(cid:1) , ', 'modified_lines': '16 ', 'original_lines': '15 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ≤ cy,', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 251, 'before_section': '2 ≤ cy,', 'context_before': 'i ', 'modified_lines': '17 ', 'original_lines': '16 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 251}, {'section': '2 λ,', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 208, 'before_section': None, 'context_before': '(cid:1) (x ⊗ e) , (7) ', 'modified_lines': '18 ', 'original_lines': '17 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 3', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 317, 'before_section': '4 3', 'context_before': 'p ', 'modified_lines': '19 ', 'original_lines': '18 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 317}, {'section': '4 3', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 317, 'before_section': '4 3', 'context_before': 'p ', 'modified_lines': '20 ', 'original_lines': '19 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 317}, {'section': '2 (cid:1) ,', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 323, 'before_section': None, 'context_before': 't−1:1 ', 'modified_lines': '21 ', 'original_lines': '20 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ⊗ z1.Then, we consider the case s > t:', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 107, 'before_section': None, 'context_before': '2 ≤ r2. 
', 'modified_lines': '22 ', 'original_lines': '21 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 λ,', 'after_section': None, 'context_after': 's )2(cid:17) ', 'paragraph_idx': 195, 'before_section': None, 'context_before': 'i,k ', 'modified_lines': '23 ', 'original_lines': '22 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 =', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 153, 'before_section': '2 ,', 'context_before': '. ', 'modified_lines': '24 ', 'original_lines': '23 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 152}, {'section': '2 λ,', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 197, 'before_section': '2 λ,', 'context_before': '. ', 'modified_lines': '29 ', 'original_lines': '28 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 197}, {'section': '2 λ,', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 192, 'before_section': None, 'context_before': '38 (cid:107)N (cid:107)2 F . ', 'modified_lines': '38 ', 'original_lines': '37 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ≤ ctlr4where ct is a constant. Then we use the inequality (cid:13)(cid:13)∇2f (w, x)(cid:13)(cid:13)We first consider Qst (cid:44) ∇w(s)', 'after_section': '2 ≤ ctlr4where ct is a constant. Then we use the inequality (cid:13)(cid:13)∇2f (w, x)(cid:13)(cid:13)We first consider Qst (cid:44) ∇w(s)', 'context_after': '(cid:13) (cid:13) ', 'paragraph_idx': 160, 'before_section': None, 'context_before': 'We also bound ', 'modified_lines': '(cid:13) (cid:13) 2 ', 'original_lines': 'F ', 'after_paragraph_idx': 160, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Final result: Thus we can bound (cid:107)∇w∇xf (w, x)(cid:107)op ≤ (cid:107)∇w∇xf (w, x)(cid:107)F ≤ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1 28 cydj−1cr + ', 'modified_lines': '', 'original_lines': ' 1 28 cycr 1 28 cycr . ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'jw∈[n(cid:15) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1 n ', 'modified_lines': '', 'original_lines': ' (cid:15) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Note that we have ∂f (w,x) ∂w(j) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(v(j−1))T (cid:17) (cid:17) G(u(i))Ai+1 · · · Al(v(l) − y) ', 'modified_lines': '', 'original_lines': ' (cid:16) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-12-21 09:13:02
|
ICLR.cc/2018/Conference
|
HkU7olKzf
|
rkzlEFsTb
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Final result: Thus we can bound (cid:107)∇w∇xf (w, x)(cid:107)op ≤ (cid:107)∇w∇xf (w, x)(cid:107)F ≤ ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1 28 cydj−1cr + ', 'modified_lines': '', 'original_lines': ' 1 28 cycr 1 28 cycr . ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:19) . ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'G(u(j))Aj+1Aj+2 · · · Ai−1(W (i))T (cid:17)(cid:17) (cid:18) ∂f (w, x) (cid:16) ', 'modified_lines': '', 'original_lines': 'vec ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-25 15:42:49
|
ICLR.cc/2018/Conference
|
rkzlEFsTb
|
SkUxg1TvM
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Theorem 1. Suppose Assumption 1 on the input datum x holds and the activation functions in a deep neural network are linear. Then the empirical gradient uniformly converges to the population gradient in Euclidean norm. Specifically, there exist two universal constants cg(cid:48) and cg such that ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We first analyze the convergence of gradients for the DNN empirical and population risks. To our best knowledge, these results are the first ones giving guarantees on gradient convergence, which help better understand the landscape of DNNs and their optimization behavior. The results are stated blow. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 CONCLUSION', 'after_section': '7 CONCLUSION', 'context_after': 'REFERENCES R. Alessandro. Lecture notes of advanced statistical theory I, CMU. http://www.stat.cmu.edu/ ', 'paragraph_idx': 48, 'before_section': '7 CONCLUSION', 'context_before': 'magnitude of the weights is suggested. All the results are consistent with the widely used network architectures in practice. ', 'modified_lines': 'ACKNOWLEDGMENT This work is partially supported by National University of Singapore startup grant R-263-000-C08- 133, Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112, NUS IDS R-263-000-C67-646 and ECRA R-263-000-C87-133. ', 'original_lines': '', 'after_paragraph_idx': 49, 'before_paragraph_idx': 47}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Y. Fyodorov and I. Williams. Replica symmetry breaking condition exposed by random matrix calculation of ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'and topology of manifolds, volume 104. Springer Science & Business Media, 2012. R. Eldan and O. Shamir. The power of depth for feedforward neural networks. In COLT, pp. 907–940, 2016. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'V. N. Vapnik and V. Vapnik. Statistical learning theory, volume 1. Wiley New York, 1998. R. Vershynin. Introduction to the non-asymptotic analysis of random matrices, compressed sensing. Cambridge ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'convergence and critical point analysis. ICML, 2017. 
', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '7 CONCLUSION', 'after_section': '7 CONCLUSION', 'context_after': '(cid:174) l:s+1 ', 'paragraph_idx': 101, 'before_section': '7 CONCLUSION', 'context_before': '= (cid:0)(Bs−1:1x)T ⊗ (Bs−1:1x)(cid:1) ⊗ (cid:0)BT = (cid:0)(Bs−1:1x)(Bs−1:1x)T (cid:1) ⊗ (cid:0)BT ', 'modified_lines': '(cid:173) ', 'original_lines': ' (cid:173) ', 'after_paragraph_idx': 101, 'before_paragraph_idx': 101}, {'section': '7 CONCLUSION', 'after_section': None, 'context_after': 'k F ', 'paragraph_idx': 96, 'before_section': None, 'context_before': '(cid:13) (cid:13) ', 'modified_lines': '2 (cid:13) (cid:13) F ', 'original_lines': ' 2 (cid:13) (cid:13) F ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:13) (cid:13)Qj (cid:13) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': ' . ', 'modified_lines': '', 'original_lines': 'Under review as a conference paper at ICLR 2018 Then we bound each term separately: ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:118) (cid:117) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:13) F ', 'modified_lines': '', 'original_lines': ' 1 28 cycr 1 28 cycr . ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:16) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(v(j−1))T (cid:17) (cid:17) G(u(i))Ai+1 · · · Al(v(l) − y) ', 'modified_lines': '', 'original_lines': ' vec ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-23 00:27:26
|
ICLR.cc/2018/Conference
|
BJk4cgkAb
|
SkH5pgyC-
|
[{'section': '2 METHOD', 'after_section': '2 METHOD', 'context_after': 'where min and max of G and D are taken over the set of generator and discriminator func- tions, respectively. The conventional form of V (G, D) (Goodfellow et al., 2014) is given by ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'max D ', 'modified_lines': 'V (G, D) ', 'original_lines': 'V (G, D), (3) ', 'after_paragraph_idx': 7, 'before_paragraph_idx': None}, {'section': '2 METHOD', 'after_section': '2 METHOD', 'context_after': '∇xf ∗(x) = ', 'paragraph_idx': 8, 'before_section': '2 METHOD', 'context_before': 'and its derivative ', 'modified_lines': '= sigmoid(f ∗(x)), where f ∗(x) = log qdata(x) − log pG(x), (3) ', 'original_lines': '= sigmoid(f ∗(x)), where f ∗(x) = log qdata(x) − log pG(x), (4) ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': '2 METHOD', 'after_section': '2 METHOD', 'context_after': 'can be unbounded or even incomputable. This prompts us to introduce some regularity condition to the derivative of f (x). A particularly successful works in this array are (Qi, 2017; Gulrajani et al., ', 'paragraph_idx': 8, 'before_section': '2 METHOD', 'context_before': '∇xpG(x) ', 'modified_lines': '(4) ', 'original_lines': '(5) ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': '2 METHOD', 'after_section': '2 METHOD', 'context_after': 'where we mean by (cid:107)f (cid:107)Lip the smallest value M such that (cid:107)f (x) − f (x(cid:48))(cid:107)/(cid:107)x − x(cid:48)(cid:107) ≤ M for any x, x(cid:48), with the norm being the (cid:96)2 norm. ', 'paragraph_idx': 8, 'before_section': '2 METHOD', 'context_before': 'V (G, D), ', 'modified_lines': '(5) ', 'original_lines': '(6) ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': '1 (cid:1)', 'after_section': None, 'context_after': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'We would like to comment on the implication of (12). The first term ˆE (cid:2)δhT(cid:3) is the same as the derivative of the weights without normalization. In this light, the second term in the expression can be seen as the regularization term penalizing the first singular components with the adaptive regularization coefficient λ. λ is positive when δ and ¯WSNh are pointing in similar direction, and this prevents the column space of W from concentrating into one particular direction in the course of the training. In other words, spectral normalization prevents the transformation of each layer from becoming to sensitive in one direction. We can also use spectral normalization to devise a new parametrization for the model. Namely, we can split the layer map into two separate train- able components: spectrally normalized map and the spectral norm constant. As it turns out, this parametrization has its merit on its own and promotes the performance of GANs (See Appendix E). ', 'original_lines': 'to comment on the implication of (13). The first term ˆE (cid:2)δhT(cid:3) is the same as the derivative of the weights without normalization. In this light, the second term in the expression can be seen as the regularization term penalizing the first singular components with the adaptive regularization coef- ficient λ. 
λ is positive when δ and ¯WSNh are pointing in similar direction, and this prevents the column space of W from concentrating into one particular direction in the course of the training. In other words, spectral normalization prevents the transformation of each layer from becoming to sen- sitive in one direction. We can also use spectral normalization to devise a new parametrization for the model. Namely, we can split the layer map into two separate trainable components: spectrally normalized map and the spectral norm constant. As it turns out, this parametrization has its merit on its own and promotes the performance of GANs (See Appendix E). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'techniques. To see how our method fares against large dataset, we also applied our method on ILSVRC2012 dataset (ImageNet) (Russakovsky et al., 2015) as well. This section is structured as follows. First, we will discuss the objective functions we used to train the architecture, and then ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'In order to evaluate the efficacy of our experiment and investigate the reason behind its efficacy, we conducted a set of extensive experiments of unsupervised image generation on CIFAR-10 (Torralba ', 'modified_lines': 'et al., 2008) and STL-10 (Coates et al., 2011), and compared our method against other normalization ', 'original_lines': 'et al., 2008) and STL-10 (Coates et al., 2010), and compared our method against other normalization ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'the appendix section for more details of the architectures. For all methods other than WGAN-GP, we used the following standard objective function for the ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'As for the architecture of the discriminator and generator, we used convolutional neural networks. Also, for the evaluation of the spectral norm for the convolutional weight W ∈ Rdout×din×h×w, we treated the operator as a square matrix of dimension dout × (dinhw)2. We trained the parameters of ', 'modified_lines': 'the generator with batch normalization (Ioffe & Szegedy, 2015). We refer the readers to Table 3 in ', 'original_lines': 'the generator with batch normalization (Ioffe & Szegedy, 2015). We refer the readers to Table 4 in ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': '2Note that, since we are conducting the convolution discretely, the spectral norm will depend on the size of ', 'paragraph_idx': 23, 'before_section': '4 EXPERIMENTS', 'context_before': 'tor networks, including: WGAN-GP (Gulrajani et al., 2017), batch-normalization (BN) (Ioffe & Szegedy, 2015), layer normalization (LN) (Ba et al., 2016) and weight normalization (WN) (Sali- mans & Kingma, 2016). In order to evaluate the stand-alone efficacy of the gradient penalty, we also ', 'modified_lines': 'applied the penalty term (28) to the standard adversarial loss of GANs (14). We would refer to this ', 'original_lines': 'applied the penalty term (29) to the standard adversarial loss of GANs (15). We would refer to this ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'of updates for GAN generator were 100K for all experiments, unless otherwise noted. 
Firstly, we inspected the spectral norms of each layer during the training to make sure that our In Figures 1 and 2 we show the inception scores of each method with the settings A–F. We can see that spectral normalization is relatively robust to aggressive learning rates and momentum parame- ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'these 6 settings, A, B, and C are the settings used in previous representative works. The purpose of the settings D, E, and F is to the evaluate the performance of the algorithms implemented with more aggressive learning rates. For the details of the architectures of convolutional networks deployed in ', 'modified_lines': 'the generator and the discriminator, we refer the readers to Table 3 in the appendix section. Number spectral normalization procedure is indeed serving its purpose. As we can see in the Figure 8 in the C.1, the spectral norms of these layers floats around 1–1.05 region throughout the training. Please see Appendix C.1 for more details. ', 'original_lines': 'the generator and the discriminator, we refer the readers to Table 4 in the appendix section. Number Table 1: Hyper-parameter settings we tested in our experiments. †, ‡ and (cid:63) are the hyperparameter settings following Gulrajani et al. (2017), Warde-Farley & Bengio (2017) and Radford et al. (2016), respectively. Setting α A† B‡ C(cid:63) D E F 0.0001 0.0001 0.0002 0.001 0.001 0.001 β1 0.5 0.5 0.5 0.5 0.5 0.9 β2 ndis 0.9 0.999 0.999 0.9 0.999 0.999 5 1 1 5 5 5 spectral normalization procedure is indeed serving its purpose. As we can see in the Figure 8 in the C.1, the spectral norms of these layers floats around 1. ∼ 1.05 region throughout the training. Please see Appendix C.1 for more details. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '6 (a) CIFAR-10 ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'spectral normalization on STL-10, which consists of more diverse examples than CIFAR-10. Best scores of spectral normalization are better than all other methods on both CIFAR-10 and STL-10. ', 'modified_lines': 'GAN-GPWGAN-GPBNLNWNSN012345678Inception scoreABCDEFWGAN-GPLNWNSN0123456789Inception scoreABCDEF Under review as a conference paper at ICLR 2018 ', 'original_lines': 'In Tables 2, we show the inception scores of the different methods with optimal settings on CIFAR- 10 and STL-10 dataset. We see that SN-GANs performed better than all contemporaries on the optimal settings3. SN-GANs performed even better with hinge loss (17). In Figure 11 we show the images produced by the generators trained with WGAN-GP, weight nor- malization, and spectral normalization. SN-GANs were consistently better than GANs with weight normalization in terms of the quality of generated images. To be more precise, as we mentioned in Section 3, the set of images generated by spectral normalization was clearer and more diverse than the images produced by the weight normalization. We can also see that WGAN-GP failed to train good GANs with high learning rates and high momentums (D,E and F). The generated images with GAN-GP, batch normalization and layer normalization is shown in Figure 11a in the appendix section. We compared our algorithm against multiple benchmark methods in Table 2. We also tested the performance of our method on ResNet based GAN in Gulrajani et al. (2017). 
For the details, please 3As for STL-10, we ran SN-GANs over twice time longer the iterations because it did not seem to converge fast enough. Because the optimal setting of SN-GANs (setting B, ndis = 1) is computationally light, this elongated training sequence still completes before WGAN-GP with original iteration size. Under review as a conference paper at ICLR 2018 (a) CIFAR-10 (b) STL-10 Figure 1: Inception scores on CIFAR-10 and STL-10 with different methods and hyperparameters (higher is better). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': 'Method ', 'paragraph_idx': 34, 'before_section': '4 EXPERIMENTS', 'context_before': 'Figure 2: FIDs on CIFAR-10 and STL-10 with different methods and hyperparameters (lower is better). ', 'modified_lines': 'Table 2: Inception scores and FIDs with unsupervised image generation on CIFAR-10. † (Radford et al., 2016) (experimented by Yang et al. (2017)), ‡ (Yang et al., 2017), ∗ (Warde-Farley & Bengio, 2017), †† (Gulrajani et al., 2017) ', 'original_lines': 'Table 2: Inception scores and FIDs with unsupervised image generation on CIFAR-10. † (Dumoulin et al., 2017), ‡ (Radford et al., 2016) (experimented by (Yang et al., 2017)), †† (Warde-Farley & Bengio, 2017),∗ (Yang et al., 2017), ‡‡(Gulrajani et al., 2017) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 33}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'Inception score ', 'paragraph_idx': 28, 'before_section': '4 EXPERIMENTS', 'context_before': '(ours) SN-GANs (ours) SN-GANs. (2x updates) ', 'modified_lines': '(ours) SN-GANs, Eq.(16) (ours) SN-GANs. Eq.(16) (2x updates) ', 'original_lines': '(ours) SN-GANs, Eq.(17) (ours) SN-GANs. (2x updates, Eq.(17)) ', 'after_paragraph_idx': 28, 'before_paragraph_idx': 28}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '8.24±.08 9.04±.12 6.64±.14 7.72±.13 7.86±.08 7.84±.07 8.51±.13 see Appendix section. Please note that all methods listed thereof are all different in both optimization methods and the architecture of the model. Our implementation of our algorithm was able to superior to all the predecessors in the performance. Singular values analysis on the weights of the discriminator D In Figure 4, we show the squared singular values of the weight matrices in the final discriminator D produced by each method ', 'paragraph_idx': 28, 'before_section': None, 'context_before': '38.3 ', 'modified_lines': '(ours) SN-GANs. Eq.(16) (ResNet) DCGAN† LR-GANs‡ Warde-Farley et al.∗ WGAN-GP (ResNet)†† 7.17±.07 In Tables 2, we show the inception scores of the different methods with optimal settings on CIFAR- 10 and STL-10 dataset. We see that SN-GANs performed better than all contemporaries on the optimal settings3. SN-GANs performed even better with hinge loss (16). In Figure 5 we show the images produced by the generators trained with WGAN-GP, weight nor- malization, and spectral normalization. SN-GANs were consistently better than GANs with weight normalization in terms of the quality of generated images. To be more precise, as we mentioned in Section 3, the set of images generated by spectral normalization was clearer and more diverse than the images produced by the weight normalization. We can also see that WGAN-GP failed to train good GANs with high learning rates and high momentums (D,E and F). 
The generated images with GAN-GP, batch normalization and layer normalization is shown in Figure 11 in the appendix section. We compared our algorithm against multiple benchmark methods in Table 2. We also tested the performance of our method on ResNet based GAN in Gulrajani et al. (2017). For the details, please 3As for STL-10, we ran SN-GANs over twice time longer the iterations because it did not seem to converge fast enough. Because the optimal setting of SN-GANs (setting B, ndis = 1) is computationally light, this elongated training sequence still completes before WGAN-GP with original iteration size. 7 GAN-GPWGAN-GPBNLNWNSN101102FIDABCDEFWGAN-GPLNWNSN101102FIDABCDEF Under review as a conference paper at ICLR 2018 4.1.1 ANALYSIS OF SPECTRALLY NORMALIZED GANS ', 'original_lines': '(ours) SN-GANs. (ResNet, Eq.(17)) DCGAN‡ Warde-Farley et al. †† WGAN-GP (ResNet) ‡‡ 7 GAN-GPWGAN-GPBNLNWNSN012345678Inception scoreABCDEFWGAN-GPLNWNSN0123456789Inception scoreABCDEFGAN-GPWGAN-GPBNLNWNSN101102103FIDABCDEFWGAN-GPLNWNSN101102FIDABCDEF Under review as a conference paper at ICLR 2018 4.1.1 ANALYSIS OF SPECTRAL NORMALIZED GANS ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '5 CONCLUSION ', 'paragraph_idx': 37, 'before_section': None, 'context_before': 'images especially in Figure 5b. The images generated with spectral normalization is more diverse and complex than those generated with weight normalization. ', 'modified_lines': 'Training time On CIFAR-10, SN-GANs is a tad slower than weight normalization (about 110 ∼ 120% computational time), but significantly faster than WGAN-GP. As we mentioned in Section 3, WGAN-GP is slower than other methods because WGAN-GP needs to calculate the gradient of gradient norm (cid:107)∇xD(cid:107)2. For STL-10, the computational time of SN-GANs is almost the same as vanilla GANs, because the relative computational cost of the power iteration (17) is negligible when compared to the cost of forward and backward propagation on CIFAR-10 (images size of STL-10 is larger (48 × 48)). Please see Figure 9 in the appendix section for the actual computational time. 4.2 CLASS CONDITIONAL IMAGE GENERATION ON IMAGENET To show that our method remains effective on large high dimen- sional dataset, we also applied our method to the training of class conditional GANs on ILRSVRC2012 dataset with 1000 classes, each consisting of approximately 1300 images, which we com- pressed to 128 × 128 pixels. Regarding the adversarial loss for conditional GANs, we used practically the same formulation used in Mirza & Osindero (2014), except that we replaced the standard GANs loss with hinge loss (16). Please see Appendix B.3 for the details of experimental settings. As we can see in the learning curves in Figure 3, our SN-GANs is the only methods with successful training sequence among all other methods. To our knowledge, our method is the first of its kind in succeeding to produce decent images from ImageNet dataset with a single pair of a discriminator and a generator (Figure 6). To measure the degree of mode-collapse, we followed the footstep of Odena et al. (2017) and computed the intra MS-SSIM Odena et al. (2017) for pairs of independently generated GAN images of each class. . Figure 3: Learning curves of Inception score with different methods. 
', 'original_lines': 'Training time As we mentioned in Section 3, WGAN-GP is slower than other methods because WGAN-GP needs to calculate the gradient of gradient norm (cid:107)∇xD(cid:107)2. On CIFAR-10, spectral normalization is a tad slower than weight normalization (about 110 ∼ 120% computational time), but significantly faster than WGAN-GP. For STL-10, the computational time of spectral normalization is almost the same as vanilla GANs, because the relative computational cost of the power iteration (18) is negligible when compared to the cost of forward and backward propagation on CIFAR-10 (images size of STL-10 is larger (48 × 48)). Please see Figure 9 in the appendix section for the actual computational time. 4.2 CONDITIONAL IMAGE GENERATION ON IMAGENET To show that our method remains effective on large high dimensional dataset, we also applied our method to the training of class conditional GANs on ILRSVRC2012 dataset with 1000 classes, each consisting of approximately 1300 images, which we compressed to 128 × 128 pixels. Regarding the adversarial loss for conditional GANs, we used practically the same formulation used in Mirza & Osindero (2014), except that we replaced the standard GANs loss with hinge loss (17). Please see Appendix B.3 for the details of experimental settings. As we can see in the learning curves in Figure 3, our SN-GANs are the only methods with successful training sequence among all other methods (Baseline indicates the model without any normalization in the discriminator). To our knowledge, our method is the first of its kind in succeeding to produce decent images from ImageNet dataset with a single pair of a discriminator and a generator (Figure 6). To measure the degree of mode-collapse, we followed the footstep of Odena et al. (2016) and computed the Intra-class MSSSIM Odena et al. (2016) for pairs of independently generated GAN images (Table 3). We see that our SN-GAN is suffering less from the mode-collapse than other contemporaries. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'clipping suffers from the same problem as weight normalization and Frobenius normalization. With weight clipping with the truncation value c, the value (cid:107)W x(cid:107)2 for a fixed unit vector x is maximized when the rank of W is again one, and the training will again favor the discriminators that use ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Still another regularization technique is weight clipping introduced by Arjovsky et al. (2017) in their training of Wasserstein GANs. Weight clipping simply truncates each element of weight matrices so that its absolute value is bounded above by a prescribed constant c ∈ R+. Unfortunately, weight ', 'modified_lines': '', 'original_lines': ' 19 02549Index of s0.00.20.40.60.81.0s2SNWN Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.1 SPECTRAL NORMALIZATION', 'after_section': '2.2 FAST APPROXIMATION OF THE SPECTRAL NORM σ(W )', 'context_after': 'regularizer function: where λ > 0 is a balancing coefficient and ˆx is: ˆx := (cid:15)x + (1 − (cid:15)) ˜x where (cid:15) ∼ U [0, 1], x ∼ pdata, ˜x = G(z), z ∼ pz. (30) Using this augmented objective function, Gulrajani et al. (2017) succeeded in training a GAN based on ResNet (He et al., 2016) with an impressive performance. 
The advantage of their method in comparison to spectral normalization is that they can impose local 1-Lipschitz constraint directly on ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'Recently, Gulrajani et al. (2017) introduced a technique to enhance the stability of the training of Wasserstein GANs (Arjovsky et al., 2017). In their work, they endeavored to place K-Lipschitz ', 'modified_lines': 'constraint (5) on the discriminator by augmenting the adversarial loss function with the following λ E ˆx∼p ˆx [((cid:107)∇ ˆxD( ˆx)(cid:107)2 − 1)2], (28) (29) ', 'original_lines': 'constraint (6) on the discriminator by augmenting the adversarial loss function with the following λ E ˆx∼p ˆx [((cid:107)∇ ˆxD( ˆx)(cid:107)2 − 1)2], (29) (31) ', 'after_paragraph_idx': 11, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'with respect to unnormalized weight W as follows: ∂V (G, D(W )) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'method on CIFAR-10 ; weight normalization (WGAN-GP w/ WN), spectral normalization (WGAN- GP w/ SN), and parametrization free (WGAN-GP). ', 'modified_lines': '', 'original_lines': 'F THE GRADIENT OF GENERAL NORMALIZATION METHOD Let us denote ¯W := W/N (W ) to be the normalized weight where N (W ) to be a scalar normalized coefficient (e.g. Spectral norm or Frobenius norm). In general, we can write the derivative of loss ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-26 06:15:41
|
ICLR.cc/2018/Conference
|
SkH5pgyC-
|
HJ6UTYg0W
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'success as a framework of generative models in recent years, and it has been applied to numerous types of tasks and datasets (Radford et al., 2016; Salimans et al., 2016; Ho & Ermon, 2016; Li et al., A persisting challenge in the training of GANs is the performance control of the discriminator. In high dimensional spaces, the density ratio estimation by the discriminator is often inaccurate and ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'INTRODUCTION ', 'modified_lines': 'Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been enjoying considerable 2017). In a nutshell, GANs are a framework to produce a model distribution that mimics a given target distribution, and it consists of a generator that produces the model distribution and a discrimi- nator that distinguishes the model distribution from the target. The concept is to consecutively train the model distribution and the discriminator in turn, with the goal of reducing the difference be- tween the model distribution and the target distribution measured by the best discriminator possible at each step of the training. GANs have been drawing attention in the machine learning community not only for its ability to learn highly structured probability distribution but also for its theoretically interesting aspects. For example, (Nowozin et al., 2016; Uehara et al., 2016; Mohamed & Laksh- minarayanan, 2017) revealed that the training of the discriminator amounts to the training of a good estimator for the density ratio between the model distribution and the target. This is a perspective that opens the door to the methods of implicit models (Mohamed & Lakshminarayanan, 2017; Tran et al., 2017) that can be used to carry out variational optimization without the direct knowledge of the density function. ', 'original_lines': 'Generative adversarial networks (GANs) (Goodfellow et al., 2014) has been enjoying considerable 2017). In a nutshell, GAN is a framework to produce a model distribution that mimics a given target distribution, and it consists of a generator that produces the model distribution and a discriminator that distinguishes the model distribution from the target. The concept is to consecutively train the model distribution and the discriminator in turn, with the goal of reducing the difference between the model distribution and the target distribution measured by the best discriminator possible at each step of the training. GANs have been drawing attention in the machine learning community not only for its ability to learn highly structured probability distribution but also for its theoretically interesting aspects. For example, (Nowozin et al., 2016; Uehara et al., 2016; Mohamed & Lakshminarayanan, 2017) revealed that the training of the discriminator amounts to the training of a good estimator for the density ratio between the model distribution and the target. This is a perspective that opens the door to the methods of implicit models (Mohamed & Lakshminarayanan, 2017; Tran et al., 2017) that can be used to carry out variational optimization without the direct knowledge of the density function. 
', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 Under review as a conference paper at ICLR 2018 2 METHOD ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': '• Implementation is simple and the additional computational cost is small. In fact, our normalization method also functioned well even without tuning Lipschitz constant, ', 'modified_lines': 'which is the only hyper parameter. In this study, we provide explanations of the effectiveness of spectral normalization for GANs against other regularization techniques, such as weight normaliza- tion (Salimans & Kingma, 2016), weight clipping (Arjovsky et al., 2017), and gradient penalty (Gul- rajani et al., 2017). We also show that, in the absence of complimentary regularization techniques (e.g., batch normalization, weight decay and feature matching on the discriminator), spectral nor- malization can improve the sheer quality of the generated images better than weight normalization and gradient penalty. ', 'original_lines': 'which is the only hyper parameter. In this study, we provide a theoretical explanation of the effec- tiveness of spectral normalization for GANs against other regularization techniques, such as weight normalization (Salimans & Kingma, 2016), weight clipping (Arjovsky et al., 2017), and gradient penalty (Gulrajani et al., 2017). We also show that, in the absence of complimentary regulariza- tion techniques (e.g., batch normalization, weight decay and feature matching on the discriminator), spectral normalization can improve the sheer quality of the generated images better than weight normalization and gradient penalty. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': '2 METHOD', 'after_section': '2 METHOD', 'context_after': 'The machine learning community has been pointing out recently that the function space from which the discriminators are selected crucially affects the performance of GANs. A number of works (Ue- qdata(x) qdata(x) + pG(x) and its derivative ∇xf ∗(x) = ', 'paragraph_idx': 7, 'before_section': '2 METHOD', 'context_before': 'tivation function A that is used in the D of this expression is some continuous function with range [0, 1] (e.g, sigmoid function). It is known that, for a fixed generator G, the optimal discriminator for this form of V (G, D) is given by D∗ ', 'modified_lines': ' G(x) := qdata(x)/(qdata(x) + pG(x)). hara et al., 2016; Qi, 2017; Gulrajani et al., 2017) advocate the importance of Lipschitz continuity in assuring the boundedness of statistics. For example, the optimal discriminator of GANs on the above standard formulation takes the form = sigmoid(f ∗(x)), where f ∗(x) = log qdata(x) − log pG(x), (3) G(x) = D∗ ', 'original_lines': 'G(x) := qdata(x)/(qdata(x) + pG(x)), which is the minimizer of the Jensen-Shannon divergence (Goodfellow et al., 2014). hara et al., 2016; Qi, 2017; Gulrajani et al., 2017) advocate the importance of Lipschitz continuity in assuring the boundedness of statistics. 
For example, the optimal discriminator of GAN on the above standard formulation takes the form D∗ G(x) = = sigmoid(f ∗(x)), where f ∗(x) = log qdata(x) − log pG(x), (3) ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 7}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2017), which proposed methods to control the Lipschitz constant of the discriminator by adding arg max (cid:107)f (cid:107)Lip≤K ', 'paragraph_idx': 4, 'before_section': None, 'context_before': '(4) can be unbounded or even incomputable. This prompts us to introduce some regularity condition to ', 'modified_lines': 'the derivative of f (x). A particularly successful works in this array are (Qi, 2017; Arjovsky et al., 2017; Gulrajani et al., regularization terms defined on input examples x. We would follow their footsteps and search for the discriminator D from the set of K-Lipschitz continuous functions, that is, ', 'original_lines': 'the derivative of f (x). A particularly successful works in this array are (Qi, 2017; Gulrajani et al., regularization terms defined on input examples x. We would follow their footstep and search for the discriminator D from the set of K-Lipschitz continuous functions, that is, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 METHOD', 'after_section': None, 'context_after': 'be transformed by ¯WSN , the derivative of the V (G, D) calculated over the mini-batch with respect to W of the discriminator D is given by: ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '(10) where Eij is the matrix whose (i, j)-th entry is 1 and zero everywhere else, and u1 and v1 are ', 'modified_lines': 'respectively the first left and right singular vectors of W . If h is the hidden layer in the network to ', 'original_lines': 'respectively the first left and right singular vectors of W . If h is the hidden node in the network to ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'after_section': None, 'context_after': '4 ', 'paragraph_idx': 20, 'before_section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'context_before': 'from generative distribution and a sample x from the data distribution. While this rather straight- forward approach does not suffer from the problems we mentioned above regarding the effective dimension of the feature space, the approach has an obvious weakness of being heavily dependent ', 'modified_lines': 'on the support of the current generative distribution. As a matter of course, the generative distribu- tion and its support gradually changes in the course of the training, and this can destabilize the effect of such regularization. In fact, we empirically observed that a high learning rate can destabilize the performance of WGAN-GP. On the contrary, our spectral normalization regularizes the function the operator space, and the effect of the regularization is more stable with respect to the choice of the batch. Training with our spectral normalization does not easily destabilize with aggressive learning rate. Moreover, WGAN-GP requires more computational cost than our spectral normalization with single-step power iteration, because the computation of (cid:107)∇ ˆxf (cid:107)2 requires one whole round of for- ward and backward propagation. In the appendix section, we compare the computational cost of the two methods for the same number of updates. 
', 'original_lines': 'on the support of the current generative distribution. As a matter of course, the generative distri- bution and its support gradually changes in the course of the training, and this can destabilize the effect of such regularization. In fact, we empirically observed that a high learning rate can desta- bilize the performance of WGAN-GP. On the contrary, our spectral normalization regularizes the function the operator space, and the effect of the regularization is more stable with respect to the choice of the batch. Training with our spectral normalization does not falter with aggressive learn- ing rate. Moreover, WGAN-GP requires more computational cost than our spectral normalization with single-step power iteration, because the computation of (cid:107)∇ ˆxf (cid:107)2 requires one whole round of forward and backward propagation. In the experiment section, we compare the computational cost of the two methods for the same number of updates. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'hinge loss, which is given by VD( ˆG, D) = E ', 'paragraph_idx': 23, 'before_section': '4 EXPERIMENTS', 'context_before': 'where z ∈ Rdz is a latent variable, p(z) is the standard normal distribution N (0, I), and G : Rdz → Rd0 is a deterministic generator function. We set dz to 128 for all of our experiments. For the updates of G, we used the alternate cost proposed by Goodfellow et al. (2014) − Ez∼p(z)[log(D(G(z)))] as ', 'modified_lines': 'used in Goodfellow et al. (2014) and Warde-Farley & Bengio (2017). For the updates of D, we used the original cost defined in (14). We also tested the performance of the algorithm with the so-called ', 'original_lines': 'used in Goodfellow et al. (2014); Warde-Farley & Bengio (2017). For the updates of D, we used the original cost defined in 14. We also tested the performance of the algorithm with the so-called ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': '(ours) SN-GANs, Eq.(16) Inception score ', 'paragraph_idx': 29, 'before_section': '4 EXPERIMENTS', 'context_before': 'Layer Norm. Weight Norm. (ours) SN-GANs ', 'modified_lines': '(ours) SN-GANs (2x updates) (ours) SN-GANs, Eq.(16) (2x updates) ', 'original_lines': '(ours) SN-GANs. (2x updates) (ours) SN-GANs. Eq.(16) (2x updates) ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 29}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '8.24±.08 ', 'paragraph_idx': 29, 'before_section': None, 'context_before': '38.3 ', 'modified_lines': '(ours) SN-GANs, Eq.(16) (ResNet) ', 'original_lines': '(ours) SN-GANs. Eq.(16) (ResNet) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '3As for STL-10, we ran SN-GANs over twice time longer the iterations because it did not seem to converge fast enough. Because the optimal setting of SN-GANs (setting B, ndis = 1) is computationally light, this ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'section. We compared our algorithm against multiple benchmark methods in Table 2. We also tested the ', 'modified_lines': 'performance of our method on ResNet based GANs used in Gulrajani et al. (2017). Please see Table 4 and 5 in the appendix section for the detail network architectures. 
Please note that all methods listed thereof are all different in both optimization methods and the architecture of the model. Our implementation of our algorithm was able to superior to all the predecessors in the performance. ', 'original_lines': 'performance of our method on ResNet based GAN in Gulrajani et al. (2017). For the details, please see Appendix section. Please note that all methods listed thereof are all different in both optimization methods and the architecture of the model. Our implementation of our algorithm was able to superior to all the predecessors in the performance. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'Singular values analysis on the weights of the discriminator D In Figure 4, we show the squared singular values of the weight matrices in the final discriminator D produced by each method ', 'paragraph_idx': 33, 'before_section': None, 'context_before': 'GAN-GPWGAN-GPBNLNWNSN101102FIDABCDEFWGAN-GPLNWNSN101102FIDABCDEF Under review as a conference paper at ICLR 2018 ', 'modified_lines': '4.1.1 ANALYSIS OF SN-GANS ', 'original_lines': '4.1.1 ANALYSIS OF SPECTRALLY NORMALIZED GANS ', 'after_paragraph_idx': 33, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'other methods. To our knowledge, our method is the first of its kind in succeeding to produce decent images from ImageNet dataset with a single pair of a discriminator and a generator (Figure 6). To measure the degree of mode-collapse, we followed the footstep of Odena et al. (2017) and computed the intra MS-SSIM Odena Figure 3: Learning curves of Inception score with different methods. 5 CONCLUSION ', 'paragraph_idx': 39, 'before_section': None, 'context_before': 'details of experimental settings. As we can see in the learning curves in Figure 3, our SN-GANs ', 'modified_lines': 'are the only methods with successful training sequence among all et al. (2017) for pairs of independently generated GANs images of each class. We see that our SN-GAN ((intra MS-SSIM)=0.101) is suffering less from the mode-collapse than AC-GANs ((intra MS-SSIM)∼0.25). ', 'original_lines': 'is the only methods with successful training sequence among all et al. (2017) for pairs of independently generated GAN images of each class. . ', 'after_paragraph_idx': 39, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': 'B.2 ', 'paragraph_idx': 24, 'before_section': None, 'context_before': 'where {µp1, Cp1}, {µp2 , Cp2} are the mean and covariance of samples from q and p, respectively. If f(cid:9) is the output of the final layer of the inception model before the softmax, the Fr´echet inception ', 'modified_lines': 'distance (FID) between two distributions p1 and p2 on the images is the distance between f(cid:9) ◦p1 and f(cid:9) ◦ p2. We computed the Fr´echet inception distance between the true distribution and the generated distribution empirically over 10000 and 5000 samples. Multiple repetition of the experiments did not exhibit any notable variations on this score. ', 'original_lines': 'distance (FID) between two distributions p1 and p2 on the images is the distance between f(cid:9) ◦ p1 and f(cid:9) ◦ p2. We computed the Fr´echet inception distance between the true distribution and the generated distribution empirically over 5000 sample and reported the results. 
Multiple repetition of the experiments did not exhibit any notable variations on this score. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '14 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(b) Discriminator, M = 32 for SVHN and CIFAR10, and M = 48 for STL-10 ', 'modified_lines': '', 'original_lines': 'C APPENDIX RESULTS C.1 ACCURACY OF SPECTRAL NORMALIZATION Figure 8 shows the spectral norm of each layer in the generator over the course of the training. The setting of optimizers is C in Table 1 throughout the training. In fact, they do not deviate by more than ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 10:41:25
|
ICLR.cc/2018/Conference
|
HJ6UTYg0W
|
S1UZI3xQz
|
[{'section': '1 ]ij ¯WSN', 'after_section': None, 'context_after': 'where Eij is the matrix whose (i, j)-th entry is 1 and zero everywhere else, and u1 and v1 are respectively the first left and right singular vectors of W . If h is the hidden layer in the network to be transformed by ¯WSN , the derivative of the V (G, D) calculated over the mini-batch with respect ', 'paragraph_idx': 14, 'before_section': '1 ]ij ¯WSN', 'context_before': '(10) ', 'modified_lines': '1For examples, ReLU (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011) and leaky ReLU (Maas et al., 2013) satisfies the condition, and many popular activation functions satisfy K-Lipschitz constraint for some predefined K as well. 2Indeed, when the spectrum has multiplicities, we would be looking at subgradients here. However, the probability of this happening is zero (almost surely), so we would continue discussions without giving consid- erations to such events. 3 Under review as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '1 (cid:1)', 'after_section': '1 (cid:1)', 'context_after': 'where δ := (cid:0)∂V (G, D)/∂ (cid:0) ¯WSNh(cid:1)(cid:1)T tation over the mini-batch. ∂V We would like to comment on the implication of (12). The first term ˆE (cid:2)δhT(cid:3) is the same as the derivative of the weights without normalization. In this light, the second term in the expression can be seen as the regularization term penalizing the first singular components with the adaptive ', 'paragraph_idx': 17, 'before_section': '1 (cid:1)', 'context_before': '(12) ', 'modified_lines': '∂W = 0 when ˆE[δhT] = ku1vT , λ := ˆE (cid:2)δT (cid:0) ¯WSNh(cid:1)(cid:3), and ˆE[·] represents empirical expec- 1 for some k ∈ R. ', 'original_lines': ' , λ := ˆE (cid:2)δT (cid:0) ¯WSNh(cid:1)(cid:3), and ˆE[·] represents empirical expec- 1 for some k ∈ R. ∂W = 0 when ˆE[δhT] = ku1vT 1For examples, ReLU (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011) and leaky ReLU (Maas et al., 2013) satisfies the condition, and many popular activation functions satisfy K-Lipschitz constraint for some predefined K as well. 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 17}, {'section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'after_section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'context_after': 'do when σ1( ¯WWN) = ', 'paragraph_idx': 19, 'before_section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'context_before': 'The weight normalization introduced by Salimans & Kingma (2016) is a method that normalizes the (cid:96)2 norm of each row vector in the weight matrix. Mathematically, this is equivalent to requiring the weight by the weight normalization ¯WWN: ', 'modified_lines': ' √ ', 'original_lines': '', 'after_paragraph_idx': 19, 'before_paragraph_idx': 19}, {'section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'after_section': None, 'context_after': 'Our spectral normalization, on the other hand, do not suffer from such a conflict in interest. Note that the Lipschitz constant of a linear operator is determined only by the maximum singular value. In other words, the spectral norm is independent of rank. Thus, unlike the weight normalization, our spectral normalization allows the parameter matrix to use as many features as possible while satisfying local 1-Lipschitz constraint. 
Our spectral normalization leaves more freedom in choosing the number of singular components (features) to feed to the next layer of the discriminator. Gulrajani et al. (2017) used Gradient penalty method in combination with WGAN. In their work, they placed K-Lipschitz constant on the discriminator by augmenting the objective function with ', 'paragraph_idx': 21, 'before_section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'context_before': '√ ', 'modified_lines': ' Brock et al. (2016) introduced orthonormal regularization on each weight to stabilize the training of GANs. In their work, Brock et al. (2016) augmented the adversarial objective function by adding the following term: (cid:107)W TW − I(cid:107)2 F . 4 (14) Under review as a conference paper at ICLR 2018 While this seems to serve the same purpose as spectral normalization, orthonormal regularization are mathematically quite different from our spectral normalization because the orthonormal regular- ization destroys the information about the spectrum by setting all the singular values to one. On the other hand, spectral normalization only scales the spectrum so that the its maximum will be one. ', 'original_lines': '√ ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'after_section': None, 'context_after': '4 EXPERIMENTS conducted a set of extensive experiments of unsupervised image generation on CIFAR-10 (Torralba et al., 2008) and STL-10 (Coates et al., 2011), and compared our method against other normalization techniques. To see how our method fares against large dataset, we also applied our method on ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'ward and backward propagation. In the appendix section, we compare the computational cost of the two methods for the same number of updates. ', 'modified_lines': 'In order to evaluate the efficacy of our approach and investigate the reason behind its efficacy, we ', 'original_lines': '4 Under review as a conference paper at ICLR 2018 In order to evaluate the efficacy of our experiment and investigate the reason behind its efficacy, we ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'As for the architecture of the discriminator and generator, we used convolutional neural networks. Also, for the evaluation of the spectral norm for the convolutional weight W ∈ Rdout×din×h×w, we For all methods other than WGAN-GP, we used the following standard objective function for the adversarial loss: [log D(x)] + E ', 'paragraph_idx': 24, 'before_section': '4 EXPERIMENTS', 'context_before': 'follows. First, we will discuss the objective functions we used to train the architecture, and then we will describe the optimization settings we used in the experiments. We will then explain two performance measures on the images to evaluate the images produced by the trained generators. ', 'modified_lines': 'Finally, we will summarize our results on CIFAR-10, STL-10, and ImageNet. treated the operator as a 2-D matrix of dimension dout × (dinhw)3. We trained the parameters of the generator with batch normalization (Ioffe & Szegedy, 2015). We refer the readers to Table 3 in the appendix section for more details of the architectures. V (G, D) := E x∼qdata(x) ', 'original_lines': 'Finally, we will summarize our results on the CIFAR-10, STL-10, and on ImageNet. 
treated the operator as a square matrix of dimension dout × (dinhw)2. We trained the parameters of the generator with batch normalization (Ioffe & Szegedy, 2015). We refer the readers to Table 3 in the appendix section for more details of the architectures. V (G, D) := E x∼qdata ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 24}, {'section': '2 METHOD', 'after_section': None, 'context_after': '(a) CIFAR-10 (b) STL-10 ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '5 5 ', 'modified_lines': 'respectively for the discriminator and the generator. Optimizing these objectives is equivalent to minimizing the so-called reverse KL divergence : KL[pg||qdata]. This type of loss has been already proposed and used in Lim & Ye (2017); Tran et al. (2017). The algorithm based on the hinge loss also showed good performance when evaluated with inception score and FID. For Wasserstein GANs with gradient penalty (WGAN-GP) (Gulrajani et al., 2017), we used the following objective function: V (G, D) := Ex∼qdata[D(x)]−Ez∼p(z)[D(G(z))]−λ E ˆx∼p ˆx[((cid:107)∇ ˆxD( ˆx)(cid:107)2−1)2], where the regularization term is the one we introduced in the appendix section D.4. For quantitative assessment of generated examples, we used inception score (Salimans et al., 2016) and Fr´echet inception distance (FID) (Heusel et al., 2017). Please see Appendix B.1 for the details of each score. 4.1 RESULTS ON CIFAR10 AND STL-10 In this section, we report the accuracy of the spectral normalization (we use the abbreviation: SN- GAN for the spectrally normalized GANs) during the training, and the dependence of the algo- rithm’s performance on the hyperparmeters of the optimizer. We also compare the performance quality of the algorithm against those of other regularization/normalization techniques for the dis- criminator networks, including: Weight clipping (Arjovsky et al., 2017), WGAN-GP (Gulrajani et al., 2017), batch-normalization (BN) (Ioffe & Szegedy, 2015), layer normalization (LN) (Ba et al., 2016), weight normalization (WN) (Salimans & Kingma, 2016) and orthonormal regularization (or- thonormal) (Brock et al., 2016). In order to evaluate the stand-alone efficacy of the gradient penalty, we also applied the gradient penalty term to the standard adversarial loss of GANs (15). We would refer to this method as ‘GAN-GP’. For weight clipping, we followed the original work Arjovsky et al. (2017) and set the clipping constant c at 0.01 for the convolutional weight of each layer. For gradient penalty, we set λ to 10, as suggested in Gulrajani et al. (2017). For orthonormal, we initial- ized the each weight of D with a randomly selected orthonormal operator and trained GANs with the objective function augmented with the regularization term used in Brock et al. (2016). For all comparative studies throughout, we excluded the multiplier parameter γ in the weight normalization method, as well as in batch normalization and layer normalization method. This was done in order to prevent the methods from overtly violating the Lipschitz condition. When we experimented with different multiplier parameter, we were in fact not able to achieve any improvement. For optimization, we used the Adam optimizer Kingma & Ba (2015) in all of our experiments. We tested with 6 settings for (1) ndis, the number of updates of the discriminator per one update of the generator and (2) learning rate α and the first and second order momentum parameters (β1, β2) of Adam. 
We list the details of these settings in Table 1 in the appendix section. Out of these 6 settings, A, B, and C are the settings used in previous representative works. The purpose of the settings D, E, and F is to the evaluate the performance of the algorithms implemented with more aggressive learning rates. For the details of the architectures of convolutional networks deployed in the generator and the discriminator, we refer the readers to Table 3 in the appendix section. The number of updates for GAN generator were 100K for all experiments, unless otherwise noted. Firstly, we inspected the spectral norm of each layer during the training to make sure that our spectral normalization procedure is indeed serving its purpose. As we can see in the Figure 9 in the C.1, the spectral norms of these layers floats around 1–1.05 region throughout the training. Please see Appendix C.1 for more details. 6 Under review as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '(a) CIFAR-10 (b) STL-10 Figure 2: FIDs on CIFAR-10 and STL-10 with different methods and hyperparameters (lower is better). Table 2: Inception scores and FIDs with unsupervised image generation on CIFAR-10. † (Radford et al., 2016) (experimented by Yang et al. (2017)), ‡ (Yang et al., 2017), ∗ (Warde-Farley & Bengio, ', 'paragraph_idx': 39, 'before_section': None, 'context_before': 'Figure 1: Inception scores on CIFAR-10 and STL-10 with different methods and hyperparameters (higher is better). ', 'modified_lines': ' In Figures 1 and 2 we show the inception scores of each method with the settings A–F. We can see that spectral normalization is relatively robust with aggressive learning rates and momentum param- eters. WGAN-GP fails to train good GANs at high learning rates and high momentum parameters on both CIFAR-10 and STL-10. Orthonormal regularization performed poorly for the setting E on the STL-10, but performed slightly better than our method with the optimal setting. These results suggests that our method is more robust than other methods with respect to the change in the set- ting of the training. Also, the optimal performance of weight normalization was inferior to both WGAN-GP and spectral normalization on STL-10, which consists of more diverse examples than CIFAR-10. Best scores of spectral normalization are better than almost all other methods on both CIFAR-10 and STL-10. In Tables 2, we show the inception scores of the different methods with optimal settings on CIFAR- 10 and STL-10 dataset. We see that SN-GANs performed better than almost all contemporaries on the optimal settings. SN-GANs performed even better with hinge loss (17).4. For the training with same number of iterations, SN-GANs fell behind orthonormal regularization for STL-10. For more detailed comparison between orthonormal regularization and spectral normalization, please see section 4.1.2. In Figure 6 we show the images produced by the generators trained with WGAN-GP, weight nor- malization, and spectral normalization. SN-GANs were consistently better than GANs with weight normalization in terms of the quality of generated images. To be more precise, as we mentioned in Section 3, the set of images generated by spectral normalization was clearer and more diverse than the images produced by the weight normalization. 
We can also see that WGAN-GP failed to train good GANs with high learning rates and high momentums (D,E and F). The generated images 4As for STL-10, we also ran SN-GANs over twice time longer iterations because it did not seem to converge. Yet still, this elongated training sequence still completes before WGAN-GP with original iteration size because the optimal setting of SN-GANs (setting B, ndis = 1) is computationally light. 7 Weight clip.GAN-GPWGAN-GPBNLNWNOrthonormalSN012345678Inception scoreABCDEFWeight clip.WGAN-GPLNWNOrthonormalSN0123456789Inception scoreABCDEFWeight clip.GAN-GPWGAN-GPBNLNWNOrthonormalSN102FIDABCDEFWeight clip.WGAN-GPLNWNOrthonormalSN101102FIDABCDEF Under review as a conference paper at ICLR 2018 ', 'original_lines': 'method as ‘GAN-GP’. For each method with gradient penalty, we set λ to 10, as suggested in Gul- rajani et al. (2017). For all comparative studies throughout, we excluded the multiplier parameter γ in the weight normalization method, as well as in batch normalization and layer normalization method. This was done in order to prevent the methods from overtly violating the Lipschitz condi- tion. When we experimented with the multiplier parameter, we were in fact not able to achieve any improvement. For optimization, we used the Adam optimizer Kingma & Ba (2015) in all of our experiments. We tested with 6 settings for (1) ndis, the number of updates of the discriminator per one update of the generator and (2) β1, β2, the first and second order momentum of the hyper-parameters on Adam (the learning rate α). We list the details of these settings in Table 1 in the appendix section. Out of these 6 settings, A, B, and C are the settings used in previous representative works. The purpose of the settings D, E, and F is to the evaluate the performance of the algorithms implemented with more aggressive learning rates. For the details of the architectures of convolutional networks deployed in the generator and the discriminator, we refer the readers to Table 3 in the appendix section. Number of updates for GAN generator were 100K for all experiments, unless otherwise noted. Firstly, we inspected the spectral norms of each layer during the training to make sure that our spectral normalization procedure is indeed serving its purpose. As we can see in the Figure 8 in the C.1, the spectral norms of these layers floats around 1–1.05 region throughout the training. Please see Appendix C.1 for more details. In Figures 1 and 2 we show the inception scores of each method with the settings A–F. We can see that spectral normalization is relatively robust to aggressive learning rates and momentum parame- ters. WGAN-GP fails to train good GANs at high learning rates and high momentum parameters on both CIFAR-10 and STL-10. Weight normalization is more robust than WGAN-GP on CIFAR-10 in this aspect. The optimal performance of weight normalization was inferior to both WGAN-GP and spectral normalization on STL-10, which consists of more diverse examples than CIFAR-10. Best scores of spectral normalization are better than all other methods on both CIFAR-10 and STL-10. 6 GAN-GPWGAN-GPBNLNWNSN012345678Inception scoreABCDEFWGAN-GPLNWNSN0123456789Inception scoreABCDEF Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 
Going deeper with convolutions. In CVPR, pp. 1–9, 2015. Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonpara- metric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30 (11):1958–1970, 2008. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Improved techniques for training GANs. In NIPS, pp. 2226–2234, 2016. ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': 'B.4 NETWORK ARCHITECTURES ', 'paragraph_idx': 44, 'before_section': None, 'context_before': 'B.3 CONDITIONAL IMAGE GENERATION ON IMAGENET ', 'modified_lines': 'The images used in this set of experiments were resized to 128 × 128 pixels. We used conditional batch normalization (CBN) Dumoulin et al. (2017); de Vries et al. (2017) for the generator network. Namely we replaced the standard batch normalization layer with the CBN conditional to the label information y ∈ {1, . . . , 1000}. The details of the architecture are given in Table 6. For the op- timization, we used Adam with the same hyperparameters we used for ResNet on CIFAR-10 and STL-10 dataset. We trained the networks with 450K generator updates. ', 'original_lines': 'The images used in this set of experiments were resized to 128 × 128 pixels. We used condi- tional batch normalization (CBN) Dumoulin et al. (2017) for the generator network. Namely we replaced the standard batch normalization layer with the CBN conditional to the label information y ∈ {1, . . . , 1000}. The details of the architecture are given in Table 6. For the optimization, we used Adam with the same hyperparameters we used for ResNet on CIFAR-10 and STL-10 dataset. We trained the networks with 450K generator updates. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 METHOD', 'after_section': None, 'context_after': '¯WWN := (cid:2) ¯wT , where ¯wi(wi) := wi/(cid:107)wi(cid:107)2, (cid:3)T ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'D.1 WEIGHT NORMALIZATION AND FROBENIUS NORMALIZATION The weight normalization introduced by Salimans & Kingma (2016) is a method that normalizes the ', 'modified_lines': '(cid:96)2 norm of each row vector in the weight matrix8: (25) ', 'original_lines': '(cid:96)2 norm of each row vector in the weight matrix5: (24) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 (14)', 'after_section': None, 'context_after': 'Having said that, one shall not rule out the possibility that the gradient penalty can compliment spectral normalization and vice versa. Because these two methods regularizes discriminators by ', 'paragraph_idx': 23, 'before_section': '4 (14)', 'context_before': 'our spectral normalization does not falter with aggressive learning rate. Moreover, WGAN-GP requires more computational cost than our spectral normalization with ', 'modified_lines': 'single-step power iteration, because the computation of (cid:107)∇xD(cid:107)2 requires one whole round of forward and backward propagation. In Figure 10, we compare the computational cost of the two methods for the same number of updates. ', 'original_lines': 'single-step power iteration, because the computation of (cid:107)∇xD(cid:107)2 requires one whole round of for- ward and backward propagation. 
In the experiment section (Figure 9), we compare the computational cost of the two methods for the same number of updates. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 23}]
|
2017-12-27 05:56:14
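The revision record above evaluates the spectral norm of a convolutional weight W ∈ R^{dout×din×h×w} by treating it as a 2-D matrix of dimension dout × (din·h·w) and estimating σ₁(W) with single-step power iteration. Below is a minimal NumPy sketch of that estimator; the function name, the persistent vector u, and the reshape convention are illustrative assumptions, not the authors' released code.

import numpy as np

def spectral_normalize(W4, u, n_iters=1, eps=1e-12):
    # W4: conv kernel of shape (d_out, d_in, h, w); u: persistent (d_out,) vector.
    # One power-iteration step per call suffices in practice because W changes
    # slowly between updates; more steps tighten the estimate of sigma_1.
    W = W4.reshape(W4.shape[0], -1)           # 2-D view: d_out x (d_in * h * w)
    for _ in range(n_iters):
        v = W.T @ u
        v = v / (np.linalg.norm(v) + eps)
        u = W @ v
        u = u / (np.linalg.norm(u) + eps)
    sigma = u @ (W @ v)                       # estimate of the largest singular value
    return W4 / sigma, u

W4 = np.random.randn(64, 3, 3, 3)
u = np.random.randn(64)
W_sn, u = spectral_normalize(W4, u, n_iters=50)
print(np.linalg.svd(W_sn.reshape(64, -1), compute_uv=False)[0])   # ~1.0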
|
ICLR.cc/2018/Conference
|
S1UZI3xQz
|
ByYhOhgQz
|
[]
|
2017-12-27 06:07:45
|
ICLR.cc/2018/Conference
|
ByYhOhgQz
|
rJMCgGipZ
|
[]
|
2018-01-25 15:42:50
|
ICLR.cc/2018/Conference
|
rJMCgGipZ
|
BJefuSqHz
|
[{'section': '2 METHOD', 'after_section': '2 METHOD', 'context_after': 'f (x, θ) = W L+1aL(W L(aL−1(W L−1(. . . a1(W 1x) . . . )))), (1) ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '2 METHOD In this section, we will lay the theoretical groundwork for our proposed method. Let us consider a ', 'modified_lines': 'simple discriminator made of a neural network of the following form, with the input x: ', 'original_lines': 'simple discriminator made of a neural network of the following form: ', 'after_paragraph_idx': 7, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': 'generated images especially in Figure 6b. The images generated with spectral normalization is more diverse and complex than those generated with weight normalization. ', 'paragraph_idx': 46, 'before_section': '4 EXPERIMENTS', 'context_before': 'Figure 4: The effect on the performance on STL-10 induced by the change of the feature map dimension of the final layer. The width of the highlighted region represents standard deviation of the results over multiple seeds of weight initialization. The orthonormal regularization does not perform ', 'modified_lines': 'well with large feature map dimension, possibly because of its design that forces the discriminator to use all dimensions including the ones that are unnecessary. For the setting of the optimizers’ hyper parameter, We used the setting C, which was optimal for ”orthonormal regularization” ', 'original_lines': 'well with large feature map dimension, possibly because of its design that forces the discriminator to use all dimensions including the ones that are unnecessary. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 46}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '14 ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 Figure 7: 128x128 pixel images generated by SN-GANs trained on ILSVRC2012 dataset. The ', 'modified_lines': 'inception score is 21.2±.35. ', 'original_lines': 'inception score is 21.2. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'RGB image x ∈ R32×32×3 ResBlock down 128 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ResBlock up 256 ', 'modified_lines': '', 'original_lines': 'BN, ReLU, 3×3 conv, 3 Tanh (a) Generator ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-27 18:42:16
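Equation (1) quoted in the record above composes linear maps with element-wise activations, f(x, θ) = W^{L+1} a_L(W^L(... a_1(W^1 x) ...)). A short sketch of that forward pass; ReLU is chosen here only as one activation satisfying the Lipschitz condition assumed in the paper, and the variable names are mine.

import numpy as np

def f(x, Ws, a=lambda h: np.maximum(h, 0.0)):
    # f(x) = W^{L+1} a_L(W^L( ... a_1(W^1 x) ... )); bias terms omitted as in the text
    h = x
    for W in Ws[:-1]:
        h = a(W @ h)
    return Ws[-1] @ h

Ws = [np.random.randn(32, 16), np.random.randn(32, 32), np.random.randn(1, 32)]
print(f(np.random.randn(16), Ws))   # a single discriminator logit, shape (1,)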
|
ICLR.cc/2018/Conference
|
BJefuSqHz
|
HkLyLhbLG
|
[]
|
2018-02-02 09:56:45
|
ICLR.cc/2018/Conference
|
HkLyLhbLG
|
SkQg82WUz
|
[{'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'spectral normalization for GANs against other regularization techniques, such as weight normaliza- tion (Salimans & Kingma, 2016), weight clipping (Arjovsky et al., 2017), and gradient penalty (Gul- rajani et al., 2017). We also show that, in the absence of complimentary regularization techniques ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'In fact, our normalization method also functioned well even without tuning Lipschitz constant, which is the only hyper parameter. In this study, we provide explanations of the effectiveness of ', 'modified_lines': '', 'original_lines': ' 1 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': '2 METHOD', 'after_section': '2 METHOD', 'context_after': 'where θ := {W 1, . . . , W L, W L+1} is the learning parameters set, W l ∈ Rdl×dl−1 , W L+1 ∈ R1×dL , and al is an element-wise non-linear activation function. We omit the bias term of each layer for simplicity. The final output of the discriminator is given by where A is an activation function corresponding to the divergence of distance measure of the user’s choice. The standard formulation of GANs is given by ', 'paragraph_idx': 7, 'before_section': '2 METHOD', 'context_before': 'simple discriminator made of a neural network of the following form, with the input x: f (x, θ) = W L+1aL(W L(aL−1(W L−1(. . . a1(W 1x) . . . )))), ', 'modified_lines': ' (1) ', 'original_lines': '(1) ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 7}, {'section': 'Abstract', 'after_section': None, 'context_after': '2.1 SPECTRAL NORMALIZATION Our spectral normalization controls the Lipschitz constant of the discriminator function f by literally ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'would introduce in this paper, called spectral normalization, is a method that aims to skirt this issue by normalizing the weight matrices using the technique devised by Yoshida & Miyato (2017). ', 'modified_lines': '', 'original_lines': '2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 (5)', 'after_section': '2 (5)', 'context_after': '2.3 GRADIENT ANALYSIS OF THE SPECTRALLY NORMALIZED WEIGHTS The gradient2 of ¯WSN(W ) with respect to Wij is: ', 'paragraph_idx': 12, 'before_section': '2 (5)', 'context_before': 'GANs. Please see Appendix A for the detail method and Algorithm 1 for the summary of the actual spectral normalization algorithm. ', 'modified_lines': '1For examples, ReLU (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011) and leaky ReLU (Maas et al., 2013) satisfies the condition, and many popular activation functions satisfy K-Lipschitz constraint for some predefined K as well. 3 Published as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': 12, 'before_paragraph_idx': 11}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where Eij is the matrix whose (i, j)-th entry is 1 and zero everywhere else, and u1 and v1 are respectively the first left and right singular vectors of W . 
If h is the hidden layer in the network to ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(9) (10) ', 'modified_lines': '', 'original_lines': ' 1For examples, ReLU (Jarrett et al., 2009; Nair & Hinton, 2010; Glorot et al., 2011) and leaky ReLU (Maas et al., 2013) satisfies the condition, and many popular activation functions satisfy K-Lipschitz constraint for some predefined K as well. 2Indeed, when the spectrum has multiplicities, we would be looking at subgradients here. However, the probability of this happening is zero (almost surely), so we would continue discussions without giving consid- erations to such events. 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'after_section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'context_after': 'that matches the target distribution only at select few features. Weight clipping (Arjovsky et al., 2017) also suffers from same pitfall. Our spectral normalization, on the other hand, do not suffer from such a conflict in interest. Note that the Lipschitz constant of a linear operator is determined only by the maximum singular value. ', 'paragraph_idx': 16, 'before_section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'context_before': 'to distinguish the generator distribution from the target distribution. The former interest often reigns over the other in many cases, inadvertently diminishing the number of features to be used by the discriminators. Consequently, the algorithm would produce a rather arbitrary model distribution ', 'modified_lines': ' do when σ1( ¯WWN) = √ 2Indeed, when the spectrum has multiplicities, we would be looking at subgradients here. However, the probability of this happening is zero (almost surely), so we would continue discussions without giving consid- erations to such events. 4 Published as a conference paper at ICLR 2018 ', 'original_lines': ' √ ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 16}, {'section': 'Abstract', 'after_section': None, 'context_after': '(14) While this seems to serve the same purpose as spectral normalization, orthonormal regularization are mathematically quite different from our spectral normalization because the orthonormal regular- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:107)W TW − I(cid:107)2 F . ', 'modified_lines': '', 'original_lines': '4 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'For all methods other than WGAN-GP, we used the following standard objective function for the adversarial loss: ', 'paragraph_idx': 23, 'before_section': '4 EXPERIMENTS', 'context_before': 'generator with batch normalization (Ioffe & Szegedy, 2015). We refer the readers to Table 3 in the appendix section for more details of the architectures. ', 'modified_lines': '3Note that, since we are conducting the convolution discretely, the spectral norm will depend on the size of the stride and padding. However, the answer will only differ by some predefined K. 5 Published as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': 24, 'before_paragraph_idx': 22}, {'section': 'Abstract', 'after_section': None, 'context_after': 'respectively for the discriminator and the generator. 
Optimizing these objectives is equivalent to minimizing the so-called reverse KL divergence : KL[pg||qdata]. This type of loss has been already ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(16) (17) ', 'modified_lines': '', 'original_lines': ' 3Note that, since we are conducting the convolution discretely, the spectral norm will depend on the size of the stride and padding. However, the answer will only differ by some predefined K. 5 Under review as a conference paper at ICLR 2018 Table 1: Hyper-parameter settings we tested in our experiments. †, ‡ and (cid:63) are the hyperparameter settings following Gulrajani et al. (2017), Warde-Farley & Bengio (2017) and Radford et al. (2016), respectively. Setting α A† B‡ C(cid:63) D E F 0.0001 0.0001 0.0002 0.001 0.001 0.001 β1 0.5 0.5 0.5 0.5 0.5 0.9 β2 ndis 0.9 0.999 0.999 0.9 0.999 0.999 5 1 1 5 5 5 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 SPECTRAL NORMALIZATION VS OTHER REGULARIZATION TECHNIQUES', 'after_section': None, 'context_after': 'Table 2: Inception scores and FIDs with unsupervised image generation on CIFAR-10. † (Radford et al., 2016) (experimented by Yang et al. (2017)), ‡ (Yang et al., 2017), ∗ (Warde-Farley & Bengio, ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '7 ', 'modified_lines': 'Weight clip.GAN-GPWGAN-GPBNLNWNOrthonormalSN012345678Inception scoreABCDEFWeight clip.WGAN-GPLNWNOrthonormalSN0123456789Inception scoreABCDEF Published as a conference paper at ICLR 2018 (a) CIFAR-10 (b) STL-10 Figure 2: FIDs on CIFAR-10 and STL-10 with different methods and hyperparameters (lower is better). ', 'original_lines': 'Weight clip.GAN-GPWGAN-GPBNLNWNOrthonormalSN012345678Inception scoreABCDEFWeight clip.WGAN-GPLNWNOrthonormalSN0123456789Inception scoreABCDEFWeight clip.GAN-GPWGAN-GPBNLNWNOrthonormalSN102FIDABCDEFWeight clip.WGAN-GPLNWNOrthonormalSN101102FIDABCDEF Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'generated images especially in Figure 6b. The images generated with spectral normalization is more diverse and complex than those generated with weight normalization. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'few sets of rectified linear transformations, which means that they tend to lie on the space that is linear in most parts. Marginalizing out many features of the input distribution in such space can result in oversimplified discriminator. We can actually confirm the effect of this phenomenon on the ', 'modified_lines': '', 'original_lines': ' 5For our ResNet experiments, we trained the same architecture with multiple random seeds for weight initialization and produced models with different parameters. We then generated 5000 images 10 times and computed the average inception score for each model. The values for ResNet on the table are the mean and standard deviation of the score computed over the set of models trained with different seeds. 8 Under review as a conference paper at ICLR 2018 (a) CIFAR-10 (b) STL-10 Figure 3: Squared singular values of weight matrices trained with different methods: Weight clip- ping (WC), Weight Normalization (WN) and Spectral Normalization (SN). We scaled the singular values so that the largest singular values is equal to 1. For WN and SN, we calculated singular values of the normalized weight matrices. 
Figure 4: The effect on the performance on STL-10 induced by the change of the feature map dimension of the final layer. The width of the highlighted region represents standard deviation of the results over multiple seeds of weight initialization. The orthonormal regularization does not perform well with large feature map dimension, possibly because of its design that forces the discriminator to use all dimensions including the ones that are unnecessary. For the setting of the optimizers’ hyper parameter, We used the setting C, which was optimal for ”orthonormal regularization” ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'REFERENCES Martin Arjovsky and L´eon Bottou. Towards principled methods for training generative adversarial networks. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'future work, we would like to further investigate where our methods stand amongst other methods on more theoretical basis, and experiment our algorithm on larger and more complex datasets. ', 'modified_lines': '', 'original_lines': '6More precisely, we simply increased the input dimension and the output dimension by the same factor. In Figure 4, ‘relative size’ = 1.0 implies that the layer structure is the same as the original. 10 0.51.01.52.02.53.03.54.04.5Iterations1e510121416182022Inception scoreSN-GANsOrthonormal Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Masaki Saito, Eiichi Matsumoto, and Shunta Saito. Temporal generative adversarial nets with singular value ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. ', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-02 09:56:59
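The record above pairs the standard adversarial loss with the hinge version (Eqs. (16)–(17)) and notes that optimizing these objectives amounts to minimizing the reverse KL divergence KL[p_g||q_data]. A sketch of the hinge objectives on batches of raw discriminator outputs, reconstructed from the standard form in Lim & Ye (2017); the equation numbering follows the paper, the function names do not.

import numpy as np

def hinge_loss_d(d_real, d_fake):
    # L_D = E[max(0, 1 - D(x))] + E[max(0, 1 + D(G(z)))]
    return np.maximum(0.0, 1.0 - d_real).mean() + np.maximum(0.0, 1.0 + d_fake).mean()

def hinge_loss_g(d_fake):
    # L_G = -E[D(G(z))]
    return -d_fake.mean()

d_real, d_fake = np.random.randn(64), np.random.randn(64)
print(hinge_loss_d(d_real, d_fake), hinge_loss_g(d_fake))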
|
ICLR.cc/2018/Conference
|
SkQg82WUz
|
BJShRkf8z
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing, 2013. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'generation. In EMNLP, pp. 2147–2159, 2017. Jae Hyun Lim and Jong Chul Ye. Geometric GAN. arXiv preprint arXiv:1705.02894, 2017. ', 'modified_lines': '', 'original_lines': ' 11 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where U ∈ Rdo×P , V ∈ Rdi×P , and S ∈ RP ×P is a diagonal matrix. However, it is not a simple task to train this model while remaining absolutely faithful to this parametrization constraint. Our ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Sii = K, (28) ', 'modified_lines': '', 'original_lines': ' 22 02549Index of s0.00.20.40.60.81.0s2SNWN Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '9We implement our method based on the open-sourced code provided by the author (Gulra- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'achieved 7.28, whereas the spectral normalization and vanilla normalization achieved 7.04 and 6.69, respectively. ', 'modified_lines': '', 'original_lines': 'F THE GRADIENT OF GENERAL NORMALIZATION METHOD Let us denote ¯W := W/N (W ) to be the normalized weight where N (W ) to be a scalar normalized coefficient (e.g. Spectral norm or Frobenius norm). In general, we can write the derivative of loss ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-02 13:59:08
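Appendix F, excerpted in the record above, differentiates a loss through a generic normalization ¯W = W / N(W) with a scalar coefficient N(W) such as the spectral or Frobenius norm. By the chain rule, ∂L/∂W = (1/N) ∂L/∂¯W − (⟨∂L/∂¯W, W⟩ / N²) ∂N/∂W; the sketch below spells this out, using the exact SVD for the spectral-norm case, where ∂N/∂W = u₁v₁ᵀ. The helper name is hypothetical.

import numpy as np

def grad_through_norm(dL_dWbar, W, N, dN_dW):
    # dL/dW for W_bar = W / N(W):
    #   dL/dW = dL/dW_bar / N - (<dL/dW_bar, W> / N^2) * dN/dW
    inner = np.sum(dL_dWbar * W)
    return dL_dWbar / N - (inner / N**2) * dN_dW

W = np.random.randn(8, 16)
U, S, Vt = np.linalg.svd(W, full_matrices=False)
dL_dWbar = np.random.randn(8, 16)                       # arbitrary upstream gradient
g = grad_through_norm(dL_dWbar, W, S[0], np.outer(U[:, 0], Vt[0]))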
|
ICLR.cc/2018/Conference
|
BJShRkf8z
|
HJ_KggMUf
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT ', 'modified_lines': 'One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection. ', 'original_lines': 'One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques. The code, generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-02 14:06:55
|
ICLR.cc/2018/Conference
|
HJ_KggMUf
|
H1QiADNwz
|
[{'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'mension of the feature space 6, especially at the final layer (7th conv) for which the training with our spectral normalization prefers relatively small feature space (dimension < 100; see Figure 3b). As ', 'paragraph_idx': 44, 'before_section': '4 EXPERIMENTS', 'context_before': 'Figure 4: The effect on the performance on STL-10 induced by the change of the feature map dimension of the final layer. The width of the highlighted region represents standard deviation of the results over multiple seeds of weight initialization. The orthonormal regularization does not perform ', 'modified_lines': 'well with large feature map dimension, possibly because of its design that forces the discriminator to use all dimensions including the ones that are unnecessary. For the setting of the optimizers’ hyper-parameters, We used the setting C, which was optimal for “orthonormal regularization” Figure 5: Learning curves for conditional image generation in terms of Inception score for SN- GANs and GANs with orthonormal regularization on ImageNet. ', 'original_lines': 'well with large feature map dimension, possibly because of its design that forces the discriminator to use all dimensions including the ones that are unnecessary. For the setting of the optimizers’ hyper parameter, We used the setting C, which was optimal for ”orthonormal regularization” Figure 5: Learning curves of Inception score with SN-GANs and orthonormal regularization. ', 'after_paragraph_idx': 44, 'before_paragraph_idx': 44}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Kui Jia, Dacheng Tao, Shenghua Gao, and Xiangmin Xu. Improving training of deep neural networks via ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'architecture for object recognition? In ICCV, pp. 2146–2153, 2009. ', 'modified_lines': '', 'original_lines': '11 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Our spectral normalization, on the other hand, do not suffer from such a conflict in interest. Note that the Lipschitz constant of a linear operator is determined only by the maximum singular value. In other words, the spectral norm is independent of rank. Thus, unlike the weight normalization, ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'discriminators. Consequently, the algorithm would produce a rather arbitrary model distribution that matches the target distribution only at select few features. ', 'modified_lines': '', 'original_lines': '8In the original literature, the weight normalization was introduced as a method for reparametrization of the where γi ∈ R is to be learned in the course of the training. In form ¯WWN := (cid:2)γ1 ¯wT this work, we deal with the case γi = 1 so that we can assess the methods under the Lipschitz constraint. 
2 , ..., γdo ¯wT do 1 , γ2 ¯wT (cid:3)T 21 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A similar but less obvious approach is to parametrize W ∈ Rdo×di as follows from the get-go and train the discriminators with this constrained parametrization: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'One direct and straightforward way of controlling the spectral norm is to clip the singular val- ues (Saito et al., 2017), (Jia et al., 2017). This approach, however, is computationally heavy because one needs to implement singular value decomposition in order to compute all the singular values. ', 'modified_lines': '', 'original_lines': ' 22 02549Index of s0.00.20.40.60.81.0s2SNWN Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'becoming degenerate. For this reparametrization, we need to control the Lipschitz condition by other means, such as the gradient penalty (Gulrajani et al., 2017). Indeed, we can think of analogous versions of reparametrization by replacing ¯WSN in (32) with W normalized by other criterions. The ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'where γ is a scalar variable to be learned. This parametrization compromises the 1-Lipschitz con- straint at the layer of interest, but gives more freedom to the model while keeping the model from ', 'modified_lines': '', 'original_lines': ' 23 Published as a conference paper at ICLR 2018 Method Inception score FID WGAN-GP (Standard CNN, Baseline) w/ Frobenius Norm. w/ Weight Norm. w/ Spectral Norm. 6.68±.06 40.1 N/A∗ N/A∗ 42.4 32.0 6.36±.04 7.20±.08 (WGAN-GP, ResNet, Gulrajani et al. (2017)) WGAN-GP (ResNet, Baseline) w/ Spectral norm. w/ Spectral norm. (1.5x feature maps in D) 7.86±.08 7.80±.11 7.85±.06 7.96±.06 24.5 23.6 22.5 Table 7: Inception scores with different reparametrization mehtods on CIFAR10 without label su- pervisions. (*)We reported N/A for the inception score and FID of Frobenius normalization because the training collapsed at the early stage. Method (ResNet) Inception score FID (AC-WGAN-GP, Gulrajani et al. (2017)) AC-WGAN-GP (Baseline) w/ Spectral norm. w/ Spectral norm. (1.5x feature maps in D) 8.42±.10 8.29±.12 8.59±.12 8.60±.08 19.5 18.6 17.5 Table 8: Inception scores and FIDs with different reparametrization methods on CIFAR10 with the label supervision, by auxiliary classifier (Odena et al., 2017). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-16 14:45:47
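The record above discusses two reparametrizations: the row-wise ℓ₂ weight normalization ¯W_WN (in the γᵢ = 1 setting) and the spectrally normalized weight rescaled by a learned scalar γ. A small NumPy sketch of both, with σ₁ computed exactly via SVD for clarity rather than by power iteration; the names are mine.

import numpy as np

def weight_norm_rows(W, eps=1e-12):
    # ||w_i||_2 = 1 for every row, i.e. the gamma_i = 1 case discussed above
    return W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)

def spectral_reparam(W, gamma=1.0):
    # W_bar = gamma * W / sigma_1(W); the scalar gamma relaxes the 1-Lipschitz constraint
    sigma1 = np.linalg.svd(W, compute_uv=False)[0]
    return gamma * W / sigma1

W = np.random.randn(8, 16)
print(np.linalg.norm(weight_norm_rows(W), axis=1))               # all ~1
print(np.linalg.svd(spectral_reparam(W), compute_uv=False)[0])   # ~1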
|
ICLR.cc/2018/Conference
|
SkGqafbRb
|
BJKfe7-CZ
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'ei t = OR ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'when it executes the subtask. The eligibility vector can be computed from the task graph and xt as follows: ', 'modified_lines': '', 'original_lines': '(8) (9) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 21:07:28
|
ICLR.cc/2018/Conference
|
BJKfe7-CZ
|
HyLKlmWAb
|
[]
|
2017-10-27 21:09:17
|
ICLR.cc/2018/Conference
|
HyLKlmWAb
|
BJI0GQ-0b
|
[{'section': '4 METHOD', 'after_section': None, 'context_after': '3 ', 'paragraph_idx': 19, 'before_section': None, 'context_before': '4 METHOD We propose neural task graph solver (NTS) which is a neural network that encodes a task graph and ', 'modified_lines': 'an observation as shown in Figure 2. Our NTS is trained through the actor-critic method to maximize the ', 'original_lines': 'an observation as shown in Figure 2. A NTS is trained through actor-critic method to maximize the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 21:19:10
|
ICLR.cc/2018/Conference
|
BJI0GQ-0b
|
BJXpVQZCZ
|
[{'section': '4.1 NEURAL TASK GRAPH SOLVER', 'after_section': '4.1 NEURAL TASK GRAPH SOLVER', 'context_after': '(4) pcost module as follows: π(ot|st, G, xt, s) = Softmax(preward ', 'paragraph_idx': 21, 'before_section': '4.1 NEURAL TASK GRAPH SOLVER', 'context_before': 'The observation module encodes the input observation (st) using a convolutional neural network (CNN) and outputs a cost score: ', 'modified_lines': 't = CNN(st, s). An ideal observation module would learn to estimate high scores for subtasks where the target object is close to the agent, because they would require less costs (i.e., time). The NTS policy is a softmax policy which adds reward scores and cost scores computed from each ', 'original_lines': 't = CNN(st, s), where s is the number of step remaining. An ideal observation module would learn to estimate high scores for subtasks where the target object is close to the agent, because they would require less costs (i.e., time). The NTS policy is a softmax policy by adding reward scores and cost scores computed from each ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 21}, {'section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'after_section': None, 'context_after': 'ei t = OR ', 'paragraph_idx': 22, 'before_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'context_before': 'Note that the agent receives a half of the subtask reward when it satisfies its precondition, and receives the rest of reward ', 'modified_lines': 'when it executes the subtask. The eligibility vector (et) can be computed from the task graph and xt as follows: ', 'original_lines': 'when it executes the subtask. The eligibility vector can be computed from task graph and xt as follows: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 22}]
|
2017-10-27 21:27:23
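The records here compute the eligibility vector e_t from the completion indicators x_t by propagating through the AND and OR nodes of the task graph (Eqs. (8)–(10) in the paper), with w_{i,j} = 0 marking a NOT connection. A Boolean sketch of the precondition-satisfaction part; the graph encoding is an assumption, and the "not yet executed" mask is omitted for brevity.

import numpy as np

def precondition_satisfied(x, and_children, or_children, w):
    # x: 0/1 completion indicators; w[a][j] = 0 marks a NOT edge into AND node a.
    y_and = []
    for a, kids in enumerate(and_children):
        bits = [x[j] if w[a][j] else 1 - x[j] for j in kids]   # NOT edge flips the bit
        y_and.append(int(all(bits)))
    # subtask i is satisfied when any of its AND nodes fires (no precondition -> 1)
    return np.array([int(any(y_and[a] for a in or_children[i])) if or_children[i] else 1
                     for i in range(len(or_children))])

x = np.array([1, 1, 0])            # subtasks A, B done; C not
and_children = [[0, 1]]            # single AND node: AND(A, NOT(B))
w = [{0: 1, 1: 0}]                 # the edge to B is a NOT edge
or_children = [[], [], [0]]        # only C has a precondition
print(precondition_satisfied(x, and_children, or_children, w))   # -> [1 1 0]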
|
ICLR.cc/2018/Conference
|
BJXpVQZCZ
|
HkzZPQWC-
|
[{'section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'after_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'context_after': '(8) (9) (10) where yi AN D is the output of i-th AND node, and wi,j = 0 if there is a NOT connection between i-th node and j-th node, ', 'paragraph_idx': 22, 'before_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'context_before': 't ', 'modified_lines': 't )(1 − wi,j) ', 'original_lines': 't )(1 − wi,j) ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'after_section': None, 'context_after': 'Figure 3: Visualization of OR, (cid:102)OR, AND, and (cid:103)AND operations with three ', 'paragraph_idx': 22, 'before_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'context_before': 't = 1 when j-th node does not violate the pre-condition of i-th node. Note that ˜Rt is not differentiable with respect to xt because AND(·) and OR(·) are not differentiable. ', 'modified_lines': 'To derive our reward-propagation policy, we propose to substitute AND(·) and OR(·) functions with “smoothed” functions (cid:93)AND and (cid:102)OR as follows: ', 'original_lines': 'We propose to substitute AND(·) and OR(·) functions with “smoothed” functions (cid:93)AND and (cid:102)OR as follows: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 22}, {'section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'after_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). 
Then, the sum of subtaskreward until time-step t is given as:', 'context_after': '(cid:98)Rt = rT ', 'paragraph_idx': 22, 'before_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'context_before': '(11) ', 'modified_lines': 'where (cid:93)AND and (cid:102)OR were implemented as scaled sigmoid and tanh functions as illustrated by Fig- ure 3 (see Appendix for details). With the smoothed operations, the smoothed and shaped reward function is given as: ', 'original_lines': 'where (cid:93)AND and (cid:102)OR were implemented as simple scaled sigmoid and tanh functions as illustrated by Figure 3 (see Appendix for details). With the smoothed operations, the smoothed and shaped reward function is given as: ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'after_section': None, 'context_after': '5 EXPERIMENT In the experiment, we investigated following research questions: • Does the reward-propagation policy outperform other heuristic baselines (e.g. greedy policy, etc)? • Is the reward-propagation policy helpful for training NTS? • Can NTS deal with complex task dependencies under delayed reward? • Can NTS generalize well to unseen task graphs? 5.1 EXPERIMENTAL SETTING ', 'paragraph_idx': 22, 'before_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'context_before': '(13) Intuitively, the reward-propagation policy puts high probabilities over subtasks that are likely to ', 'modified_lines': 'increase the smoothed reward by a large margin at time t. Since this is a reasonably good policy that can be constructed on the fly without any learning, we propose to use the reward-propagation policy to pre-train our NTS through policy distillation. ', 'original_lines': 'increase the smoothed reward by a large margin at time t. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 22}, {'section': 'Abstract', 'after_section': None, 'context_after': '5 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'generated for each episode. The agent receives a time penalty (-0.1) for each step. The episode length (time budget) was randomly set for each episode in a range such that 60% − 80% of subtasks are executed on average for both training and testing. ', 'modified_lines': '', 'original_lines': ' Subtask The set of subtasks is O = {pickup, transf orm} × X where X corresponds to 8 types of objects above. As we discussed in Section 3, the agent chooses options which execute subtasks rather than primitive actions. We used a pre-trained subtask executer to implement subtask execution policy (see Appendix for details). 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5.1 EXPERIMENTAL SETTING', 'after_section': None, 'context_after': 'Task Graph The training set of task graphs consists of 4 layers of task dependencies. The testing set of task graphs consists of 4 or more layers of task dependencies with a larger number of subtasks. Task dependencies (AND, OR, and NOT) were randomly generated for each episode. In addition, we ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'D2, D3, and D4 have (unseen) larger graph structures. NTS- RProp outperforms other compared agents on all the task. ', 'modified_lines': 'Subtask The set of subtasks is O = {pickup, transf orm} × X where X corresponds to 8 types of objects above. As we discussed in Section 3, the agent chooses options which execute subtasks rather than primitive actions. We used a pre-trained subtask executer to implement subtask execution policy (see Appendix for details). ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '5.5 ANALYSIS OF TASK GRAPH COMPONENTS Figure 7: Normalized performance on task graphs with different types of dependencies. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'each subtask from the observation to make a better decision. Figure 6 visualizes more complicated example of trajectories. ', 'modified_lines': '', 'original_lines': '7 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
|
2017-10-27 21:36:57
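The record above replaces the hard AND/OR operations with smoothed surrogates (Eq. (11)), implemented in the paper as scaled sigmoid and tanh functions so that the shaped reward becomes differentiable in x_t. The scales below are my assumption, since the revision defers the exact constants to the appendix.

import numpy as np

def soft_or(z, k=4.0):
    # tanh-shaped OR: rises toward 1 as soon as any input is on
    return np.tanh(k * np.sum(z))

def soft_and(z, k=4.0):
    # sigmoid-shaped AND, centered at n - 0.5: fires only when all n inputs are on
    n = len(z)
    return 1.0 / (1.0 + np.exp(-k * (np.sum(z) - (n - 0.5))))

print(soft_or(np.array([0.0, 0.0, 1.0])))    # ~0.999: OR fires
print(soft_and(np.array([1.0, 1.0, 0.0])))   # ~0.12:  AND does not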
|
ICLR.cc/2018/Conference
|
HkzZPQWC-
|
BJqHqQb0b
|
[]
|
2017-10-27 21:50:57
|
ICLR.cc/2018/Conference
|
BJqHqQb0b
|
ByA-jteZM
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'Program Induction and Synthesis Recently, there have been a few attempts to infer a program from examples (Reed & De Freitas, 2015; Cai et al., 2017; Parisotto et al., 2016). For example, ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'it. However, most of the prior work has focused on a setting where the program specifies what to do, and the agent just needs to learn how to do. In contrast, our work explores a new form of pro- gram, called task graph (see Figure 1), which describes properties of several tasks and dependencies ', 'modified_lines': 'between them, and the agent is required to figure out what to do as well as how to do it. ', 'original_lines': 'between them, and the agent is required to figure out what to do as well as how to do. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'and top-down process, and outputs the reward score (preward ). The observation module encodes observation using CNN and outputs the cost score (pcost ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'Figure 2: Neural task graph solver architecture. The task module encodes the task graph through a bottom-up ', 'original_lines': 'Figure 2: Neural task graph solver architecture. The task module encodes the task graph through bottom-up ', 'after_paragraph_idx': 12, 'before_paragraph_idx': None}, {'section': '3 THE TASK GRAPH EXECUTION PROBLEM', 'after_section': '3 THE TASK GRAPH EXECUTION PROBLEM', 'context_after': '(1999)) (O) that performs subtasks by executing one or more primitive actions. More specifically, we define a semi-MDP (SMDP) as M(cid:48) = (S, O, G, R, γ). The goal is to learn a multi-task policy π : S × G → O which chooses the optimal subtask given the current state and the task graph. ', 'paragraph_idx': 15, 'before_section': None, 'context_before': '3 THE TASK GRAPH EXECUTION PROBLEM ', 'modified_lines': 'Let S be a set of states, G be a set of task graphs, A be a set of actions, and γ be a discount factor. The task graph execution problem is defined as a Markov Decision Process (MDP): M = (S, A, G, R, γ) where the reward function is defined as R : S × G × A → R. We assume that the agent has a set of pre-learned options (Precup (2000); Stolle & Precup (2002); Sutton et al. ', 'original_lines': 'Let S be a set of state, G be a set of task graphs, A be a set of actions, and γ be a dis- count factor. The task graph execution problem is defined as a Markov Decision Process (MDP): M = (S, A, G, R, γ) where the reward function is defined as R : S × G × A → R. We assume that the agent has a set of pre-learned options (Precup (2000); Stolle & Precup (2002); Sutton et al. ', 'after_paragraph_idx': 15, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A subtask is eligible if and only if its precondition is satisfied and it has never been executed by the agent. The agent receives the reward associated with the subtask i if and only if the agent executes ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'in Figure 1. A precondition of a subtask is defined as a logical expression of other subtasks in sum- of-products (SoP) form where multiple AND terms are combined with an OR term (e.g. OR(AND(A, B), AND(B, C, NOT(D)))) in Figure 1). 
Since SoP can represent any logical expression, we can define ', 'modified_lines': 'complex task dependencies in the form of a task graph. ', 'original_lines': 'complex task dependencies in the form of task graph. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'each step as a cost (r−). To maximize the overall reward (r = r+ + r−), the agent needs to achieve the balance between two sources of rewards by minimizing costs while maximizing subtask rewards. Thus, the agent is required to take into account subtask dependencies in the task graph as well as ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 't = 1 if and only if the precondition of subtask i is satisfied. These two vectors xt, et are available to the agent as additional inputs. ', 'modified_lines': 'In addition to subtask reward defined in the task graph (r+), the agent receives a time penalty for ', 'original_lines': 'In addtion to subtask reward defined in the task graph (r+), the agent receives a time penalty for ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'parametric policy, it is used to pre-train NTS through policy distillation. Section 4.1 describes the NTS architecture, and Section 4.2 describes how to construct the reward propagation policy. ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 reward. To address the difficulty of training due to the complex nature of the problem, we propose ', 'modified_lines': 'reward-propagation policy, which propagates the reward information between related subtasks to model their dependencies. Since the reward-propagation policy acts as a reasonably good non- ', 'original_lines': 'reward-propagation policy, which is propagates the reward information between related subtasks to model their dependencies. Since the reward-propagation policy acts as a reasonably good non- ', 'after_paragraph_idx': 6, 'before_paragraph_idx': None}]
|
2017-12-02 20:20:21
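The record above frames the task graph execution problem as an MDP with discount factor γ, where subtask rewards r+ are offset by a per-step time penalty r−. A one-function sketch of the resulting discounted return for a single trajectory (names assumed):

def discounted_return(rewards, gamma=0.99):
    # R = sum_t gamma^t * r_t, accumulated backwards for simplicity
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

print(discounted_return([-0.1, -0.1, 1.0]))   # two time penalties, then a subtask reward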
|
ICLR.cc/2018/Conference
|
ByA-jteZM
|
H16RVRBzz
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'on a 2D visual domain show that our method to pre-train from the gradient-based policy significantly improves the performance of NTS. We also demonstrate that our agent can perform a complex reasoning to find the optimal way of executing ', 'modified_lines': 'the task graph and generalize well to unseen task graphs. In addition, we com- pare our agent with a Monte-Carlo Tree Search (MCTS) method showing that our method is much more efficient than MCTS, and the performance of our agent can be further improved by combining with MCTS. ', 'original_lines': 'the task graph and generalize well to unseen task graphs. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'the agent should consider the long-term effect of each subtask due to deep dependencies among subtasks. In addition, the agent is required to generalize over unseen task graphs during evaluation. ', 'modified_lines': '', 'original_lines': 'To solve the problem, we propose a new deep RL architecture, called neural task graph solver (NTS), which encodes a task graph using a recursive-reverse-recursive neural network (R3NN) (Parisotto et al., 2016) to consider the long-term effect of each subtask. To address the ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'difficulty of learning, we propose to pre-train the NTS to approximate our novel non-parametric gradient-based policy called reward-propagation policy. The key idea of reward propagation pol- icy is to construct a differentiable representation of the task graph such that taking a gradient ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'agent chose to satisfy the preconditions of I and execute it (blue), and chose to execute remaining subtasks later (green). ', 'modified_lines': 'To solve the problem, we propose a new deep RL architecture, called neural task graph solver (NTS), which encodes a task graph using a recursive-reverse-recursive neural network (R3NN) (Parisotto et al., 2016) to consider the long-term effect of each subtask. To address the ', 'original_lines': '', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 RELATED WORK ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'propagation policy is crucial for training our NTS agent, and our agent outperforms all the baselines. We also provide empirical evidences that our agent implicitly performs a complex reasoning by taking into account long-term task dependencies as well as the cost of executing each subtask from ', 'modified_lines': 'the observation, and it can successfully generalize to unseen and larger task graphs. In addition, we compare our agent with a Monte-Carlo tree search (MCTS) algorithm. The results show that our method is computationally much more efficient than MCTS. Finally, we also show that the performance of our NTS agent can be further improved by combining with MCTS, achieving a near-optimal performance. ', 'original_lines': 'the observation, and it can successfully generalize to unseen and larger task graphs. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': 8}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '3 THE TASK GRAPH EXECUTION PROBLEM Let S be a set of states, G be a set of task graphs, A be a set of actions, and γ be a discount ', 'paragraph_idx': 12, 'before_section': '2 RELATED WORK', 'context_before': 'on how to train the high-level controller to deal with delayed reward and long-term dependencies between subtasks. ', 'modified_lines': 'Planning with Hierarchical Task Network One of the most closely related problem to our task graph execution problem is the planning problem considered in hierarchical task network (HTN) approaches (Sacerdoti, 1975; Erol, 1996; Erol et al., 1994; Nau et al., 1999; Castillo et al., 2005) in that HTN approaches also aim to find the optimal way to execute tasks given task dependencies and cost information. However, HTN approaches aim to execute a single goal task, while the goal of our problem is to maximize the cumulative reward without a particular goal task. Thus, the agent in our problem not only needs to consider complex dependencies among different tasks but also needs to infer the cost from the observation. These additional challenges make it difficult to directly apply HTN approaches to solve our problem. Motion Planning Another related problem to our task graph execution problem is motion planning (MP) problem (Asano et al., 1985; Canny, 1985; 1987; Faverjon & Tournassoud, 1987; Keil & Sack, 1985). Solving MP problem often involves solving graph search problem after reducing or mapping given MP problem to the graph. However, different from our problem, the MP approaches aim to find an optimal path to the goal in the graph while avoiding obstacles similar to HTN approaches. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 11}, {'section': '3 THE TASK GRAPH EXECUTION PROBLEM', 'after_section': '3 THE TASK GRAPH EXECUTION PROBLEM', 'context_after': 'A task graph G ∈ G consists of subtasks with corresponding rewards and preconditions as illustrated in Figure 1. A precondition of a subtask is defined as a logical expression of other subtasks in sum- ', 'paragraph_idx': 14, 'before_section': '3 THE TASK GRAPH EXECUTION PROBLEM', 'context_before': 'the agent has a set of pre-learned options (Precup (2000); Stolle & Precup (2002); Sutton et al. (1999)) (O) that performs subtasks by executing one or more primitive actions. More specifically, we define a semi-MDP (SMDP) as M(cid:48) = (S, O, G, R, γ). The goal is to learn a multi-task policy ', 'modified_lines': 'π : S × G → O which chooses the optimal subtask given the current state and the task graph to maximize the cumulative discounted reward R = E in an episode, where rt is the reward at time step t. t=0 γtrt (cid:104)(cid:80)T (cid:105) ', 'original_lines': 'π : S × G → O which chooses the optimal subtask given the current state and the task graph. ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 14}, {'section': 'Abstract', 'after_section': None, 'context_after': 'reward. To address the difficulty of training due to the complex nature of the problem, we propose reward-propagation policy, which propagates the reward information between related subtasks to model their dependencies. 
Since the reward-propagation policy acts as a reasonably good non- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'We propose neural task graph solver (NTS) which is a neural network that encodes a task graph and an observation as shown in Figure 2. Our NTS is trained through actor-critic method to maximize the ', 'modified_lines': '', 'original_lines': ' 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'after_section': None, 'context_after': 'R̃t = rsᵀ(xt + et)/2. (7) Note that the agent receives half of the subtask reward when it satisfies its precondition, and receives the rest of the reward ', 'paragraph_idx': 27, 'before_section': '4.2 PRE-TRAINING NEURAL TASK GRAPH SOLVER FROM REWARD PROPAGATION POLICYLet rs ∈ RN be a vector of rewards of all subtasks. Let xt be a subtask completion indicator vectorand et be a eligibility vector at time-step t (see Section 3 for definitions). Then, the sum of subtaskreward until time-step t is given as:', 'context_before': '(6) ', 'modified_lines': 'We first modify the reward formulation such that it gives a partial reward for satisfying preconditions to encourage the agent to satisfy the precondition of a subtask with a large reward. The sum of modified subtask reward is defined as: ', 'original_lines': 'The key idea of our reward-propagation policy is to shape the reward function such that it gives a partial reward for satisfying preconditions to encourage the agent to satisfy the precondition of a subtask with a large reward. The shaped reward function is defined as: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 27}, {'section': '5 EXPERIMENT', 'after_section': None, 'context_after': '5.1 EXPERIMENTAL SETTING ', 'paragraph_idx': 28, 'before_section': '5 EXPERIMENT', 'context_before': '• Can NTS deal with complex task dependencies under delayed reward? • Can NTS generalize well to unseen task graphs? ', 'modified_lines': ' • How does NTS perform compared to MCTS? • Can NTS be used to improve MCTS? ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 28}, {'section': '5.1 EXPERIMENTAL SETTING', 'after_section': None, 'context_after': 'Task graph setting ', 'paragraph_idx': 30, 'before_section': '5.1 EXPERIMENTAL SETTING', 'context_before': 'length (time budget) was randomly set for each episode in a range such that 60% − 80% of subtasks are executed on average for both training and testing. ', 'modified_lines': 'Subtask The set of subtasks is O = {pickup, transform} × X where X corresponds to the 8 types of objects above. As we discussed in Section 3, the agent chooses options which execute subtasks rather than primitive actions. We used a pre-trained subtask executor to implement the subtask execution policy (see Appendix for details). Task Graph The training set of task graphs consists of 4 layers of task dependencies. The testing set of task graphs consists of 4 or more layers of task dependencies with a larger number of subtasks. Task dependencies (AND, OR, and NOT) were randomly generated for each episode. 
In addition, we added the following components into task graphs to make the overall task more challenging: • Distractor subtask: A subtask without any parent node in the task graph. Executing this kind of subtask may give an immediate reward but is sub-optimal in the long run. • Negative distractor subtask: A subtask with only a NOT connection to parent nodes in the task graph. Executing this subtask may give an immediate reward, but this would make other subtasks not executable. • Delayed reward: The agent may receive little or zero reward for executing subtasks in the lower layers (i.e., subtasks with few or no pre-conditions). But the agent should execute some of them to make other subtasks eligible. More details of task graphs are described in the Appendix. 5.2 AGENTS We evaluated the following policies: • Random: A policy which executes any eligible subtask. • Greedy: A policy which executes the eligible subtask with the largest reward. • Near-Optimal: A near-optimal policy computed from exhaustive search on eligible subtasks. • RProp: Our reward-propagation policy. • NTS-Scratch: Our NTS trained with actor-critic from scratch. • NTS-RProp: Our NTS distilled from the reward-propagation policy and fine-tuned with actor-critic. ', 'original_lines': '[plots: OR and approximated OR operation; AND and approximated AND operation, output vs. a+b+c] ', 'after_paragraph_idx': None, 'before_paragraph_idx': 29}, {'section': 'Abstract', 'after_section': None, 'context_after': 'delayed by backpropagating the reward signal from the subtasks in the higher layers to the subtasks in the lower layers. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'propagation policy plays a key role in pre-training our NTS. We observed that NTS trained from scratch fails to capture complex task dependencies and only outperforms the random baseline. We believe that the reward-propagation policy gives a meaningful learning signal even if the reward is ', 'modified_lines': '', 'original_lines': ' [training curves: average reward vs. epoch for NTS-RProp (Ours), NTS-Scratch (Ours), RProp (Ours), Greedy, Near-Optimal, Random] ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '5.5 ANALYSIS OF TASK GRAPH COMPONENTS', 'after_section': '5.5 ANALYSIS OF TASK GRAPH COMPONENTS', 'context_after': 'Figure 5: Example trajectories of Greedy, RProp, and NTS-RProp agents given 25 steps. Greedy agent fails to execute the subtask ‘F’ which gives the largest reward within the time limit, whereas RProp and NTS-RProp agents execute it by executing its pre-conditions. NTS-RProp agent found a shorter trajectory of subtasks, ', 'paragraph_idx': 43, 'before_section': None, 'context_before': '5.4 QUALITATIVE RESULT ', 'modified_lines': 'Figure 5 visualizes an example of different agents’ trajectories given the same initial observation and the task graph. As Greedy agent chooses the subtask that gives the largest reward among all eligible subtasks, it fails to execute the subtask ‘F’ at the highest layer within the time limit. In contrast, RProp agent receives a higher reward by executing the subtask ‘F’, which shows that it can consider the long-term effect of initial subtasks (e.g., ‘A’, ‘B’) on the later subtasks (e.g., ‘D’, ‘E’) through our reward-propagation method. Furthermore, our NTS-RProp agent found the optimal sequence of subtasks. 
Even though the optimal subtasks (‘B-C-E-F’) give a smaller amount of rewards compared to RProp agent’s trajectory in the task graph, they require much less cost (i.e., time) to execute. This demonstrates that our NTS considers not only the task graph but also the expected costs for executing each subtask from the observation to make a better decision. Figure 6 visualizes a more complicated example of trajectories. 5.5 ANALYSIS OF TASK GRAPH COMPONENTS To investigate how agents deal with different types of task graph components, we evaluated all agents on the following types of task graphs: [training curves: average reward vs. epoch for NTS-RProp (Ours), NTS-Scratch (Ours), RProp (Ours), Greedy, Near-Optimal, Random] ', 'original_lines': '', 'after_paragraph_idx': 44, 'before_paragraph_idx': None}, {'section': '5.5 ANALYSIS OF TASK GRAPH COMPONENTS', 'after_section': None, 'context_after': '• ‘Base’ set consists of task graphs with only AND and OR operation. ', 'paragraph_idx': 46, 'before_section': '5.5 ANALYSIS OF TASK GRAPH COMPONENTS', 'context_before': 'execute subtask ‘K’ by satisfying its pre-conditions. NTS-RProp agent found a shorter path to execute subtask ‘K’ in the task graph, while RProp found a sub-optimal path to execute subtask ‘K’. ', 'modified_lines': 'Figure 7: Normalized performance on task graphs with different types of dependencies. Figure 8: Performance of MCTS+NTS and MCTS on D2 (see Table 1) per the number of simulated episodes. NTS-RProp performs as well as MCTS with 231 simulated episodes. MCTS augmented with NTS significantly outperforms MCTS. ', 'original_lines': 'Figure 5 visualizes an example of different agents’ trajectories given the same initial observation and the task graph. As Greedy agent chooses the subtask that gives the largest reward among all eligible subtasks, it fails to execute the subtask ‘F’ at the highest layer within the time limit. In contrast, RProp agent receives a higher reward by executing the subtask ‘F’, which shows that it can consider the long-term effect of initial subtasks (e.g., ‘A’, ‘B’) on the later subtasks (e.g., ‘D’, ‘E’) through our reward-propagation method. Furthermore, our NTS-RProp agent found the optimal sequence of subtasks. Even though the optimal subtasks give a smaller amount of rewards compared to RProp agent’s trajectory in the task graph, they require much less cost (i.e., time) to execute. This demonstrates that our NTS considers not only the task graph but also the expected costs for executing each subtask from the observation to make a better decision. Figure 6 visualizes a more complicated example of trajectories. 5.5 ANALYSIS OF TASK GRAPH COMPONENTS Figure 7: Normalized performance on task graphs with different types of dependencies. To investigate how agents deal with different types of task graph components, we evaluated all agents on the following types of task graphs: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 45}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Mohammad Ghavamzadeh and Sridhar Mahadevan. Hierarchical policy gradient algorithms. 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'proposed a reward-propagation policy derived from a differentiable form of task graph, which plays an important role in pre-training our neural task graph solver architecture. The empirical results showed that our agent can deal with long-term dependencies between subtasks and generalize well ', 'modified_lines': 'to unseen task graphs. In addition, we showed that our agent can be used to effectively reduce the search space of MCTS so that the agent can find a near-optimal solution with a small number of simulations. REFERENCES David Andre and Stuart J. Russell. Programmable reinforcement learning agents. In NIPS, 2000. David Andre and Stuart J. Russell. State abstraction for programmable reinforcement learning agents. In AAAI/IAAI, 2002. Jacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In ICML, 2017. Takao Asano, Tetsuo Asano, Leonidas Guibas, John Hershberger, and Hiroshi Imai. Visibility-polygon search and euclidean shortest paths. In Foundations of Computer Science, 1985., 26th Annual Symposium on, pp. 155–164. IEEE, 1985. Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235–256, 2002. Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize via recursion. arXiv preprint arXiv:1704.06611, 2017. John Canny. A voronoi method for the piano-movers problem. In Robotics and Automation. Proceedings. 1985 IEEE International Conference on, volume 2, pp. 530–535. IEEE, 1985. John Canny. A new algebraic method for robot motion planning and real geometry. In Foundations of Computer Science, 1987., 28th Annual Symposium on, pp. 39–48. IEEE, 1987. Luis Castillo, Juan Fdez-Olivares, Óscar García-Pérez, and Francisco Palao. Temporal enhancements of an htn planner. In Conference of the Spanish Association for Artificial Intelligence, pp. 429–438. Springer, 2005. Misha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Programmable agents. arXiv preprint arXiv:1706.06383, 2017. Thomas G Dietterich. Hierarchical reinforcement learning with the maxq value function decomposition. J. Artif. Intell. Res.(JAIR), 13:227–303, 2000. Kutluhan Erol. Hierarchical task network planning: formalization, analysis, and implementation. PhD thesis, 1996. Kutluhan Erol, James A Hendler, and Dana S Nau. Umcp: A sound and complete procedure for hierarchical task-network planning. In AIPS, volume 94, pp. 249–254, 1994. Bernard Faverjon and Pierre Tournassoud. A local based approach for path planning of manipulators with a high number of degrees of freedom. In Robotics and Automation. Proceedings. 1987 IEEE International Conference on, volume 4, pp. 1152–1159. IEEE, 1987. ', 'original_lines': 'to unseen task graphs. [bar chart: normalized performance on Base, Base-OR, Base+Distractor, Base+NOT, Base+Neg-Distractor, Base+delayed for NTS-RProp, NTS-scratch, RProp, Greedy] REFERENCES David Andre and Stuart J. Russell. Programmable reinforcement learning agents. In NIPS, 2000. David Andre and Stuart J. Russell. State abstraction for programmable reinforcement learning agents. In AAAI/IAAI, 2002. Jacob Andreas, Dan Klein, and Sergey Levine. Modular multitask reinforcement learning with policy sketches. In ICML, 2017. Jonathon Cai, Richard Shin, and Dawn Song. Making neural programming architectures generalize via recursion. arXiv preprint arXiv:1704.06611, 2017. 
Misha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, and Nando de Freitas. Programmable agents. arXiv preprint arXiv:1706.06383, 2017. Thomas G Dietterich. Hierarchical reinforcement learning with the maxq value function decomposition. J. Artif. Intell. Res.(JAIR), 13:227–303, 2000. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 NEURAL TASK GRAPH SOLVER', 'after_section': '4.1 NEURAL TASK GRAPH SOLVER', 'context_after': 'b,o = bθo , Each submodule takes the output embeddings from its children nodes, and takes an element-wise sum over all input embeddings giving a single 128-dimensional vector, while fANDnet and bORnet mul- ', 'paragraph_idx': 23, 'before_section': '4.1 NEURAL TASK GRAPH SOLVER', 'context_before': 'φi ', 'modified_lines': 'j∈Parenti Σ j∈Parenti wi,j + φj b,a, φi f,o, ri (14) (15) ', 'original_lines': 'b,a = bθa φi Σ φj b,o, φi f,a j∈Parenti Σ wi,j + φj b,a, φi f,o, ri (14) (15) , j∈Parenti ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}]
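The record above quotes the modified subtask reward of Eq. (7), R̃t = rsᵀ(xt + et)/2, where rs is the vector of subtask rewards, xt the completion indicator, and et the eligibility indicator. Below is a minimal NumPy sketch of that computation; the variable names and the toy task graph are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the shaped reward sum from Eq. (7) in the record above:
# half of a subtask's reward is received when its precondition becomes
# satisfied (eligible), the other half when the subtask is executed.
import numpy as np

def shaped_reward_sum(r_s, x_t, e_t):
    """Sum of shaped subtask rewards accumulated up to time-step t."""
    return float(r_s @ (x_t + e_t) / 2.0)

r_s = np.array([0.1, 0.2, 1.0])   # per-subtask rewards, r_s in R^N
x_t = np.array([1.0, 1.0, 0.0])   # subtasks 0 and 1 have been executed
e_t = np.array([1.0, 1.0, 1.0])   # subtask 2 is now eligible because its
                                  # assumed precondition (0 AND 1) holds
print(shaped_reward_sum(r_s, x_t, e_t))  # 0.8 = full 0.3 + half of 1.0
```

Splitting each reward between eligibility and execution is what lets reward information from a subtask propagate back to its preconditions, which is the stated purpose of the reward-propagation policy.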
|
2017-12-18 23:52:52
|
ICLR.cc/2018/Conference
|
H16RVRBzz
|
BkCcXJJSz
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'the task graph and generalize well to unseen task graphs. In addition, we com- pare our agent with a Monte-Carlo Tree Search (MCTS) method showing that our method is much more efficient than MCTS, and the performance of our agent can ', 'modified_lines': 'be further improved by combining with MCTS. The demo video is available at the following website: https://youtu.be/e_ZXVS5VutM. ', 'original_lines': 'be further improved by combining with MCTS. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
|
2018-01-19 03:18:45
|
ICLR.cc/2018/Conference
|
BkCcXJJSz
|
H1K8Ey1Sz
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'the task graph and generalize well to unseen task graphs. In addition, we com- pare our agent with a Monte-Carlo Tree Search (MCTS) method showing that our method is much more efficient than MCTS, and the performance of our agent can ', 'modified_lines': 'be further improved by combining with MCTS. The demo video is available at https://youtu.be/e_ZXVS5VutM. ', 'original_lines': 'be further improved by combining with MCTS. The demo video is available at the following website: https://youtu.be/e_ZXVS5VutM. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
|
2018-01-19 03:21:53
|
ICLR.cc/2018/Conference
|
H1K8Ey1Sz
|
BkQuQzWRZ
|
[]
|
2018-01-25 15:40:19
|
ICLR.cc/2018/Conference
|
S1BavZ-Rb
|
BkVydZbAZ
|
[]
|
2017-10-27 19:24:12
|
ICLR.cc/2018/Conference
|
BkVydZbAZ
|
H1UVSIgzf
|
[{'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'Domain adversarial neural networks (DANN) is a discriminative model to learn domain-invariant features (Ganin et al., 2016). It can be formulated as a minimax problem where the feature transformation component tries to learn a representation to confuse a following domain classification component. DANN also enjoys a nice theoretical justification to learn a feature map to decrease the measure (Ben-David et al., 2007) between source and target domains. Other distance measures between distributions can also be applied. Tzeng et al. (2014) and Long et al. (2015) propose similar models where the maximum mean discrepancy (MMD) (Gretton et al., 2012) between two domains A ', 'paragraph_idx': 6, 'before_section': '2 RELATED WORK', 'context_before': 'using denoising auto-encoders (Vincent et al., 2008; 2010). Other works focus on learning feature transformations such that the feature distributions in the source and target domains are close to each other (Ben-David et al., 2007; 2010; Ajakan et al., 2014; Ganin et al., 2016). In practice it was ', 'modified_lines': 'observed that unsupervised pretraining using stacked denoising auto-encoders (mSDA) (Vincent et al., 2008; Chen et al., 2012) often improves the generalization accuracy (Ganin et al., 2016). One of the limitations of mSDA is that it needs to explicitly form the covariance matrix of input features and then solve a linear system, which can be computationally expensive to solve exactly in high dimensional settings, but approximate schemes exist. -distance are minimized. Tzeng et al. (2017) also proposed a variant of DANN, known as ADDA, where the encoders for source and target domains are not shared. Instead of jointly training the classifier and the domain discriminator, ADDA runs in two stages. At the first stage, the source encoder and classifier are trained in a supervised way on the source domain. At the second stage, both the source encoder and source classifier are fixed, and both the domain discriminator and target encoder are trained using only unsupervised samples from both domains. Very recently, Bousmalis et al. (2016) propose a model where orthogonal representations that are shared between domains and unique to each domain are learned simultaneously. They achieve this goal by incorporating both similarity and difference penalties for features into the objective function. Finally, domain adaptation can also be viewed as a semi-supervised learning problem by ignoring the domain shift, where source instances are treated as labeled data and target instances are unlabeled data (Dai et al., 2007; Rasmus et al., 2015). ', 'original_lines': 'observed that unsupervised pretraining using stacked denoising auto-encoders (mSDA) (Vincent et al., 2008; Chen et al., 2012) often improves the generalization accuracy (Ganin et al., 2016). One of the limitations of mSDA is that it needs to explicitly form the covariance matrix of input features and then solve a linear system, which can be computationally expensive in high dimensional settings. On the other hand, it is also not clear how to extend mSDA so that it can also be applied for time-series modeling. -distance are minimized. Very recently, Bousmalis et al. (2016) propose a model where orthogonal representations that are shared between domains and unique to each domain are learned simultaneously. 
They achieve this goal by incorporating both similarity and difference penalties for features into the objective function. Finally, domain adaptation can also be viewed as a semi-supervised learning problem by ignoring the domain shift, where source instances are treated as labeled data and target instances are unlabeled data (Dai et al., 2007; Rasmus et al., 2015). ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 6}, {'section': '3.1 A PROBABILISTIC FRAMEWORK FOR DOMAIN ADAPTATION', 'after_section': '3.1 A PROBABILISTIC FRAMEWORK FOR DOMAIN ADAPTATION', 'context_after': '(1) ', 'paragraph_idx': 10, 'before_section': '3.1 A PROBABILISTIC FRAMEWORK FOR DOMAIN ADAPTATION', 'context_before': 'p(x, y; φ, ψ) = p(x; φ)p(y x; ψ)p(φ, ψ) ', 'modified_lines': ' | ', 'original_lines': '', 'after_paragraph_idx': 10, 'before_paragraph_idx': 10}, {'section': '3.1 A PROBABILISTIC FRAMEWORK FOR DOMAIN ADAPTATION', 'after_section': '3.1 A PROBABILISTIC FRAMEWORK FOR DOMAIN ADAPTATION', 'context_after': 'parameters are shared in both the generation process of x and y factorization assumption of the prior distribution does not hold anymore, and we cannot hope to recover a discriminative model by simply optimizing ψ. To make our discussion concrete, think if ζ), where ζ are the shared ', 'paragraph_idx': 12, 'before_section': '3.1 A PROBABILISTIC FRAMEWORK FOR DOMAIN ADAPTATION', 'context_before': 'models (Ng & Jordan, 2002): discriminative training usually wins at predictive accuracy, while generative modeling provides a principled way to use unlabeled data. To achieve the best of both worlds, now let us consider the case where φ and ψ have a common subspace, i.e., some model ', 'modified_lines': 'x. Clearly under this case the ', 'original_lines': 'x. Clearly under this case the ', 'after_paragraph_idx': 12, 'before_paragraph_idx': 12}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '(4) where w > 0 is the bandwidth and f : Rd Rd are two feature transformations. Our definition of KDE differs from the original one (Wassermann, 2006) by the additional parametric f = I, the identity map, our definition reduces to ◦ the original one. Note that when applied to the source and target domains separately, the original KDE does not give similar density estimations if their empirical distributions are far from each other, ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'ζ) \\ ', 'modified_lines': 'transformations g ', 'original_lines': ' (cid:19) transformations g ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 AN INSTANTIATION USING KERNEL DENSITY ESTIMATION', 'after_section': '3.2 AN INSTANTIATION USING KERNEL DENSITY ESTIMATION', 'context_after': ', typical choices x, depending on whether y include linear regression or logistic regression. While both these two models are linear and limited, we can first augment them with rich nonlinear transformation f applied to the input instance. ', 'paragraph_idx': 15, 'before_section': '3.2 AN INSTANTIATION USING KERNEL DENSITY ESTIMATION', 'context_before': 'from source and target domains, respectively. f transforms both XS and XT to RD so that they are close in RD; g transforms them back so that they have close density estimations in Rd. 
', 'modified_lines': 'For the conditional distribution y ', 'original_lines': 'For the conditional distribution y ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 14}, {'section': '3.3 LEARNING BY MAXIMIZING JOINT LIKELIHOOD', 'after_section': None, 'context_after': 'n (cid:88) ', 'paragraph_idx': 21, 'before_section': '3.3 LEARNING BY MAXIMIZING JOINT LIKELIHOOD', 'context_before': 'log p(yi | ', 'modified_lines': 'xi; ψ) + λ ', 'original_lines': 'xi; φ) + λ ', 'after_paragraph_idx': None, 'before_paragraph_idx': 21}, {'section': '3.3 LEARNING BY MAXIMIZING JOINT LIKELIHOOD', 'after_section': '3.3 LEARNING BY MAXIMIZING JOINT LIKELIHOOD', 'context_after': 'ζ) ', 'paragraph_idx': 21, 'before_section': None, 'context_before': 'xj − || ', 'modified_lines': 'g(f (xj; ζ); φ ', 'original_lines': 'g(f (xj; ζ); ψ ', 'after_paragraph_idx': 21, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '4.1 DATASETS AND EXPERIMENTAL SETUP ', 'paragraph_idx': 5, 'before_section': None, 'context_before': '4 EXPERIMENTS We first evaluate DAuto on synthetic experiments with MNIST, and then compare it with state-of-the- ', 'modified_lines': 'art models, including mSDA, the Ladder network, DANN and ADDA. We report experimental results on the Amazon benchmark dataset, three digit datasets (MNIST, SVHN and USPS) and a large-scale time-series dataset for speech recognition. ', 'original_lines': 'art models, including mSDA, the Ladder network and DANN. We report experimental results on the Amazon benchmark dataset and a large-scale time-series dataset for speech recognition. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 A PROBABILISTIC FRAMEWORK FOR DOMAIN ADAPTATION', 'after_section': None, 'context_after': '∼ } . There are 16 pairs of experiments altogether for each possible the others are digits not in 3, 7, 8, 9 ', 'paragraph_idx': 10, 'before_section': None, 'context_before': '750 images are digit i and ; we sample 3, 7, 8, 9 ', 'modified_lines': '{ ', 'original_lines': '{ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1,500 images from the original test set, of which ', 'paragraph_idx': 5, 'before_section': None, 'context_before': '} pair of i, j as source and target domains. We design a well-controlled experiment to ', 'modified_lines': 'compare DAuto with a standard multilayer perceptron (MLP) and DANN: all algorithms share the same network structure. Also, we apply the same training procedure to all algorithms so that the difference in performance can only be explained by the additional domain regularizer as well as the reconstruction loss in DAuto. ', 'original_lines': 'compare DAuto with a standard MLP and DANN: all algorithms share the same network structure. Also, we apply the same training procedure to all algorithms so that the difference in performance can only be explained by the additional domain regularizer as well as the reconstruction loss in DAuto. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 DATASETS AND EXPERIMENTAL SETUP', 'after_section': '4.1 DATASETS AND EXPERIMENTAL SETUP', 'context_after': '33 hours of labeled audio data from 25,000 user utterances, and we randomly sample 80 percent of them as training set and the rest are used as test set. 
', 'paragraph_idx': 32, 'before_section': '4.1 DATASETS AND EXPERIMENTAL SETUP', 'context_before': 'autoencoder as well as the adversarial loss from the domain classifier. We evaluate DAuto and compare it with other algorithms for an adaptation task across three different accented datasets, of which one is recorded from native English speakers, and the other two are recorded from speakers ', 'modified_lines': 'with Mandarin and Indian accents, respectively. Each dataset contains ', 'original_lines': 'with Mandarin and Indian accents, respectively. Each dataset contains ', 'after_paragraph_idx': 32, 'before_paragraph_idx': 32}, {'section': '4.1 DATASETS AND EXPERIMENTAL SETUP', 'after_section': None, 'context_after': '4.2 RESULTS AND ANALYSIS ', 'paragraph_idx': 33, 'before_section': '4.1 DATASETS AND EXPERIMENTAL SETUP', 'context_before': 'to be 0.5 in training mSDA, and stack the same number of layers of autoencoders as in DAuto. 3. Ladder Network (Ladder). The Ladder network (Rasmus et al., 2015) is a novel structure aiming for semi-supervised learning. It is a hierarchical denoising autoencoder where reconstruction errors ', 'modified_lines': 'between each pair of hidden layers are incorporated into the objective function. 4. DANN and 5. ADDA. Again, we use exactly the same inference structure for DANN and ADDA as in No-Adapt, Ladder, and DAuto. For all the experiments, we use early stopping to avoid overfitting. We implement all the models and ensure that all the data preprocessing is the same for all the algorithms, so that the differences in experimental results can only be explained by the differences in the models themselves. We defer a detailed description of the models used in each experiment to the supplementary material. ', 'original_lines': 'between each pair of hidden layers are incorporated into the objective function. 4. DANN. Again, we use exactly the same inference structure for DANN as in No-Adapt, Ladder, and DAuto. For all the experiments, we use early stopping to avoid overfitting. We implement all the models and ensure that all the data preprocessing is the same for all the algorithms, so that the differences in experimental results can only be explained by the differences in the models themselves. We defer a detailed description of the models used in each experiment to the supplementary material. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 33}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'DAuto in Table 1. Besides the 12 pairs of tasks under the domain adaptation setting, we also show 4 additional tasks where both the training and test sets are from the same domain. The scores from these 4 tasks can be used as empirical upper bounds to compare with the performance of domain ', 'modified_lines': '', 'original_lines': 'adaptation algorithms. DAuto significantly improves over the No-Adapt baseline in 10 out of the total 12 possible pairs, showing that it indeed has the desired capability for domain adaptation. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.2 RESULTS AND ANALYSIS', 'after_section': '4.2 RESULTS AND ANALYSIS', 'context_after': 'principal directions of learned representations from both domains are well-aligned with each other; DAuto still works and will not degrade. 
8 and 7 ', 'paragraph_idx': 38, 'before_section': '4.2 RESULTS AND ANALYSIS', 'context_before': 'using DAuto even the algorithm does not see any instances from the target category during training. To qualitatively study both the successful and failure cases, we project both representations with and without DAuto adaptation onto 2 dimensional space using PCA, shown in Fig. 3. Several interesting ', 'modified_lines': 'observations can be made from Fig. 3: when domain adaptation is successful (Fig. 3g, 3f), the on the other hand, when adaptation fails (Fig. 3e), representations do not share the same principal directions. As a special case, when the source and target domains share the same distribution (Fig. 3h), ', 'original_lines': 'observations can be made from Fig. 3: when domain adaptation is successful (Fig. 3d, 3h), the on the other hand, when adaptation fails (Fig. 3b), representations do not share the same principal directions. As a special case, when the source and target domains share the same distribution (Fig. 3f), ', 'after_paragraph_idx': 38, 'before_paragraph_idx': 38}, {'section': '8 and 7', 'after_section': None, 'context_after': 'MNIST Amazon. To show the effectiveness of different domain adaptation algorithms when labeled instances are scarce, we evaluate the five algorithms on the 16 tasks by gradually increasing the size of the ', 'paragraph_idx': 39, 'before_section': '8 and 7', 'context_before': 'Multi-class Classification. The results on multi-class classification of digits are shown in Table 2, where we highlight the successful domain adaptations using green colors and failure cases using ', 'modified_lines': 'red colors. The datasets in rows correspond to the source domains and those in columns correspond to the target domains. DANN, ADDA and DAuto all contain one failure case (SVHN → USPS for DANN and ADDA, and USPS → MNIST for DAuto), which may be explained by the intrinsic difference between the SVHN dataset and the other two. However, on those three datasets, whenever both DANN and DAuto succeed in domain adaptation, DAuto usually outperforms DANN by around 2 percent accuracy. On the other hand, ADDA achieves a far better result on USPS → MNIST than both DANN and DAuto, while it also performs worse than DAuto on MNIST → SVHN. Note that in this experiment all the methods share exactly the same experimental protocol, hence the [Fig. 3: PCA projections of representations for train-digit-7/test-digit-3, train-digit-9/test-digit-3, train-digit-3/test-digit-7 and train-digit-9/test-digit-9, without and with DAuto] Figure 4: Test set performances of MLP, Ladder, mSDA, DANN and DAuto with increasing training set sizes: from 0.2 to 1.0. DAuto achieves the best accuracy on 12 out of 16 tasks. Table 2: Classification accuracy on the digits experiment from No-Adapt, DANN, ADDA and DAuto. Improvements over baseline method are highlighted in green, and decreases in performance are shown in red. Table best viewed in color. 
No Adapt: SVHN 0.8553 / 0.2054 / 0.1628; MNIST 0.5459 / 0.9883 / 0.3396; USPS 0.5277 / 0.6442 / 0.9507. DANN: SVHN 0.8596 / 0.2241 / 0.1585; MNIST 0.5690 / 0.9880 / 0.3562; USPS 0.5426 / 0.6500 / 0.9517. ADDA: SVHN 0.8707 / 0.2091 / 0.1602; MNIST 0.5542 / 0.9894 / 0.3570; USPS 0.5561 / 0.6856 / 0.9512. DAuto: SVHN 0.8626 / 0.2086 / 0.1717; MNIST 0.5864 / 0.9869 / 0.3762; USPS 0.5655 / 0.6428 / 0.9537 (rows: source domain; columns: target domain SVHN / MNIST / USPS). difference can only be explained by their different objective functions and model designs. In other words, the adaptation can benefit from the reconstruction error from autoencoders, which works as an unsupervised regularizer. ', 'original_lines': 'red colors. The datasets in rows correspond to the source domains and those in columns correspond to the target domains. Both DANN and DAuto contain one failure case (SVHN → USPS for DANN and USPS → MNIST for DAuto), which may be explained by the intrinsic difference between the SVHN dataset and the other two. However, on those three datasets, whenever both DANN and DAuto succeed in domain adaptation, DAuto usually outperforms DANN by around 2 percent accuracy. Note that in this experiment both DANN and DAuto share exactly the same experimental protocol, hence the difference can only be explained by their different objective functions. In other words, the adaptation can benefit from the reconstruction error from autoencoders, which works as an unsupervised regularizer. [Fig. 3: PCA projections of representations for train-digit-7/test-digit-3, train-digit-3/test-digit-7, train-digit-9/test-digit-9 and train-digit-9/test-digit-3, without and with DAuto] ', 'after_paragraph_idx': None, 'before_paragraph_idx': 39}, {'section': 'Abstract', 'after_section': None, 'context_after': '5 CONCLUSION We propose a probabilistic framework that incorporates both generative and discriminative modeling ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'framework, DAuto achieves this goal by maximizing the marginal probability of unlabeled instances, and due to the shared component, this further helps training the discriminative model. ', 'modified_lines': '', 'original_lines': 'Figure 4: Test set performances of MLP, Ladder, mSDA, DANN and DAuto with increasing training set sizes: from 0.2 to 1.0. DAuto achieves the best accuracy on 12 out of 16 tasks. 
[Figure 4 panels: classification accuracy vs. proportion of training set (0.2 to 1.0) for the 16 source->target pairs among books, dvd, electronics and kitchen; curves for MLP, mSDA, Ladder, DANN and DAuto] Table 2: Classification accuracy on the digits experiment from No-Adapt, DANN and DAuto. Improvements over baseline method are highlighted in green, and decreases in performance are shown in red. Table best viewed in color. No Adapt: SVHN 0.8553 / 0.2054 / 0.1628; MNIST 0.5459 / 0.9883 / 0.3396; USPS 0.5277 / 0.6442 / 0.9507. DANN: SVHN 0.8596 / 0.2241 / 0.1585; MNIST 0.5690 / 0.9880 / 0.3562; USPS 0.5426 / 0.6500 / 0.9517. DAuto: SVHN 0.8626 / 0.2086 / 0.1717; MNIST 0.5864 / 0.9869 / 0.3762; USPS 0.5655 / 0.6428 / 0.9537 (rows: source domain; columns: target domain SVHN / MNIST / USPS). Table 3: CTC loss on 9 tasks from LSTM (No-Adapt), DANN and DAuto. A lower loss indicates a better speech model. Improvements over baseline method are highlighted in green, and decreases in performance are shown in red. Table best viewed in color. 
LSTM (No Adapt): US 263.7 / 160.4 / 408.9; CN 226.6 / 110.9 / 375.4; IN 389.7 / 245.5 / 376.5. DANN: US 189.3 / 112.4 / 428.9; CN 186.8 / 66.4 / 453.0; IN 498.4 / 429.1 / 244.7. DAuto: US 185.9 / 97.9 / 486.1; CN 160.7 / 45.7 / 494.8; IN 493.0 / 470.3 / 241.3 (rows: source accent; columns: target accent US / CN / IN). 
|
2017-12-14 19:46:53
|
ICLR.cc/2018/Conference
|
H1UVSIgzf
|
SJNGaUxA-
|
[]
|
2018-01-25 15:42:05
|
ICLR.cc/2018/Conference
|
HJkhPfWAW
|
rkWU0z-RZ
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'and ImageNet (ILSVRC 2012) (Russakovsky et al., 2015) datasets. On CIFAR-10, we trained a simple 4-layer convolutional network and the 8-layer convolutional network of Zhou et al. (2016). On ImageNet, we trained AlexNet (Krizhevsky et al., 2012), the most common model in the quantization ', 'modified_lines': 'literature, and ResNet-18 (He et al., 2015a). Experiment details are provided in Appendix B, along with training curves for all experiments. ', 'original_lines': 'literature, and ResNet-18 (He et al., 2015a). Experiment details are provided in Appendix B. 4.1 CIFAR-10 Test accuracies for the 4-layer and 8-layer convolutional network on CIFAR-10 are shown in Table 1. For the simpler 4-layer model, FTP-SH shows a consistent 0.5-1% accuracy gain over SSTE for the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 3: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for AlexNet with different activation functions on ImageNet. The inset figures show the test accuracy for the final 25 epochs in detail. In ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ImageNet. The left-hand plot shows that training sign activations with FTP-SH provides consistently better test accuracy than SSTE throughout the training trajectory, despite the hyperparameters being optimized for SSTE. This improvement is even larger for the 2-bit qReLU activation in the right- ', 'modified_lines': '', 'original_lines': 'hand plot, where the FTP-SH qReLU even outperforms the full-precision ReLU for part of its trajectory, and outperforms the SSTE-trained qReLU by almost 2%. Interestingly, we find that the saturated ReLU outperforms the standard ReLU by almost a full point of accuracy. We believe that this is due to the regularization effect caused by saturating the activation. This may also account ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Internal Covariate Shift. In Francis Bach and David Blei (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37, pp. 448–456, Lille, France, 2015. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Processing Systems, pp. 1–17, 2016. Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Neural Networks with Continuous or Discrete Weights. In Advances in Neural Information Processing Systems, pp. 963–971. 2014. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Proceedings of the 34th International Conference on Machine Learning, pp. 1–33, 2017. Daniel Soudry, Itay Hubara, and Ron Meir. Expectation Backpropagation: Parameter-Free Training of Multilayer ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 20:59:53
|
ICLR.cc/2018/Conference
|
rkWU0z-RZ
|
S1iEG7-CZ
|
[]
|
2017-10-27 21:16:35
|
ICLR.cc/2018/Conference
|
S1iEG7-CZ
|
Bk4oGQbCW
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'entire training trajectory, resulting in the 0.7% improvement shown in Table 1. However, for the 2-bit qRELU activation, SSTE and FTP-SH perform nearly identically in the 4-layer model. Conversely, for the more complex 8-layer model, the FTP-SH accuracy is only 0.3% above SSTE, but for the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'with per-layer soft hinge losses (FTP-SH) and the saturated straight-through estimator (SSTE). Bold numbers denote the best performing activation in each pair. ', 'modified_lines': '', 'original_lines': '4.1 CIFAR-10 Test accuracies for the 4-layer and 8-layer convolutional network on CIFAR-10 are shown in Table 1. For the simpler 4-layer model, FTP-SH shows a consistent 0.5-1% accuracy gain over SSTE for the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': 'Figure 3: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for AlexNet with different activation functions on ImageNet. The inset figures show the test accuracy for the final 25 epochs in detail. In ', 'paragraph_idx': 38, 'before_section': '4 EXPERIMENTS', 'context_before': 'ImageNet. The left-hand plot shows that training sign activations with FTP-SH provides consistently better test accuracy than SSTE throughout the training trajectory, despite the hyperparameters being optimized for SSTE. This improvement is even larger for the 2-bit qReLU activation in the right- ', 'modified_lines': 'hand plot, where the FTP-SH qReLU even outperforms the full-precision ReLU for part of its trajectory, and outperforms the SSTE-trained qReLU by almost 2%. Interestingly, we find that the saturated ReLU outperforms the standard ReLU by almost a full point of accuracy. We believe that this is due to the regularization effect caused by saturating the activation. This may also account ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 38}, {'section': 'Abstract', 'after_section': None, 'context_after': 'for the surprisingly good performance of the FTP-SH qReLU relative to full-precision ReLU, as hard-threshold activations also provide a strong regularization effect. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '01020304050607080Epoch303540455055Top-1 AccuracySign (FTP-SH)Sign (SSTE)55606570758045464701020304050607080Epoch304050607080Top-1 AccuracyqReLU (FTP-SH)qReLU (SSTE)ReLUSaturated ReLU556065707580586062 Under review as a conference paper at ICLR 2018 ', 'modified_lines': '', 'original_lines': 'hand plot, where the FTP-SH qReLU even outperforms the full-precision ReLU for part of its trajectory, and outperforms the SSTE-trained qReLU by almost 2%. Interestingly, we find that the saturated ReLU outperforms the standard ReLU by almost a full point of accuracy. We believe that this is due to the regularization effect caused by saturating the activation. This may also account ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015b. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'arXiv preprint arXiv:1512.03385, 2015a. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 
Delving deep into rectifiers: Surpassing human-level ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'pp. 318–362. The MIT Press, 1986. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
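The record above also refers to training sign and qReLU units with per-layer soft hinge losses (FTP-SH). The record does not reproduce the paper's exact loss, so the NumPy sketch below uses one common smooth relaxation of the hinge, softplus(1 − z·t), purely to illustrate how a per-layer loss on pre-activations z against targets t ∈ {−1, +1} yields a usable gradient; the specific functional form is an assumption.

```python
# Hedged sketch of a smooth per-layer hinge-style loss for hard-threshold
# units: z is a layer's pre-activation, t its per-unit target in {-1, +1}.
import numpy as np

def soft_hinge(z, t):
    m = 1.0 - z * t                  # margin violation
    return np.logaddexp(0.0, m)      # softplus = smooth max(0, m)

def soft_hinge_grad(z, t):
    m = 1.0 - z * t
    return -t / (1.0 + np.exp(-m))   # d/dz softplus(1 - z*t) = -t * sigmoid(m)

z = np.array([-2.0, -0.1, 0.3, 2.5])
t = np.array([1.0, 1.0, -1.0, 1.0])
print(soft_hinge(z, t), soft_hinge_grad(z, t))
```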
|
2017-10-27 21:18:19
|
ICLR.cc/2018/Conference
|
Bk4oGQbCW
|
SJx7w7b0W
|
[]
|
2017-10-27 21:37:27
|
ICLR.cc/2018/Conference
|
SJx7w7b0W
|
rJB9bz-0Z
|
[]
|
2018-01-25 15:40:24
|
ICLR.cc/2018/Conference
|
rJB9bz-0Z
|
HyCfh6TvM
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'ImageNet for AlexNet and ResNet-18, with multiple types of hard-threshold activation. RELATED WORK The most common method for learning deep hard-threshold networks is to use backpropagation with the straight-through estimator (STE) (Hinton, 2012; Bengio et al., 2013), which simply replaces the Another common approach to training with hard-threshold units is to use randomness, either via stochastic neurons (e.g., Bengio et al. (2013); Hubara et al. (2016)) or probabilistic training methods, such as those of Soudry et al. (2014) or Williams (1992), both of which are methods for softening hard-threshold units. In contrast, our goal is to learn networks with deterministic hard-threshold units. Lee et al., 2015; Taylor et al., 2016) is a method that explicitly associates a target with the output of each activation in the network, and then updates each layer’s weights to make its activations more 2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS Given a dataset D = {(x(i), t(i))}m i=1 with vector-valued inputs x(i) ∈ Rn and binary targets t ∈ {−1, +1}, we are interested in learning an ℓ-layered deep neural network with hard-threshold units (1) with weight matrices W = {Wd : Wd ∈ Rnd×nd−1}ℓ d=1 and element-wise activation function g. We let hd = g(Wd . . . g(W1x) . . . ) denote the output of each hidden layer, where hd = (hd1, . . . , hdnd ), and let zd denote the pre-activation output of layer d. For compactness, we have incorporated the bias term into the weight matrices. We denote a row or column of a matrix Wd as Wd,:j and Wd,j:, respectively, and the entry in the jth row and kth column as Wd,jk. Using matrix notation, we can write this model as Y = f (X; W ) = g(Wℓ . . . g(W1X) . . . ), where X is the n × m matrix of dataset instances and Y is the nℓ × m matrix of outputs. We let Tℓ denote the matrix of final-layer targets, Hd denote the nd × m matrix of hidden activations at layer d, and Zd denote the nd × m matrix of pre-activations at layer d. Our goal will be to learn f by finding the weights W that minimize an aggregate loss L(Y, Tℓ) = Σm Definition 1. A dataset {(x(i), t(i))}m real number γ > 0 such that (w · x(i))t(i) > γ for all i = 1 . . . m. When a dataset is linearly separable, the perceptron algorithm is guaranteed to find its separating hyperplane in a finite number of steps (Novikoff, 1962), where the number of steps required is dependent ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Bengio et al., 2013), which can now be seen as an instance of FTPROP with a specific choice of per-layer loss function and target heuristic. Finally, we develop a novel per-layer loss function that improves learning of deep hard-threshold networks. Empirically, we show improvements for our ', 'modified_lines': 'algorithm over the straight-through estimator on CIFAR-10 for two convolutional networks and on derivative of each hard-threshold unit with the identity function. The STE is used in the quantized network literature (see citations above) to propagate gradients through quantized activations, and is used in Shalev-Shwartz et al. (2017) for training with flat activations. Later work generalized the STE to replace the hard-threshold derivative with other functions, including saturated versions of the identity function (Hubara et al., 2016). 
However, while the STE tends to work quite well in practice, we know of no rigorous justification or analysis of why it works or how to choose replacement derivatives. Beyond being unsatisfying in this regard, the STE is not well understood and can lead to gradient mismatch errors, which compound as the number of layers increases (Lin & Talathi, 2016). We show here that the STE, saturated STE, and all types of STE that we have seen are special cases of our framework, thus providing a principled justification for it and a basis for exploring and understanding alternatives. Finally, target propagation (TP) (LeCun, 1986; 1987; Carreira-Perpiñán & Wang, 2014; Bengio, 2014; similar to the targets. Our framework can be viewed as an instance of TP that uses combinatorial optimization to set discrete targets, whereas previous approaches employed continuous optimization to set continuous targets. The MADALINE Rule II algorithm (Winter & Widrow, 1988) can also be seen as a special case of our framework and of TP, where only one target is set at a time. y = f (x; W ) = g(Wℓ g(Wℓ−1 . . . g(W1x) . . . )), g(x) = sign(x), where sign is the sign function such that sign(x) = 1 if x > 0 and −1 otherwise. Each layer d has nd units, where we define n0 = n for the input layer, and we let hdj ∈ {−1, +1} for each layer d and each unit j. Similarly, we let zd = Wd g(. . . g(W1x) . . . ) Figure 1: After setting the hidden-layer targets T1 of a deep hard-threshold network, the network decomposes into independent perceptrons, which can then be learned with standard methods. i=1 L(y(i), t(i)) for some convex per-instance loss L(y, t). In the simplest case, a hard-threshold network with no hidden layers is a perceptron Y = g(W1X), as introduced by Rosenblatt (1958). The goal of learning a perceptron, or any hard-threshold network, is to classify unseen data. A useful first step is to be able to correctly classify the training data, which we focus on here for simplicity when developing our framework; however, standard generalization techniques such as regularization are easily incorporated into this framework and we do this for the experiments. Since a perceptron is a linear classifier, it is only able to separate a linearly-separable dataset. i=1 is linearly separable iff there exists a vector w ∈ Rn and a ', 'original_lines': 'algorithm over the straight-through estimator on CIFAR10 for two convolutional networks and on derivative of each hard-threshold unit with the identity function. The STE is used in the quantized network literature (see citations above) to propagate gradients through quantized activations, and is used in Shalev-Shwartz et al. (2017) for training with flat activations. Later work generalized the STE to replace the hard-threshold derivative with other functions, including saturated versions of the identity function (Hubara et al., 2016). However, while the STE tends to work quite well in practice, we know of no rigorous justification or analysis of why it works or how to choose replacement derivatives. Beyond being unsatisfying in this regard, the STE is not well understood and can lead to gradient mismatch errors, which compound as the number of layers increases (Lin & Talathi, 2016). We show here that the (saturated) STE is a special case of our framework, thus providing a principled justification for it and a basis for exploring and understanding alternatives. 
Finally, target propagation (LeCun, 1986; 1987; Carreira-Perpiñán & Wang, 2014; Bengio, 2014; similar to the targets. Our framework can be viewed as an instance of target propagation that uses combinatorial optimization to set discrete targets, whereas previous approaches employed continuous optimization. The MADALINE Rule II (MRII) algorithm (Winter & Widrow, 1988) can also be seen as a special case of our framework and of target propagation, where only one target is set at a time. y = f (x; W ) = g(Wℓ g(Wℓ−1 . . . g(W1x) . . . )), g(x) = sign(x), where sign is the sign function such that sign(x) = 1 if x > 0 and −1 otherwise. Each layer d has nd units, where we define n0 = n for the input layer, and we let hdj ∈ {−1, +1} for each layer d and each unit j. Similarly, we let zd = Wd g(. . . g(W1x) . . . ) In the simplest case, a hard-threshold network with no hidden layers is a perceptron Y = g(W1X), i=1 L(y(i), t(i)) for some convex per-instance loss L(y, t). Figure 1: After setting the hidden-layer targets T1 of a deep hard-threshold network, the network decomposes into independent perceptrons, which can then be learned with standard methods. as introduced by Rosenblatt (1958). The goal of learning a perceptron, or any hard-threshold network, is to classify unseen data. A useful first step is to be able to correctly classify the training data, which we focus on here for simplicity when developing our framework; however, standard generalization techniques such as regularization are easily incorporated into this framework. Since a perceptron is a linear classifier, it is only able to separate a linearly-separable dataset. i=1 is linearly separable iff there exists a vector w ∈ Rn and a ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS', 'after_section': '2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS', 'context_after': 'Auxiliary-variable-based approaches, such as ADMM (Taylor et al., 2016; Carreira-Perpiñán & Wang, 2014) and other target propagation methods (LeCun, 1986; Lee et al., 2015) use a similar process for ', 'paragraph_idx': 14, 'before_section': '2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS', 'context_before': 'functions, such as XOR, are not linearly separable and thus cannot be learned by a perceptron (Minsky & Papert, 1969). We would thus like to be able to learn multilayer hard-threshold networks. ', 'modified_lines': 'Consider a simple single-hidden-layer hard-threshold network Y = f (X; W ) = g(W2 g(W1X)) = g(W2H1) for a dataset D = (X, T2), where H1 = g(W1X) are the hidden-layer activations. An example of such a network is shown on the left side of Figure 1. Clearly, Y and H1 are both collections of (single-layer) perceptrons. Backpropagation cannot be used to train the input layer’s weights W1 because of the hard-threshold activations but, since each hidden activation h1j is the output of a perceptron, if we knew the value t1j ∈ {−1, +1} that each hidden unit should take for each input x(i), we could then use the perceptron algorithm to set the first-layer weights, W1, to produce these target values. We refer to t1j as the target of h1j. 
Given a matrix of hidden-layer targets T1 ∈ {−1, +1}n1×m, each layer (and in fact each perceptron in each layer) can be learned separately, as they no longer depend on each other, where the goal of perceptron learning is to update the weights of each layer d so that its activations Hd equal its targets Td given inputs Td−1. Figure 1 shows an example of this decomposition. We denote the targets of an (cid:96)-layer network as T = {T1, . . . , T(cid:96)}, where Tk for k = 1 . . . (cid:96) − 1 are the hidden-layer targets and T(cid:96) are the dataset targets. We often let T0 = X for notational convenience. ', 'original_lines': 'Consider a simple single-hidden-layer hard-threshold network Y = f (X; W ) = g(W2g(W1X)) = g(W1H1) for a dataset D = (X, T2), where H1 = g(W1X) are the hidden-layer activations. Clearly, Y and H1 are both collections of (single-layer) perceptrons. Backpropagation cannot be used to train the input layer’s weights W1, but since each hidden activation h1j is the output of a perceptron, then if we knew the value t1j ∈ {−1, +1} that each hidden unit should take for each input x(i), we could use the perceptron algorithm to set W1 to produce these values. We refer to t1j as the target of h1j. Given a matrix of hidden-layer targets T1 ∈ {−1, +1}n1×m, each layer (and in fact each perceptron in each layer) can be learned separately, as they no longer depend on each other, where the goal of perceptron learning is to update the weights of each layer d so that its activations Hd equal its targets Td given inputs Td−1. Figure 1 shows an example of this decomposition. We denote the targets of an (cid:96)-layer network as T = {T1, . . . , T(cid:96)}, where Tk for k = 1 . . . (cid:96) − 1 are the hidden-layer targets, T(cid:96) are the dataset targets, and we often let T0 = X for notational convenience. ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 13}, {'section': '2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS', 'after_section': '2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS', 'context_after': 'condition for f (X; W ) to separate the data is that the hidden-layer targets induce linear separability in all units in both layers of the network. We refer to this property as feasibility. Definition 2. A setting of the targets T = {T1, . . . , T(cid:96)} of an (cid:96)-layer deep hard-threshold network f (X; W ) is feasible for a dataset D = (X, T(cid:96)) iff for each unit j = 1 . . . nd in each layer d = 1 . . . (cid:96) the dataset formed by its inputs Td−1 and targets Td,j: is linearly separable, where T0 (cid:44) X. Feasibility is a much weaker condition than linear separability, since the output decision boundary of a multilayer hard-threshold network with feasible targets is in general highly nonlinear. It follows from the definition of feasibility and convergence of the perceptron algorithm that if a feasible setting of a network’s targets on a dataset exists, the network can separate the training data. Proposition 1. Let D = {(x(i), t(i))} be a dataset and let f (X; W ) be an (cid:96)-layer hard-threshold network with feasible targets T = {T1, . . . 
, T(cid:96)} in which each layer d of f was trained separately with inputs Td−1 and targets Td, where T0 (cid:44) X, then f will correctly classify each instance x(i), ', 'paragraph_idx': 16, 'before_section': '2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS', 'context_before': 'Since the final layer is a perceptron, the training instances can only be separated if the hidden-layer activations H1 are linearly separable with respect to the dataset targets T2. Thus, the hidden-layer targets T1 must be set such that they are linearly separable with respect to the dataset targets T2, ', 'modified_lines': 'since the hidden-layer targets T1 are the intended values of their activations H1. However, in order to ensure that the hidden-layer activations H1 will equal their targets T1 after training, the hidden-layer targets T1 must be able to be produced (exactly) by the first layer, which is only possible if the hidden-layer targets T1 are also linearly separable with respect to the inputs X. Thus, a sufficient [page break: Figure 1 graphic with inputs x1, x2, weights W1, W2, hidden units h1, h2, h3 with targets t11, t12, t13, and outputs y1, y2 with targets t21, t22; base64 latexit renderings of the labels X, H1, Y omitted] Published as a conference paper at ICLR 2018 ', 'original_lines': 'since the hidden-layer targets T1 are the intended values of their activations H1. However, in order to ensure that the hidden-layer activations H1 will equal their targets T1 after training, the hidden-layer targets T1 must be able to be produced (exactly) by the first layer, which is only possible if the hidden-layer targets T1 are linearly separable with respect to the inputs X. Thus, a sufficient [page break: Figure 1 graphic residue omitted] Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 16}, {'section': '3 FEASIBLE TARGET PROPAGATION', 'after_section': '3 FEASIBLE TARGET PROPAGATION', 'context_after': '// feasible Of course, modern deep networks will not always have a feasible setting of their targets for a given dataset.
For example, a convolutional layer imposes a large amount of structure on its weight matrix, ', 'paragraph_idx': 26, 'before_section': '3 FEASIBLE TARGET PROPAGATION', 'context_before': '// check if targets Td−1 are feasible ', 'modified_lines': 'return False As the name implies, FTPROP is a form of target propagation (LeCun, 1986; 1987; Lee et al., 2015) that uses discrete optimization to set discrete targets, instead of using continuous optimization to set continuous targets. FTPROP is also highly related to RDIS (Friesen & Domingos, 2015), a powerful nonconvex optimization algorithm based on satisfiability (SAT) solvers that recursively chooses and sets subsets of variables in order to decompose the underlying problem into simpler subproblems. While RDIS is applied only to continuous problems, the ideas behind RDIS can be generalized to discrete variables via the sum-product theorem (Friesen & Domingos, 2016). This suggests an interesting connection between FTPROP and SAT that we leave for future work. ', 'original_lines': '14: return False ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 26}, {'section': 'Abstract', 'after_section': None, 'context_after': '3.1 TARGET HEURISTICS When the activations of each layer are differentiable, backpropagation provides a method for telling each layer how to adjust its outputs to improve the loss. Conversely, in hard-threshold networks, target propagation provides a method for telling each layer how to adjust its outputs to improve the ∂ ∂hdj and Zd+1 is either the pre-activation or post-activation output, depending on the choice of loss. When used to update only a single target at a time, this heuristic will often set the target value that correctly results in the lowest loss. In particular, when Ld+1 is convex, its negative partial derivative ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Fortunately, it is straightforward to convert FTPROP to a mini-batch algorithm and to relax the feasibility requirements. In particular, since it is important not to overcommit to any one mini-batch, the mini-batch version of FTPROP (i) only updates the weights and targets of each layer once per ', 'modified_lines': 'mini-batch; (ii) only takes a small gradient step on each layer’s weights, instead of optimizing them fully; (iii) sets the targets of the downstream layer in parallel with updating the current layer’s weights, since the weights will not change much; and (iv) removes all checks for feasibility. We call this algorithm FTPROP-MB and present pseudocode in Algorithm 2. FTPROP-MB closely resembles backpropagation-based methods, allowing us to easily implement it with standard libraries. 5 Published as a conference paper at ICLR 2018 Algorithm 2 Train an (cid:96)-layer hard-threshold network Y = f (X; W ) on dataset D = (X, T(cid:96)) with mini-batch feasible target propagation (FTPROP-MB) using loss functions L = {Ld}(cid:96) 1: initialize weights W = {W1, . . . , W(cid:96)} randomly 2: for each minibatch (Xb, Tb) from D do 3: 4: 5: initialize targets T1, . . . , T(cid:96)−1 as the outputs of their hidden units in f (Xb; W ) // forward pass set T0 ← Xb, set T(cid:96) ← Tb, and set T ← {T0, . . . , T(cid:96)} FTPROP-MB(W, T, L, (cid:96)) d=1. 6: function FTPROP-MB(weights W , targets T , losses L, and layer index d) 7: 8: 9: ˆTd−1 ← set targets for upstream layer based on current weights Wd and loss Ld(Zd, Td) update Wd with respect to layer loss Ld(Zd, Td) if d > 1 then FTPROP-MB(W, {T0, . . . , ˆTd−1, . . . 
, T(cid:96)}, L, d − 1) // where Zd = WdTd−1 = WdHd−1 next layer’s loss. While gradients cannot propagate through hard-threshold units, the derivatives within a layer can still be computed. An effective and efficient heuristic for setting the target tdj for an activation hdj of layer d is to use the (negative) sign of the partial derivative of the next layer’s loss. Specifically, we set tdj = r(hdj), where r(hdj) (cid:44) sign (cid:18) − Ld+1(Zd+1, Td+1) (cid:19) (2) ', 'original_lines': 'mini-batch; (ii) only takes a small gradient step on each layer’s weights, instead of optimizing them fully; (iii) sets the targets of the downstream layer in parallel with updating the current layer’s weights, since the weights will not change much; and (iv) removes all checks for feasibility. We call this algorithm FTPROP-MB and present pseudocode in Algorithm 2 in Appendix A. FTPROP-MB closely resembles backpropagation-based methods, allowing us to easily implement it with standard libraries. next layer’s loss, as long as the targets are set effectively. While gradients cannot propagate through hard-threshold units, the derivatives within a layer can still be computed. An effective and efficient 5 Under review as a conference paper at ICLR 2018 (a) (b) (c) (d) Figure 2: Figures (a)-(c) show different per-layer loss functions (solid blue line) and their derivatives (dashed red line). Figure (d) shows the quantized ReLU activation (solid blue line), which is a sum of step functions, its corresponding sum of saturated-hinge-loss derivatives (dashed red line), and the soft-hinge-loss approximation to this sum that was found to work best (dotted yellow line). heuristic for setting the target tdj for an activation hdj of layer d is to use the (negative) sign of the partial derivative of the next layer’s loss. Specifically, we set tdj = r(hdj), where Ld+1(Zd+1, Td+1) r(hdj) (cid:44) sign − (2) (cid:18) (cid:19) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 TARGET HEURISTICS', 'after_section': '3.1 TARGET HEURISTICS', 'context_after': 'combining information across the batch. We leave such investigations for future work. 3.2 LAYER LOSS FUNCTIONS The hinge loss, shown in Figure 2a, is a robust version of the perceptron criterion and is thus a natural Ideally, the highest priority targets would be those with the largest effect on the output loss. The first issue can be solved by saturating (truncating) the hinge loss, thus making it less sensitive ', 'paragraph_idx': 33, 'before_section': '3.1 TARGET HEURISTICS', 'context_before': 'hdj and r(hdj) indicates a lack of confidence in the current value of hdj. A natural choice is thus to set tdj to push the pre-activation value of hdj towards 0, making hdj more likely to flip. Setting tdj = r(hdj) = +1 accomplishes this. We note that, while this heuristic performs well, there is still ', 'modified_lines': 'room for improvement, for example by extending r(·) to better handle the hdj (cid:54)= r(hdj) case or by per-layer loss function to use for finding good settings of the targets and weights, even when there are no feasible target settings. However, in preliminary experiments we found that learning tended to stall and become erratic over time when using the hinge loss for each layer. We attribute this to two separate issues. 
First, the hinge loss is sensitive to noisy data and outliers (Wu & Liu, 2007), which can cause learning to focus on instances that are unlikely to ever be classified correctly, instead of on instances near the separator. Second, since with convolutional layers and large, noisy datasets it is unlikely that a layer’s inputs are entirely linearly separable, it is important to prioritize some targets over others. ', 'original_lines': 'room for improvement, for example by extending r(·) to better handle the hdh (cid:54)= r(hdj) case or by per-layer loss function to use for finding a feasible (or nearly feasible) setting of the targets and weights. However, in preliminary experiments, we found that learning tended to stall and become erratic over time when using the hinge loss for each layer. We attribute this to two separate issues. First, the hinge loss is sensitive to noisy data and outliers (Wu & Liu, 2007), which can cause learning to focus on instances that are unlikely to ever be classified correctly, instead of on instances near the separator. Second, since with convolutional layers and large, noisy datasets it is unlikely that a layer’s inputs are entirely linearly separable, it is thus important to prioritize some targets over others. ', 'after_paragraph_idx': 33, 'before_paragraph_idx': 33}, {'section': '3.2 LAYER LOSS FUNCTIONS', 'after_section': '3.2 LAYER LOSS FUNCTIONS', 'context_after': 'While the saturated hinge loss works well, if the input zdj ever moves out of the range [−1, +1] then its derivative will become zero and the unit will no longer be trainable. To avoid this, we propose hinge, the soft hinge has slope 1 at the threshold and has a symmetric derivative; however, it also benefits from having a larger input region with non-zero derivative. Note that Bengio et al. (2013) report that using the derivative of a sigmoid as the STE performed worse than the identity function. Based on our experiments with other loss functions, including variations of the squared hinge loss ', 'paragraph_idx': 35, 'before_section': None, 'context_before': '(cid:12) (cid:12) ', 'modified_lines': 'the soft hinge loss, shown in Figure 2c, where soft hinge(z, t) = tanh(−tz) + 1. Like the saturated ', 'original_lines': '. (3) the soft hinge loss, shown in Figure 2c, where soft hinge(z, t) = tanh(−zt) + 1. Like the saturated 6 -1112Hinge Loss-1112SaturatedHinge Loss-1112SoftHinge Loss-111QuantizedReLU Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 35, 'before_paragraph_idx': None}, {'section': '3.3 RELATIONSHIP TO THE STRAIGHT-THROUGH ESTIMATOR', 'after_section': None, 'context_after': '3.4 QUANTIZED ACTIVATIONS Straight-through estimation is also commonly used to backpropagate through quantized variants of standard activations, such as the ReLU. Figure 2d shows a quantized ReLU (qReLU) with 6 k ∂x 4 EXPERIMENTS We evaluated FTPROP-MB with soft hinge per-layer losses (FTP-SH) for training deep networks with sign and 2- and 3-bit qReLU activations by comparing models trained with FTP-SH to those trained with the saturated straight-through estimators (SSTEs) described earlier (although, as discussed, these 4.1 CIFAR-10 qRELU activation, SSTE and FTP-SH perform nearly identically in the 4-layer model. Conversely, of the performance gap between 2-bit qReLU with FTP-SH and full-precision ReLU is encouraging. 
4.2 ', 'paragraph_idx': 36, 'before_section': '3.3 RELATIONSHIP TO THE STRAIGHT-THROUGH ESTIMATOR', 'context_before': 'weight updates that result from using the scaled saturated hinge loss from (3) and the target heuristic in (2) are exactly those of the saturated straight-through estimator (SSTE) defined in Hubara et al. (2016), which replaces the derivative of sign(z) with 1|z|≤1, where 1(·) is the indicator function. ', 'modified_lines': 'Other STEs correspond to different choices of per-layer loss function. For example, the original STE corresponds to the linear loss L(z, t) = −tz with the above target heuristic. This connection provides a justification for existing STE approaches, which can now each be seen as an instance of FTPROP with a particular choice of per-layer loss function and target heuristic. We believe that this will enable more principled investigations and extensions of these methods in future work. evenly-spaced quantization levels. The simplest and most popular straight-through estimator (STE) for qReLU is to use the derivative of the saturated (or clipped) ReLU ∂ sat ReLU(x) = 10<x<1, where sat ReLU(x) = min(1, max(x, 0)). However, if we instead consider the qReLU activation from the viewpoint of FTPROP, then the qReLU becomes a (normalized) sum of step functions qReLU(z) = 1 k−1 ), where step(z) = 1 if z > 0 and 0 otherwise, and is a linear transformation of sign(z). The resulting derivative of the sum of saturated hinge losses (one for each step function) is shown in red in Figure 2d, and is clearly quite different than the STE described above. In initial experiments, this performed as well as or better than the STE; however, we achieved additional performance improvements by using the softened approximation shown in yellow in Figure 2d, which is simply the derivative of a soft hinge that has been scaled and shifted to match the i=0 step(z − i (cid:80)k−1 7 -1112Hinge Loss-1112SaturatedHinge Loss-1112SoftHinge Loss-111QuantizedReLU Published as a conference paper at ICLR 2018 Table 1: The best top-1 test accuracy for each network over all epochs when trained with sign, qReLU, and full-precision baseline activations on CIFAR-10 and ImageNet. The hard-threshold activations are trained with both FTPROP-MB with per-layer soft hinge losses (FTP-SH) and the saturated straight-through estimator (SSTE). Bold numbers denote the best performing quantized activation in each experiment. Sign qReLU Baselines SSTE FTP-SH SSTE FTP-SH ReLU Sat. ReLU 4-layer convnet (CIFAR-10) 8-layer convnet (CIFAR-10) AlexNet (ImageNet) ResNet-18 (ImageNet) 80.6 84.6 46.7 49.1 81.3 84.9 47.3 47.8 85.6 88.4 59.4 60.6 85.5 89.8 60.7 64.3 86.5 91.2 61.3 69.1 87.3 91.2 61.9 66.9 qReLU domain. This is a natural choice because the derivative of a sum of a small number of soft hinge losses has a shape similar to that of the derivative of a single soft hinge loss. SSTEs can also be seen as instances of FTPROP-MB). We compared to these SSTEs because they are the standard approach in the literature and they significantly outperformed the STE in our initial exper- iments (Hubara et al. (2016) observed similar behavior). Computationally, FTPROP-MB has the same performance as straight-through estimation; however, the soft hinge loss involves computing a hyper- bolic tangent, which requires more computation than a piecewise linear function. This is the same per- formance difference seen when using sigmoid activations instead of ReLUs in soft-threshold networks. 
We also trained each model with ReLU and saturated-ReLU activations as full-precision baselines. We did not use weight quantization because our main interest is training with hard-threshold ac- tivations, and because recent work has shown that weights can be quantized with little effect on performance (Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016). We tested these training methods on the CIFAR-10 (Krizhevsky, 2009) and ImageNet (ILSVRC 2012) (Russakovsky et al., 2015) datasets. On CIFAR-10, we trained a simple 4-layer convolutional network and the 8-layer convolutional network of Zhou et al. (2016). On ImageNet, we trained AlexNet (Krizhevsky et al., 2012), the most common model in the quantization literature, and ResNet-18 (He et al., 2015a). Further experiment details are provided in Appendix A, along with learning curves for all experiments, and code is available at https://github.com/afriesen/ftprop. Test accuracies for the 4-layer and 8-layer convolutional networks on CIFAR-10 are shown in Table 1. For the 4-layer model, FTP-SH shows a consistent 0.5-1% accuracy gain over SSTE for the entire training trajectory, resulting in the 0.7% improvement shown in Table 1. However, for the 2-bit for the more complex 8-layer model, the FTP-SH accuracy is only 0.3% above SSTE for the sign activation, but for the qReLU activation FTP-SH achieves a consistent 1.4% improvement over SSTE. We posit that the decrease in performance gap for the sign activation when moving from the 4- to 8- layer model is because both methods are able to effectively train the higher-capacity model to achieve close to its best possible performance on this dataset, whereas the opposite is true for the qReLU activation; i.e., the restricted capacity of the 4-layer model limits the ability of both methods to train the more expressive qReLU effectively. If this is true, then we expect that FTP-SH will outperform SSTE for both the sign and qReLU activations on a harder dataset. Unsurprisingly, none of the low- precision methods perform as well as the baseline high-precision methods; however, the narrowness ', 'original_lines': 'Other STEs correspond to different choices of per-layer loss function. This connection provides a justification for existing STE approaches, which can now be seen as instances of FTPROP with a specific choice of per-layer loss function and target heuristic. We believe that this will enable more-principled investigations and extensions of these methods in future work. (cid:80)k−1 i=0 step(z − i evenly spaced thresholds. The simplest and most popular straight-through estimator (STE) for qReLU is to use the derivative of the saturated (or clipped) ReLU ∂ sat ReLU(x) = 10<x<1, where sat ReLU(x) = min(1, max(x, 0)). However, if we instead consider the qReLU activation from the viewpoint of FTPROP, then the qReLU becomes a (normalized) sum of step functions qReLU(z) = 1 k−1 ), where step(z) = 1 if z > 0 and 0 otherwise, and is a linear transformation of sign(z). The resulting derivative of the sum of saturated hinge losses (one for each step function) is shown in red in Figure 2d, and is clearly quite different than the STE described above. In initial experiments, this performed as well as or better than the STE; however, we achieved additional performance improvements by using the softened approximation shown in yellow in Figure 2d, which is simply the derivative of a soft hinge that has been scaled and shifted to match the qReLU domain. 
This is a natural choice because the derivative of a sum of a small number of soft hinge losses has a shape similar to that of the derivative of a single soft hinge loss. SSTEs can also be seen as instances of FTPROP-MB). We also trained each model with full-precision ReLU and saturated ReLU activations as baselines. We did not use weight quantization because our main interest is training with hard-threshold activations, and because recent work has shown that weights can be quantized with little effect on performance (Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016). We tested these training methods on the CIFAR-10 (Krizhevsky, 2009) and ImageNet (ILSVRC 2012) (Russakovsky et al., 2015) datasets. On CIFAR-10, we trained a simple 4-layer convolutional network and the 8-layer convolutional network of Zhou et al. (2016). On ImageNet, we trained AlexNet (Krizhevsky et al., 2012), the most common model in the quantization literature, and ResNet-18 (He et al., 2015a). Experiment details are provided in Appendix B, along with learning curves for all experiments. Test accuracies for the 4-layer and 8-layer convolutional network on CIFAR-10 are shown in Table 1. For the simpler 4-layer model, FTP-SH shows a consistent 0.5-1% accuracy gain over SSTE for the 7 Under review as a conference paper at ICLR 2018 Sign qReLU SSTE FTP-SH SSTE FTP-SH ReLU Saturated ReLU 4-layer convnet (CIFAR-10) 8-layer convnet (CIFAR-10) AlexNet (ImageNet) ResNet-18 (ImageNet) 80.6 84.6 46.7 49.1 81.3 84.9 47.3 47.8 85.6 88.4 59.4 60.6 85.5 89.8 60.7 64.3 86.5 91.2 61.3 69.1 87.3 91.2 61.9 66.9 Table 1: The best top-1 test accuracy for each network over all epochs when trained with sign, qReLU, and full-precision activations on CIFAR-10 or ImageNet. The hard-threshold activations are trained with FTPROP-MB with per-layer soft hinge losses (FTP-SH) and the saturated straight-through estimator (SSTE). Bold numbers denote the best performing activation in each pair. entire training trajectory, resulting in the 0.7% improvement shown in Table 1. However, for the 2-bit for the more complex 8-layer model, the FTP-SH accuracy is only 0.3% above SSTE, but for the qReLU activation FTP-SH achieves a consistent 1.4% improvement over SSTE. We posit that the decrease in performance gap for the sign activation when moving from the 4- to 8-layer model is because both methods are able to effectively train the higher-capacity model to achieve close to its best possible performance on this dataset, whereas the opposite is true for the qReLU activation; i.e., the restricted capacity of the 4-layer model limits the ability of both methods to train the more expressive qReLU effectively. If this is true, then we expect that FTP-SH will outperform SSTE for both the sign and qReLU activations on a harder dataset. Unsurprisingly, none of the low-precision methods perform as well as the high-precision methods; however, the narrowness ', 'after_paragraph_idx': None, 'before_paragraph_idx': 36}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'for the surprisingly good performance of the FTP-SH qReLU relative to full-precision ReLU, as hard-threshold activations also provide a strong regularization effect. 5 CONCLUSION ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'trajectory, and outperforms the SSTE-trained qReLU by almost 2%. Interestingly, we find that the saturated ReLU outperforms the standard ReLU by almost a full point of accuracy. 
We believe that this is due to the regularization effect caused by saturating the activation. This may also account ', 'modified_lines': 'Finally, we ran a single experiment with ResNet-18 on ImageNet, using hyperparameters from previous works that used SSTE, to check (i) whether the soft hinge loss exhibits vanishing gradient behavior due to its diminishing slope away from the origin, and (ii) to evaluate the performance of FTP-SH for a less-quantized ReLU (we used k = 5 steps, which is less than the full range of a 3-bit ReLU). While FTP-SH does slightly worse than SSTE for the sign function, we believe that this is because the hyperparameters were tuned for SSTE and not due to vanishing gradients, as we would expect much worse accuracy in that case. Results from the qReLU activation provide further evidence against vanishing gradients as FTP-SH for qReLU outperforms SSTE by almost 4% in top-1 accuracy (Table 1). ', 'original_lines': ' Figure 3: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for AlexNet with different activation functions on ImageNet. The inset figures show the test accuracy for the final 25 epochs in detail. In both figures, FTPROP-MB with soft hinge (FTP-SH, red) outperforms the saturated straight-through estimator (SSTE, blue). The left figure shows the network with sign activations. The right figure shows that the 2-bit quantized ReLU (qReLU) trained with our method (FTP-SH) performs nearly as well as the full-precision ReLU. Interestingly, saturated ReLU outperforms standard ReLU. Best viewed in color. [Figure 3 plot graphics omitted: Epoch (0–80) vs. Top-1 Accuracy curves for Sign and qReLU under FTP-SH and SSTE, plus ReLU and Saturated ReLU] Under review as a conference paper at ICLR 2018 Finally, we ran a single experiment with ResNet-18 on ImageNet, using hyperparameters set from previous works that used SSTE, to check (i) whether the soft hinge loss exhibits vanishing gradient behavior due to its diminishing slope away from the origin, and (ii) to evaluate the performance of FTP-SH for a less-quantized ReLU (we used k = 5 steps, which is less than the full range of a 3-bit ReLU). While FTP-SH does slightly worse than SSTE for the sign function, we believe that this is because the hyperparameters were tuned for SSTE and not due to vanishing gradients, as we would expect much worse accuracy in that case. Results from the qReLU activation provide further evidence against vanishing gradients as FTP-SH for qReLU outperforms SSTE by almost 4% in top-1 accuracy (Table 1). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'In future work, we plan to develop novel target heuristics and layer loss functions by investigating research clearly shows their ability to reduce computation and energy requirements, they should also be less susceptible to vanishing and exploding gradients and may be less susceptible to covariate shift and adversarial examples. REFERENCES Yoshua Bengio. How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. Estimating or Propagating Gradients Through Stochastic Miguel ´A. Carreira-Perpi˜n´an and Weiran Wang. Distributed optimization of deeply nested systems. 
', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'targets for the hard-threshold hidden units, such that each unit only has a linearly-separable problem to solve. The network then decomposes into individual perceptrons, which can be learned with standard convex approaches, given these targets. Based on this, we developed a recursive algorithm ', 'modified_lines': 'for learning deep hard-threshold networks, which we call feasible target propagation (FTPROP), and an efficient mini-batch variant (FTPROP-MB). We showed that the commonly-used but poorly-justified saturating straight-through estimator (STE) is the special case of FTPROP-MB that results from using a saturated hinge loss at each layer and our target heuristic and other types of STE correspond to other heuristic and loss combinations in FTPROP-MB. Finally, we defined the soft hinge loss and showed that FTPROP-MB with a soft hinge loss at each layer improves classification accuracy for multiple models on CIFAR-10 and ImageNet when compared to the saturating STE. connections between our framework and constraint satisfaction and satisfiability. We also intend to further explore the benefits of deep networks with hard-threshold units. In particular, while recent [Figure 3 plot graphics omitted: Epoch vs. Top-1 Accuracy curves] Published as a conference paper at ICLR 2018 ACKNOWLEDGMENTS This research was partly funded by ONR grant N00014-16-1-2697. The GPU machine used for this research was donated by NVIDIA. arXiv preprint arXiv:1407.7906 [cs.LG], 2014. Neurons for Conditional Computation. arXiv preprint arXiv:1308.3432 [cs.LG], 2013. ', 'original_lines': 'for learning deep hard-threshold networks, which we call feasible target propagation (FTPROP), and an efficient mini-batch version (FTPROP-MB). We showed that the commonly-used but poorly-justified straight-through estimator (STE) is the special case of FTPROP-MB that results from using a saturated hinge loss at each layer and our target heuristic. Finally, we defined the soft hinge loss and showed that FTPROP-MB with a soft hinge loss at each layer improves classification accuracy for multiple models on CIFAR-10 and ImageNet when compared to the STE. connections between our framework, constraint satisfaction, and satisfiability. We also intend to further explore the benefits of deep networks with hard-threshold units. In particular, while recent arXiv preprint, pp. 1–34, 2014. URL http://arxiv.org/abs/1407.7906. Neurons for Conditional Computation. arXiv preprint, pp. 1–12, 2013. URL http://arxiv.org/abs/ 1308.3432. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
|
2018-02-23 17:15:02
|
ICLR.cc/2018/Conference
|
SJnCkMZ0b
|
S1Nx_f-RZ
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'from Description and Examples Anonymous authors ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'Neural Program Search: Solving Programming Tasks ', 'original_lines': 'Neural Program Search: Solving Data Processing Tasks ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Abstract ', 'modified_lines': 'We present a Neural Program Search, an algorithm to generate programs from natural language description and a small number of input / output examples. The algorithm combines methods from Deep Learning and Program Synthesis fields by designing rich domain-specific language (DSL) and defining efficient search algorithm guided by a Seq2Tree model on it. To evaluate the quality of the ap- proach we also present a semi-synthetic dataset of descriptions with test examples and corresponding programs. We show that our algorithm significantly outper- forms sequence-to-sequence model with attention baseline. ', 'original_lines': 'We consider a problem of solving simple data processing tasks from their descrip- tion and a small number of input / output pairs, which is a required step towards automated programming. We propose an algorithm that combines deep learning and search techniques for generating programs from natural language description and examples. To evaluate its performance we also present partially synthetic dataset of descriptions and input / output pairs with corresponding programs. Our model significantly outperforms sequence-to-sequence model with attention base- line. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 Introduction', 'after_section': '1 Introduction', 'context_after': 'Program synthesis from description has not been applied widely in practice yet. One of the chal- lenges is that the natural language is very ambiguous, yet there are very strict requirements for the ', 'paragraph_idx': 4, 'before_section': '1 Introduction', 'context_before': '& Gulwani (2017)) and program synthesis from descriptions (e.g. Desai et al. (2016), Zhong et al. (2017), Lin et al. (2017), Ling et al. (2016)). ', 'modified_lines': 'Programming by example techniques such as Flash Fill (Gulwani et al. (2012)) and BlinkFill (Singh (2016)) were developed to help users perform data transformation tasks using examples instead of writing programs. These methods rely on a small domain-specific language (DSL) and then develop algorithms to efficiently search the space of programs. Two shortcomings of these approaches are that DSL limits types of programs that can be synthesized, and that large engineering effort is needed to fine-tune such systems. ', 'original_lines': 'Data processing and transformation is a key problem faced by engineers and data scientists in many domains. Programming by example techniques such as Flash Fill (Gulwani et al. (2012)) and Blink- Fill (Singh (2016)) were developed to help users perform data transformation tasks using examples instead of writing programs. These methods rely on a small domain-specific language (DSL) and then develop algorithms to efficiently search the space of programs. Two shortcomings of these ap- proaches are that DSL limits types of programs that can be synthesized, and that large engineering effort is needed to fine-tune such system. 
', 'after_paragraph_idx': 5, 'before_paragraph_idx': 3}, {'section': '3.1 Domain Specific Language', 'after_section': '3.1 Domain Specific Language', 'context_after': 'A program in the DSL comprises a set of arguments (where each argument is defined by its name and argument, function call, function, or lambda. See Figure 1 for a partial specification of the DSL. The DSL also has a library of standard functions. Each function has a return type and a constant 2 ', 'paragraph_idx': 15, 'before_section': '3.1 Domain Specific Language', 'context_before': 'ciently general. Second, designing a DSL from scratch allows to add constrains that would simplify its automated generation. ', 'modified_lines': 'Our DSL is inspired by LISP – functional language that can be easily represented as an Abstract Syntax Tree and supports high-order functions. We augmented our DSL with a type system. While types do not appear in programs, each constant, argument or function has a type. A type is either an integer, a string, a boolean, a function or an array of other non-function types. type) and a program tree where each node belongs to one of the following symbol types: constant, number of arguments, with each argument having its own type. The type system greatly reduces the number of possible combinations for each node in the program tree during search. ', 'original_lines': 'Our DSL is inspired by LISP – functional language that can be easily represented as Abstract Syntax Tree, supports high-order functions and library of functions. We augmented our DSL with type system. While types do not appear in programs, each constant, argument or function has a type. A type is either an integer, a string, a boolean, a function or an array of other non-function types. type) and a program tree where each node belongs to one of the following symbol types: constants, number of arguments, with each argument having its own type. Return types and argument types might be constants, for example a return type of operator + is an integer, and the types of its argu- ments are integers as well. However return types and argument types can also be computed based on the type of arguments for which their values are already computed, as well as the context in which the function is called. For instance consider a function reduce (array, init, func). If the function ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 14}, {'section': 'Abstract', 'after_section': None, 'context_after': '3.2 Seq2Tree ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'lambda (cid:70) lambda function call Figure 1: Partial specification of the DSL used for this work. ', 'modified_lines': '', 'original_lines': ' appears as an argument of operator +, it is expected that it will return an integer, and as such the init value must be integer. There are no constraints on the type of array except that it’s an array. If without loss of generality it was synthesized to be something that is an array of strings, the type of func is now fully known, it’s a function that takes an integer and a string as arguments and returns an integer. Such type system greatly reduces the number of possible combinations for each node in the program tree during search. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(2) where xp(i), xs(i) are the vectors representing the previous siblings and parents values, respec- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i ) (1) ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'After the node’s output symbol ˆli has been obtained by sampling from oi, xi is obtained by embed- ding ˆli using W T . Then the cell passes (hp ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'oi = so f tmax(Whi) (7) ', 'modified_lines': '', 'original_lines': ' 3 Under review as a conference paper at ICLR 2018 Figure 2: Example of Seq2Tree encoder-decoder model for ”given an array, return values divisible by two”. Left part is an encoder with embeddings+GRU cell, right is doubly-recurrent decoder with attention. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 Introduction', 'after_section': None, 'context_after': 'Search algorithm described in Algorithm 1 starts with a priority queue with a single empty program. At all times, we only keep the top QueueN most probable trees built so far in the priority queue. Algorithm 1 Tree-Beam Search 1: queue ← HeapCreate() ', 'paragraph_idx': 7, 'before_section': None, 'context_before': '3.3 Search One of the central ideas of this work is to use Tree-Beam search in the program space using a deep ', 'modified_lines': 'learning model to score symbols in each AST node. The search continues until a complete program is found that passes given sample input / output pairs. ', 'original_lines': 'learning model to score symbols in each AST node. The search continues until complete program is found that passes given sample input / output pairs. If a program on the top of the queue is complete (no more nodes need to be added), we run evaluation with given sample input / output examples. If the results from current program match expected out- puts, search is stopped. Alternatively, if over MAX VISITED programs has already been evaluated, the search stops without program found. Each program in the priority queue is represented as an incomplete tree with some nodes already synthesized and some still empty. When such incomplete tree T is popped from the queue, we locate the first empty node n in the pre-order traversal of the tree, and use Seq2Tree model to compute probabilities of each possible symbol being in that node. At that point we already know the type of the symbol the node should contain, and thus only consider symbols of that type. For each such symbol s we construct a new tree by replacing n with s. If s is a function, we consider both using that function as a leaf node (passing it as an argument) and using that function as a non-leaf node (invoking the function), in the latter case several new empty nodes would be introduced in the new tree as children of s. In most cases either the invocation or passing the function would be rejected due to its type (for instance, if the expected type is integer, only invocation of reduce would make sense, while if the expected type is function, only passing reduce as an argument would make sense), but in some cases both options would be accepted. 
We then push all the new trees, no matter how unlikely they are into the priority queue, and then remove least probable trees until the size of the priority queue is not QueueN or less. 4 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 Domain Specific Language', 'after_section': None, 'context_after': 'In our experiments evaluating the Seq2Tree model takes comparable amount of time to cloning trees and pushing them to the queue, so optimizing both steps would contribute to the performance of the search. We use the following optimization techniques: ', 'paragraph_idx': 24, 'before_section': None, 'context_before': 'an argument, and filter. The search continues until either D trees are generated, or a tree that passes all the sample tests is found. Such tree is shown on the far right. ', 'modified_lines': 'If a program on the top of the queue is complete (no more nodes need to be added), we run evaluation with given sample input / output examples. If the results from current program match expected out- puts, search is stopped. Alternatively, if over MAX VISITED programs has already been evaluated, the search stops without program found. Each program in the priority queue is represented as an incomplete tree with some nodes already synthesized and some still empty. When such incomplete tree T is popped from the queue, we locate the first empty node n in the pre-order traversal of the tree, and use Seq2Tree model to compute probabilities of each possible symbol being in that node. At that point we already know the type of the symbol the node should contain, and thus only consider symbols of that type. For each such symbol s we construct a new tree by replacing n with s. We then push all the new trees, no matter how unlikely they are into the priority queue, and then remove least probable trees until the size of the priority queue is not QueueN or less. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'of the tree, which for larger trees is significantly smaller than the total number of nodes in the tree. The tree is then represented as a pointer to the root node. Batched search. During training we need to read trees of different shapes, which is a challenging ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'is to use persistent trees. When a new tree Tnew is created from a tree T by introducing a new node s, it is sufficient to clone only the nodes on the path from root to s, and replace their corresponding children on that path with the cloned version. This takes time and memory proportional to the height ', 'modified_lines': '', 'original_lines': ' 5 Under review as a conference paper at ICLR 2018 Table 1: AlgoLisp statistics. Train 79, 708 38.72 8.12 28.31 Dev 9, 352 39.95 8.23 29.31 230 # tasks Avg text len Avg code depth Avg code len Vocab size Test 10, 940 37.58 7.97 27.16 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 AlgoLisp', 'after_section': None, 'context_after': '6 ', 'paragraph_idx': 33, 'before_section': '4 AlgoLisp', 'context_before': 'several dozen tasks from homework assignments for basic computer science and algorithms courses. For each task, we parameterized assignments (e.g. in statement ”find all even elements in an array” even could be replaced by {prime, even, odd, divisible by three, positive, negative})and matching ', 'modified_lines': 'code. 
The final dataset is then random combination of such tasks, where other tasks can be passed into the given statement as input (e.g. two statements ”find all even elements in an array” and ”sort an array” will be combined to ”find all even elements in an array and return them in sorted order”). This dataset is designed for the task of learning basic composition and learning to use simple con- cepts and routines in the DSL. Due to the fact that the number of homework assignments used for this dataset was relatively low, it is unlikely that the models trained on this dataset would generalize to new types of algorithm. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 33}, {'section': 'Abstract', 'after_section': None, 'context_after': 'To make sure that the models are learning to compose simpler concepts for novel problems, the dataset split into train, dev, and test by surface form of the code. Thus ensuring that at training time the model has not observed any programs it will be evaluated on. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '-inf max) ', 'modified_lines': '', 'original_lines': 'code. The final dataset is then random combination of such tasks, where other tasks can be passed into the given statement as input (e.g. two statements ”find all even elements in an array” and ”sort an array” will be combined to ”find all even elements in an array and return them in sorted order”). This dataset is designed for the task of learning basic composition and learning to use simple con- cepts and routines in the DSL. Due to the fact that the number of homework assignments used for this dataset was relatively low, it is unlikely that the models trained on this dataset would generalize to new types of algorithm. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'own is presented, to show result of search through program space without machine learning model guidance by only validating on input / output examples. Explicitly modeling tree structure of code in Seq2Tree improves upon attentional sequence to se- ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '61.0% 0.6% ', 'modified_lines': '', 'original_lines': '5.1 Results We compare our model with Attentional Sequence to Sequence similar to Luong et al. (2015). Se- quence to sequence models have shown near state of the art results at machine translation, question answering and semantic parsing. The Table 3 presents results on AlgoLisp dataset for Seq2Seq+Att and Seq2Tree model with and without applying search described in section 3.3. Additionally performance of the Search on its ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'y c ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Search combines both approaches into one model and improves to the best result – 90.1%. 
5.2 Analysis ', 'modified_lines': '', 'original_lines': ' 100 90 80 70 60 50 0 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 6 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '2 ', 'modified_lines': '', 'original_lines': '20 40 60 80 100 MAX VISITED Seq2Tree+Search ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '14 Program depth', 'after_section': '14 Program depth', 'context_after': 'Seq2Tree+Search Figure 4: Analysis of results on dev set. Left plot shows accuracy of the model varying ', 'paragraph_idx': 43, 'before_section': '14 Program depth', 'context_before': 'Att. Seq2Seq Search ', 'modified_lines': 'Seq2Tree+Search 20 40 60 80 100 MAX VISITED ', 'original_lines': '', 'after_paragraph_idx': 43, 'before_paragraph_idx': 43}, {'section': '5.1 Results', 'after_section': None, 'context_after': 'Depth of the program is a reasonable proxy for complexity of the problem. Right part of Fig- ure 4 shows accuracy of the models based on gold program depth. Note that there are relatively few 6 Conclusion programming patterns with conventional search technique that allows to find correct program in dis- crete space which neural models struggle with. We presented a semi-synthetic dataset to empirically evaluate learning of program composition and usage of programing constructs. Our empirical re- ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'it predicts correct symbols with high accuracy, and therefore the correct tree is more likely to be found early during the search. ', 'modified_lines': 'programs with depth below 5 in the dev set, which leads to higher variance. As expected, with the growth of the depth of the tree, the accuracy reduces, since more nodes need to be predicted. 8 Under review as a conference paper at ICLR 2018 We have presented an algorithm for program synthesis from textual specification and a sample of input / output pairs, that combines deep learning network for understanding language and general ', 'original_lines': '8 Under review as a conference paper at ICLR 2018 programs with depth below 5 in the dev set, which leads to higher variance. We have presented a model for program synthesis from textual specification and a sample of in- put / output pairs, that combines deep learning network for understanding language and general ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Sumit Gulwani. 2014. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'CA, USA, 1969. Morgan Kaufmann Publishers Inc. URL http://dl.acm.org/citation. cfm?id=1624562.1624585. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 20:32:43
|
ICLR.cc/2018/Conference
|
S1Nx_f-RZ
|
SkOkkGZ0b
|
[]
|
2018-01-25 15:40:31
|
ICLR.cc/2018/Conference
|
ryjgIUT7z
|
HyDR4S3EG
|
[{'section': '2 RELATED WORK', 'after_section': None, 'context_after': 'As discussed above, the master-slave architecture has already been studied in several multi-agent scenarios. [18] utilized the master-slave architecture to resolve conflicts between multiple soccer ', 'paragraph_idx': 15, 'before_section': '2 RELATED WORK', 'context_before': 'micromanagement tasks. [27] proposed episodic exploration strategy for deterministic policy search and [5] proposed the concept of stabilizing experience replay for MARL. ', 'modified_lines': 'Note that the above works take only one of the two perspectives and are then inherently missing out the advantages of the other. Perhaps the most related works are from [3], [4] and [12]. [23] proposed the ”CommNet”, where a broadcasting communication channel among all agents was set up to share global information realized as summation of the output from all individual agents. This design represents an initial version of the proposed master-slave framework, however it does not facilitate an independently reasoning master agent which takes in messages from all agents step by step and processes such information in an recurrent manner. In [4] and [12], a global critic was proposed, which could potentially work at a centralized level, however since critics are basically value networks, they do not provide explicit policy guidance. Therefore they tend to work more like a commentator of a game who job is to analyze and criticize the play, rather than a coach coaching the game. ', 'original_lines': 'Note that the above works take only one of the two perspectives and are then inherently missing out the advantages of the other. Perhaps the most related works are from [3], [4] and [12]. [23] proposed the ”CommNet”, where a broadcasting communication channel among all agents was set up to share global information realized as summation of the output from all individual agents. This design represents an initial version of the proposed master-slave framework, however the summed global signal is hand-crafted information and moreover, this design does not facilitate an independently reasoning master agent. In [4] and [12], a global critic was proposed, which could potentially work at a centralized level, however since critics are basically value networks, they do not provide explicit policy guidance. Therefore they tend to work more like a commentator of a game who job is to analyze and criticize the play, rather than a coach coaching the game. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '3.2 LEARNING STRATEGY', 'after_section': '3.2 LEARNING STRATEGY', 'context_after': 'Observe reward ri t (e.g. according to (2)); Accumulate rewards Ri = Ri + ri ', 'paragraph_idx': 24, 'before_section': None, 'context_before': 't, hm Feed-forward: (hi t ) = M SN et(ot, si ', 'modified_lines': 'Sample action ai t according to softmax policy or Gaussian policy; Execute action ai t; ', 'original_lines': 'Sample action at i according to softmax policy or Gaussian policy; Execute action at i; ', 'after_paragraph_idx': 24, 'before_paragraph_idx': None}, {'section': '4.2', 'after_section': None, 'context_after': '7 Under review as a conference paper at ICLR 2018 State Features For the traffic junction task and the combat task, we just apply the original state features designed in [23]. For ”15M vs. 16M”, ”10M vs. 13Z” and ”15W vs. 
17W”, we adopt ', 'paragraph_idx': 37, 'before_section': '4.2', 'context_before': 'LSTM for the master module, and RNN for the slave module. The dimension of the hidden states in RNN or LSTM (including cell states) are all set to 50 for both the master and slave agents. The GCM module in Figure 4 (c) is noted as the ”Gated Composition Module”, which is introduced in ', 'modified_lines': 'section 3.1 in detail. Note that the action output is different for discrete and continuous tasks. For the traffic junction task and the combat task, the output of the network is designed as the probability of a number of actions since the action space is discrete (actually a Softmax policy). As a contrast, for ”15M vs. 16M” ”10M vs. 13Z” and ”15W vs. 17W”, our network directly generates a continuous action following Gaussian policy as described in section 3. The Softmax/Gaussian action modules are illustrated by Figure 4 D). Figure 3: State definition of unit i in the task of {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W and 2D3Z} Figure 4: A) master module, B) slave module, C) gated composition module of slave i, D) Soft- max/Gaussian action module and E) specific model architecture. ', 'original_lines': 'section 3.1 in detail. Note that the action output is different for discrete and continuous tasks. For the traffic junction task and the combat task, the output of the network is designed as the probability of a number of actions since the action space is discrete. As a contrast, for ”15M vs. 16M” ”10M vs. 13Z” and ”15W vs. 17W”, our network directly generates a continuous action following Gaussian policy as described in section 3. Figure 3: State definition of unit i in the task of {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W} Figure 4: A) master module, B) slave module, C) gated composition module of slave i, and D) specific model architecture. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}, {'section': '4.3 PERFORMANCE', 'after_section': '4.3 PERFORMANCE', 'context_after': '9 ', 'paragraph_idx': 49, 'before_section': None, 'context_before': '4.3 PERFORMANCE Table 1 and Table 2 demonstrate the performance improvement of our method when compared with ', 'modified_lines': 'the baselines. For CommNet we directly run the released code on the traffic junction task and ', 'original_lines': 'the baselines. For CommNet we directly run the released code on the traffic junction task and the ', 'after_paragraph_idx': 50, 'before_paragraph_idx': None}, {'section': '4.1 EVALUATION ENVIRONMENTS', 'after_section': None, 'context_after': 'The results on the more challenging StarCraft micromanagement tasks {15M vs. 16M, 10M vs. 13Z, 15W vs. 17W, 2D3Z} are displayed in Table 2. Obviously, on all the tasks, our MS-MARL method ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'Figure 5: Comparing winning rates of different methods on all three tasks ', 'modified_lines': 'the combat task using hyper-parameters provided in [23]. We compute the mean winning rates in Table 1 by testing the trained models for 100 rounds. However, since the code of GMEZO and BiCNet is not released yet, there is no report of their performance on traffic junction and combat tasks. Therefore we only compare with CommNet on these two tasks. And it can be seen that our MS-MARL model performs better than CommNet on both of the two tasks. To further clarify the contribution of the proposed master-slave scheme, we add another baseline ”CommNet + Occupancy Map” which takes its original state as well as the occupancy map explicitly. 
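The learning-strategy and action-module diffs in this record sample each agent's action from a softmax policy on the discrete tasks (traffic junction, combat) and from a Gaussian policy on the continuous StarCraft tasks, then accumulate per-agent rewards Ri = Ri + ri. A minimal numpy sketch of that sampling loop follows; the logits, means, and the stand-in reward are illustrative assumptions, not the paper's interfaces:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_softmax_action(logits):
    """Discrete tasks (traffic junction, combat): draw from a softmax policy."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

def sample_gaussian_action(mean, log_std):
    """Continuous StarCraft tasks: Gaussian policy, a = mean + std * noise."""
    return mean + np.exp(log_std) * rng.standard_normal(mean.shape)

print(sample_softmax_action(np.array([1.0, 2.0, 0.5])))

# Accumulate per-agent rewards R_i = R_i + r_i over a rollout, REINFORCE-style.
n_agents, horizon = 3, 5
returns = np.zeros(n_agents)
for t in range(horizon):
    for i in range(n_agents):
        a = sample_gaussian_action(np.zeros(2), np.log(0.5) * np.ones(2))
        returns[i] += -np.linalg.norm(a)   # stand-in reward; the real one comes from the game
print(returns)
```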
As is shown, CommNet performs better with global information. However, with an explicit global planner (the ”master” agent) designed in our model, such information seems better utilized to facilitate learning of more powerful collaborative polices. ', 'original_lines': 'combat task using hyper-parameters provided in [23]. We compute the mean winning rates in Table 1 by testing the trained models for 100 rounds. However, since the code of GMEZO and BiCNet is not released yet, there is no report of their performance on traffic junction and combat tasks. Therefore we only compare with CommNet on these two tasks. And it can be seen that our MS-MARL model performs better than CommNet on both of the two tasks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.3 PERFORMANCE', 'after_section': '4.3 PERFORMANCE', 'context_after': 'Traffic Junction Combat 0.94 ± 0.02 0.44 ± 0.12 0.97 ± 0.01 0.61 ± 0.03 ', 'paragraph_idx': 50, 'before_section': '4.3 PERFORMANCE', 'context_before': 'CommNet ', 'modified_lines': ' CommNet + Occupancy Map 0.95 ± 0.01 0.49 ± 0.01 MS-MARL ', 'original_lines': 'MS-MARL ', 'after_paragraph_idx': 50, 'before_paragraph_idx': 50}, {'section': 'Abstract', 'after_section': None, 'context_after': '2Note that the mean win rates of GMEZO, CommNet and BiCNet on the StarCraft tasks are all available from [19]. (Except for the win rate of CommNet on the task ”2D3Z”, which actually comes from our own implementation) However, it is notable that the performance of GMEZO reproduced by [19] is much lower ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'MS-MARL model clearly enjoys better, faster and more stable convergence, which highlights the importance of such a design facilitating independent thinking. ', 'modified_lines': '', 'original_lines': '4.4 ABLATION ANALYSIS Analysis of Master/Slave Behaviours In the setting of the combat task, we further analyze how different components of our proposal contribute individually. Specifically, we compare the performance among the CommNet model, our ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.4 ABLATION ANALYSIS', 'after_section': '4.4 ABLATION ANALYSIS', 'context_after': '(a) Ablation of win rates ', 'paragraph_idx': 51, 'before_section': '4.4 ABLATION ANALYSIS', 'context_before': 'the same message to all agents. Further more, by providing the master agent with its unique state, our full model finally achieves a significant improvement over the CommNet model. Note here that, every information revealed from the extra occupancy map is by definition already included to each ', 'modified_lines': 'agents state as their locations. ', 'original_lines': 'agents state as their positions. 
', 'after_paragraph_idx': 51, 'before_paragraph_idx': 51}, {'section': 'Abstract', 'after_section': None, 'context_after': '4.5 ANALYSIS OF LEARNED POLICIES ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Figure 7: Analysis of learned policies: (a) a failure case that CommNet misses the targets (b) a successful case of our MS-MARL model (c) Visualizing the master’s hidden states of two versions of our MS-MARL model ', 'modified_lines': '', 'original_lines': ' (a) CommNet (b) Our Model Figure 8: Illustration of the policy learned by our MS-MARL method: (a) is a failure case of Comm- Net (b) showcases the successful ”Pincer Movement” policy learned by our MS-MARL model 12 -120-100-80-60-40-20020406080-80-60-40-20020406080 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '[5] Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Philip Torr, Pushmeet Kohli, Shimon Whiteson, et al. Stabilising experience replay for deep multi-agent reinforcement learning. arXiv preprint arXiv:1702.08887, 2017. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '[4] Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon White- son. Counterfactual multi-agent policy gradients. arXiv preprint arXiv:1705.08926, 2017. ', 'modified_lines': '', 'original_lines': '13 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. Advances in Neural Information Processing Systems, pp. 3675–3683, 2016. [9] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Recognition (CVPR), July 2017. [8] Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical ', 'modified_lines': 'In ', 'original_lines': 'In ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}]
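The architecture diffs in this record pair an LSTM master with RNN slaves and merge the master's message into each slave through a Gated Composition Module (GCM). The sketch below is a loose stand-in under stated simplifications — tanh recurrences instead of an LSTM, mean-pooled slave states as the master's input, and an assumed sigmoid gate — so every weight shape and the gate form are illustrative, not the paper's GCM:

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8                      # hidden size (the paper uses 50)
n_slaves = 3

# Hypothetical weights; scaling by 0.1 just keeps the toy recurrence stable.
W_m = rng.normal(size=(2 * H, H)) * 0.1   # master update from [pooled slave msgs, own state]
W_s = rng.normal(size=(2 * H, H)) * 0.1   # slave update from [own obs, own hidden]
W_g = rng.normal(size=(2 * H, H)) * 0.1   # gate combining master msg with each slave state

def step(obs, h_m, h_s):
    """One master-slave step: slaves report, master reasons, a gated message returns."""
    pooled = h_s.mean(axis=0)                                   # slave -> master messages
    h_m = np.tanh(np.concatenate([pooled, h_m]) @ W_m)          # master "thinks" recurrently
    h_s = np.tanh(np.concatenate([obs, h_s], axis=1) @ W_s)     # independent slave updates
    g = 1 / (1 + np.exp(-np.concatenate([np.tile(h_m, (n_slaves, 1)), h_s], axis=1) @ W_g))
    return h_m, g * h_m + (1 - g) * h_s                         # gated composition (assumed form)

h_m, h_s = np.zeros(H), np.zeros((n_slaves, H))
for _ in range(4):
    h_m, h_s = step(rng.normal(size=(n_slaves, H)), h_m, h_s)
print(h_s.shape)  # (3, 8): per-slave states that would feed the Softmax/Gaussian action heads
```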
|
2018-01-17 03:36:46
|
ICLR.cc/2018/Conference
|
HyDR4S3EG
|
BJylRMWAb
|
[]
|
2018-01-25 15:39:41
|
ICLR.cc/2018/Conference
|
Syydre-C-
|
Bye8DH0DG
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'to address the issue of efficiently training DNNs. These include heuristics such as dropouts Sri- vastava et al. (2014), but also considering alternate deep architectures such as convolutional neural networks Sermanet et al. (2014), deep belief networks Hinton et al. (2006), and deep Boltzmann ma- chines Salakhutdinov & Hinton (2009). In addition, deep architectures based on new non-saturating activation functions have been suggested to be more effectively trainable – the most successful and widely popular of these is the rectified linear unit (ReLU) activation, i.e., σ(x) = max , which 0, x } In this paper, we formally study deep neural networks with rectified linear units; we refer to these deep architectures as ReLU DNNs. Our work is inspired by these recent attempts to understand ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'age classification, and natural language processing based on deep neural nets Hinton et al. (2012); Dahl et al. (2013); Krizhevsky et al. (2012); Le (2013); Sutskever et al. (2014). While there is less of evidence now that pre-training actually helps, several other solutions have since been put forth ', 'modified_lines': ' ∗Department of Computer Science, Email: [email protected] †Department of Applied Mathematics and Statistics, Email: [email protected] ‡Department of Computer Science, Email: [email protected] §Department of Applied Mathematics and Statistics, Email: [email protected] 1 Published as a conference paper at ICLR 2018 { is the focus of study in this paper. ', 'original_lines': ' 1 Under review as a conference paper at ICLR 2018 { is the focus of study in this paper. ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': '1.1 NOTATION AND DEFINITIONS', 'after_section': '1.1 NOTATION AND DEFINITIONS', 'context_after': 'ReLU DNN, and is said to have k hidden layers. The function f : Rn1 represented by this ReLU DNN is Rwi for i = 1, . . . , k and a linear transformation Tk+1 : Rwk ', 'paragraph_idx': 7, 'before_section': '1.1 NOTATION AND DEFINITIONS', 'context_before': 'transformations Ti : Rwi−1 → Rwk+1 corresponding to weights of the hidden layers. Such a ReLU DNN is called a (k + 1)-layer ', 'modified_lines': 'Rn2 computed or ', 'original_lines': 'Rn2 computed or ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 7}, {'section': '1.1 NOTATION AND DEFINITIONS', 'after_section': None, 'context_after': 'wi i wi−1∀ wk+1 wk } Many of our important statements will be phrased in terms of the following simplex. Definition 4. Let M > 0 be any positive real number and p ', 'paragraph_idx': 11, 'before_section': '1.1 NOTATION AND DEFINITIONS', 'context_before': 'pieces of f is the number of maximal connected subsets of Rn over which f is affine linear (which is finite). ', 'modified_lines': 'T1 : Ti ∈ A Tk ◦ · · · ◦ Tk+1 ◦ 1, . . . , k F{ := ∈ { k+1 i=0 → → wi σ σ { ◦ ◦ } } ', 'original_lines': '1, . . . 
, k ∈ { → (1.2) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 10}, {'section': '2 EXACT CHARACTERIZATION OF FUNCTION CLASS REPRESENTED BY', 'after_section': '2 EXACT CHARACTERIZATION OF FUNCTION CLASS REPRESENTED BY', 'context_after': '(2.1) ', 'paragraph_idx': 14, 'before_section': None, 'context_before': '(cid:19) , ', 'modified_lines': ' (cid:96)i ', 'original_lines': '', 'after_paragraph_idx': 14, 'before_paragraph_idx': None}, {'section': '2 + |', 'after_section': '2 + |', 'context_after': 'Lq norm (which for a function f is given by | hidden layers. Moreover, for n = 1, any such Lq function can be arbitrarily at most ', 'paragraph_idx': 17, 'before_section': '2 + |', 'context_before': 'where µ is the Lebesgue measure on Rn (see Royden Royden & Fitzpatrick (2010)). Theorem 2.3. Every function in Lq(Rn), (1 ) can be arbitrarily well-approximated in the ', 'modified_lines': 'q)1/q) by a ReLU DNN function with ', 'original_lines': 'q)1/q) by a ReLU DNN function with ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': 'Abstract', 'after_section': None, 'context_after': '(ii) Telgarsky’s family of hard functions is parameterized by a single natural number k. In contrast, we show that for every pair of natural numbers w and k, and a point from the set in equation 3.1, there exists a “hard” function which to be represented by a depth k(cid:48) network ', 'paragraph_idx': 2, 'before_section': None, 'context_before': ') which is exponentially (in depth) larger than the lower bound of Ω(2k) that Telgarsky can get for this scenario. ', 'modified_lines': '', 'original_lines': 'k ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 EXACT CHARACTERIZATION OF FUNCTION CLASS REPRESENTED BY', 'after_section': None, 'context_after': 'R function that can be represented by a 3-layer DNN, that takes exponential in n number of nodes to be approximated to within some constant by a 2-layer DNN. While their results are not immediately comparable with Telgarsky’s or our results, it is an interesting open question to extend their results to a constant depth hierarchy statement analogous to the recent ', 'paragraph_idx': 13, 'before_section': None, 'context_before': 'different activation functions, whereas, our results are specifically for neural nets with rectified linear units. In this sense, Telgarsky’s results from (Telgarsky, 2016) are more general than our results in this paper, but with weaker gap guarantees. Eldan-Shamir (Shamir, 2016; Eldan & Shamir, 2016) ', 'modified_lines': 'show that there exists an Rn ', 'original_lines': 'show that there exists an Rn ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1.1 NOTATION AND DEFINITIONS', 'after_section': None, 'context_after': 'One measure of complexity of a family of Rn R “hard” functions represented by ReLU DNNs → is the asymptotics of the number of pieces as a function of dimension n, depth k + 1 and size s of functions such that for every R function representable by a ReLU DNN with Definition 5 (comp R function from H ', 'paragraph_idx': 9, 'before_section': '1.1 NOTATION AND DEFINITIONS', 'context_before': '→ ', 'modified_lines': 'of the ReLU DNNs. More precisely, suppose one has a family n, k, w → depth at most k + 1 and maximum width at most w. The following definition formalizes a notion of complexity for such a N the family contains at least one Rn H ∈ . H (n, k, w)). 
The measure comp of pieces (see Definition 3) of a Rn H with depth at most k + 1 and maximum width at most w. ', 'original_lines': 'N the family contains at least one Rn of the ReLU DNNs. More precisely, suppose one has a family n, k, s ∈ → depth at most k + 1 and size at most s. The following definition formalizes a notion of complexity for such a pieces (see Definition 3) of a Rn with depth at most k + 1 and size at most s. (n, k, s) is defied as the maximum number of that can be represented by a ReLU DNN (n, k, s)). The measure comp ', 'after_paragraph_idx': None, 'before_paragraph_idx': 8}, {'section': '2 ≥', 'after_section': None, 'context_after': 'H H . 5 (a) H 1 2 ', 'paragraph_idx': 31, 'before_section': '2 ≥', 'context_before': 'H ', 'modified_lines': '(n, k, w) is defined as the maximum number that can be represented by a ReLU DNN Similar measures have been studied in previous works Montufar et al. (2014); Pascanu et al. (2013); Raghu et al. (2016). The best known families are the ones from Theorem 4 of (Mont- ufar et al., 2014) and a mild generalization of Theorem 1.1 of (Telgarsky, 2016) to k layers (cid:1))and of ReLU activations with width w; these constructions achieve ((cid:80)n (cid:19)(k 1)n (cid:18) − (cid:0)w j j=0 ( w n ) (cid:99) (cid:98) (n, k, s) = O(wk), respectively. At the end of this section we would explain the precise comp sense in which we improve on these numbers. An analysis of this complexity measure is done using integer programming techniques in (Serra et al., 2017). Definition 6. Let b1, . . . , bm Rn. The zonotope formed by b1, . . . , bm Rn is defined as ∈ ∈ Z(b1, . . . , bm) := λ1b1 + . . . + λmbm : { 1 − λi ≤ ≤ 1, i = 1, . . . , m } Published as a conference paper at ICLR 2018 ', 'original_lines': 'H Similar measures have been studied in previous works Montufar et al. (2014); Pascanu et al. are the ones from Corollary 5 of (Mont- (2013); Raghu et al. (2016). The best known families ufar et al., 2014) and a mild generalization of Theorem 1.1 of (Telgarsky, 2016) to k layers of ReLU activations with width w; these constructions achieve comp nn(k−1) ) and comp kk ), respectively. In comparison to these, we give the first construction that, for any fixed k and s, achieves an exponential dependence on n. In particular, we construct a class (n, k, s) = Ω(sn). Moreover, for fixed n, k, s, our functions are of functions for which comp smoothly parameterized. In what follows, we first give a few structural definitions and lemmas, which will be used in constructing our class of hard functions. The main result of this section is given by Theorem 3.9. (n, k, s) = O( (s/k)kn (n, k, s) = O( sk H H H H Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 31}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The set of vertices of Z(b1, . . . , bm) will be denoted by vert(Z(b1, . . . , bm)). The support func- tion γZ(b1,...,bm) : Rn ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': '1 ', 'modified_lines': '', 'original_lines': 'Definition 6. Let b1, . . . , bm ∈ Z(b1, . . . , bm) := Rn. The zonotope formed by b1, . . . , bm λ1b1 + . . . + λmbm : { λi ≤ − ≤ 1, 1 ∈ i = 1, . . . , m . } Rn is defined as ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '4 , 1', 'after_section': None, 'context_after': '| | ≤ Rn ', 'paragraph_idx': 33, 'before_section': None, 'context_before': '1. vert(Z(b1, . . . 
, bm)) ', 'modified_lines': 'that this does not hold at equality is a 0 measure set. 1 − i=0 − i 1 (cid:80)n (cid:0)m (cid:1). The set of (b1, . . . , bm) ', 'original_lines': '(m that this DOES NOT hold at equality is a 0 measure set. 1. The set of (b1, . . . , bm) 1)n − − ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 , 1', 'after_section': None, 'context_after': '× | ', 'paragraph_idx': 32, 'before_section': None, 'context_before': 'Rn, there exists a 2-layer ReLU DNN with size 2m which ', 'modified_lines': 'vert(Z(b1, . . . , bm)) | 1 − i=0 Rn (cid:0)m ∈ − i 1 ', 'original_lines': '1. S(n, m) is the so-called “extremal vert(Z(b1, . . . , bm)) 1)n Rn − × ∈ − | ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 , w', 'after_section': None, 'context_after': '4 TRAINING 2-LAYER Rn R RELU DNNS TO GLOBAL OPTIMALITY In this section we consider the following empirical risk minimization problem. Given D data points R ReLU DNNs ', 'paragraph_idx': 49, 'before_section': '1 , w', 'context_before': '× ', 'modified_lines': 'Comparison to the results in (Montufar et al., 2014) Firstly we note that the construction in (Montufar et al., 2014) requires all the hidden layers to have width at least as big as the input dimensionality n. In contrast, we do not impose such restrictions and the network size in our construction is independent of the input dimensionality. Thus our result probes networks with bottleneck architectures whose complexity cant be seen from their result. Secondly, in terms of our complexity measure, there seem to be regimes where our bound does better. One such regime, for example, is when n log(n) ), by setting in our construction m < n. w < 2n and k Ω( n ≤ ∈ Thirdly, it is not clear to us whether the construction in (Montufar et al., 2014) gives a smoothly parameterized family of functions other than by introducing small perturbations of the construc- tion in their paper. In contrast, we have a smoothly parameterized family which is in one-to-one correspondence with a well-understood manifold like the higher-dimensional torus. → ', 'original_lines': '− ≥ wk 1 + w(k 1) and set m = wk N such that s 1. Now we recall the Lets choose w ∈ definition of comp (n, k, s) given at the beginning of the section and see that for this choice of m (n, k, s) = Ω(sn). In other words the functions in and w our hard function family attains comp ZONOTOPEn sn pieces. To the best of our knowledge, such exponential dependence on the input dimension of the number of affine pieces in a neurally representable function has not been demonstrated in previous constructions. H k,w,m have − ∼ H − → ', 'after_paragraph_idx': None, 'before_paragraph_idx': 49}, {'section': '4 TRAINING 2-LAYER Rn', 'after_section': '4 TRAINING 2-LAYER Rn', 'context_after': 'R is a convex loss function (common loss functions are the squared loss, ). 
Our main (cid:96)(y, y(cid:48)) = (y result of this section gives an algorithm to solve the above empirical risk minimization problem to ', 'paragraph_idx': 52, 'before_section': '4 TRAINING 2-LAYER Rn', 'context_before': 'y(cid:48))2, and the hinge loss function given by (cid:96)(y, y(cid:48)) = max i=1 ', 'modified_lines': 'where (cid:96) : R ', 'original_lines': 'where (cid:96) : R ', 'after_paragraph_idx': 52, 'before_paragraph_idx': 52}, {'section': '4 TRAINING 2-LAYER Rn', 'after_section': '4 TRAINING 2-LAYER Rn', 'context_after': 'xj + ˜bi ≤ ˜ai 0 · ', 'paragraph_idx': 54, 'before_section': None, 'context_before': 'end for return {˜a}, {˜b}, s corresponding to OPT’s iterate ', 'modified_lines': 'Rw. If we denote the i-th row Let T1(x) = Ax + b and T2(y) = a(cid:48) · ∈ of the matrix A by ai, and write bi, a(cid:48)i to denote the i-th coordinates of the vectors b, a(cid:48) respectively, due to homogeneity of ReLU gates, the network output can be represented as n and b, a(cid:48) y for A Rw ∈ × f (x) = w (cid:88) i=1 a(cid:48)i max 0, ai { x + bi} · = w (cid:88) i=1 si max 0, ˜ai { x + ˜bi} . · · ∈ i:j = ˜ai j=1 }\\ R and si ∈ {− 1, . . . , D + = { Rn, ˜bi ∈ where ˜ai 1, +1 } ∈ , the pair (˜ai, ˜bi) induces a partition 1 . . . , w } P { and P i ', 'original_lines': '', 'after_paragraph_idx': 54, 'before_paragraph_idx': None}, {'section': '5 DISCUSSION', 'after_section': None, 'context_after': '8 REFERENCES ', 'paragraph_idx': 58, 'before_section': None, 'context_before': '5 DISCUSSION ', 'modified_lines': 'The running time of the algorithm that we give in this work to find the exact global minima of a two layer ReLU-DNN is exponential in the input dimension n and the number of hidden nodes w. The exponential dependence on n can not be removed unless P = N P ; see Shalev-Shwartz & Ben-David (2014); Blum & Rivest (1992); DasGupta et al. (1995). However, we are not aware of any complexity results which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and/or the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good Published as a conference paper at ICLR 2018 open question for future research, in our opinion. Perhaps an even better breakthrough would be to get optimal training algorithms for DNNs with two or more hidden layers and this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap results between consecutive constant depths or between logarithmic and constant depths. ACKNOWLEDGMENTS We would like to thank Christian Tjandraatmadja for pointing out a subtle error in a previous version of the paper, which affected the complexity results for the number of linear regions in our construc- tions in Section 3.2. Anirbit would like to thank Ramprasad Saptharishi, Piyush Srivastava and Rohit Gurjar for extensive discussions on Boolean and arithmetic circuit complexity. This paper has been immensely influenced by the perspectives gained during those extremely helpful discussions. Amitabh Basu gratefully acknowledges support from the NSF grant CMMI1452820. Raman Arora was supported in part by NSF BIGDATA grant IIS-1546482. 
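The training section of this record writes a 2-layer R^n -> R ReLU DNN as f(x) = sum_i a'_i max(0, a_i . x + b_i) and minimizes a convex loss over D data points. Below is a numpy evaluation of that objective (squared loss assumed); the exact global algorithm additionally enumerates the sign patterns (P^i_+, P^i_-) and solves a convex program per pattern, which this sketch does not attempt:

```python
import numpy as np

def relu_dnn_2layer(X, A, b, a_out):
    """f(x) = sum_i a_out[i] * max(0, a_i . x + b_i): a 2-layer R^n -> R ReLU DNN."""
    return np.maximum(0.0, X @ A.T + b) @ a_out

def empirical_risk(X, y, A, b, a_out):
    """Squared-loss version of the paper's empirical risk minimization objective."""
    return np.mean((relu_dnn_2layer(X, A, b, a_out) - y) ** 2)

rng = np.random.default_rng(0)
n, w, D = 4, 5, 100                       # input dim, hidden width, data points
X, y = rng.normal(size=(D, n)), rng.normal(size=D)
A, b, a_out = rng.normal(size=(w, n)), rng.normal(size=w), rng.normal(size=w)
print(empirical_risk(X, y, A, b, a_out))
```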
', 'original_lines': 'The running time of the algorithm that we give in this work to find the exact global minima of a two layer ReLU-DNN is exponential in the input dimension n and the number of hidden nodes w. It is unlikely, due to complexity theory beliefs such as P = N P , that the exponential dependence on n can be removed, if one requires the global optimality guarantee; see the book by Shalev-Schwartz and Ben-David Shalev-Shwartz & Ben-David (2014) and Blum & Rivest (1992); DasGupta et al. (1995). However, it is not clear to us whether the globally optimal algorithms have to necessarily be exponential in the number of hidden nodes. We are not aware of any complexity results which would rule out the possibility of an algorithm which trains to global optimality in time that is polynomial in the data size and the number of hidden nodes, assuming that the input dimension is a fixed constant. Resolving this dependence on network size would be another step towards clarifying the theoretical complexity of training ReLU DNNs and is a good open question for future research, in our opinion. Perhaps an even better breakthrough would be to get optimal training algorithms for DNNs with two or more hidden layers and this seems like a substantially harder nut to crack. It would also be a significant breakthrough to get gap results between consecutive constant depths or between logarithmic and constant depths. (cid:54) Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Jiri Matousek. Lectures on discrete geometry, volume 212. Springer Science & Business Media, 2002. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '2015. Shiyu Liang and R Srikant. Why deep neural networks for function approximation? 2016. ', 'modified_lines': '', 'original_lines': ' 9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A EXPRESSING PIECEWISE LINEAR FUNCTIONS USING RELU DNNS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'arXiv:1610.01145, 2016. G¨unter M. Ziegler. Lectures on polytopes, volume 152. Springer Science & Business Media, 1995. ', 'modified_lines': '', 'original_lines': ' 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 , w', 'after_section': None, 'context_after': 'slope of the above sum is = sR for x > am bi = f (ai) + g1(ai) + . . . + gm The above corresponds to asking for the existence of a solution to the following set of simultaneous linear equations in r, t1, . . . , tm ', 'paragraph_idx': 51, 'before_section': None, 'context_before': '1. Such a decomposition of h would be valid if we can find 1 such that (1) the slope of the above sum is = sL for x < a1, (2) the values for r, t1, . . . 
, tm ', 'modified_lines': 'we have ', 'original_lines': 'we have ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'B BENEFITS OF DEPTH B.1 CONSTRUCTING A CONTINUUM OF HARD FUNCTIONS FOR R ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '≤ ', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '≥ (∆w (cid:124) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '≥ (a1, . . . , ak) ', 'modified_lines': '', 'original_lines': '(cid:91) ∈ M >0 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 BENEFITS OF DEPTH', 'after_section': '3 BENEFITS OF DEPTH', 'context_after': '∆w ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '1 − M × ', 'modified_lines': ' (cid:91) ∈ M >0 ', 'original_lines': '', 'after_paragraph_idx': 21, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Proof of Theorem 3.5. Given k ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'DNN with size wk; in fact, each hidden layer has exactly w nodes. Proof of Theorem 3.1. Follows from Theorem 3.2 and Lemma D.6. ', 'modified_lines': '', 'original_lines': ' Figure 2: Top: ha1 with a1 1 with 2 pieces in the range [0, 1]. Bottom: Ha1,a2 = ha2 3 = 6 pieces in the range [0, 1]. The dotted line in the bottom panel corresponds to the function in the top panel. It shows that for every piece of the dotted graph, there is a full copy of the graph in the middle panel. 1 with 3 pieces in the range [0, 1]. Middle: ha2 with a2 ha1 with 2 ∆1 ∆2 ∈ ∈ ◦ · ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'z 2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'y1 − ', 'modified_lines': '', 'original_lines': 'y1) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Proof of Lemma 3.7. By Theorem 3.6 part 3., γZ(b1,...,bm)(r) = suffices to observe r, b1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '≥ → ', 'modified_lines': '', 'original_lines': ' δ = swk gp(cid:107)1 ≥ − δ. ⇒ (cid:107) r, b1 + . . . + = max r, bm r, b1 r, b1 , |(cid:104) (cid:105)| |(cid:104) (cid:105)| {(cid:104) (cid:105) −(cid:104) (cid:105)} + . . . + max r, bm {(cid:104) , (cid:105) r, bm −(cid:104) . (cid:105)} ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 TRAINING 2-LAYER Rn', 'after_section': None, 'context_after': 'ai) a(cid:48)i| | ', 'paragraph_idx': 54, 'before_section': None, 'context_before': '| { ', 'modified_lines': 'a(cid:48)i max { x + i=1 i=1 = · · 0, ai f (x) = w (cid:88) i=1 si max 0, ˜ai { x + ˜bi} · ', 'original_lines': 'x + ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(C.1) i · ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '. 
bi} ', 'modified_lines': '', 'original_lines': '· In other words, the family of functions over which we are searching is of the form f (x) = w (cid:88) i=1 si max 0, ˜ai { x + ˜bi} · 13 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1.1 NOTATION AND DEFINITIONS', 'after_section': None, 'context_after': 'xj + ˜bi ≤ xj + ˜bi ≥ w constraints. Subject to which are imposed for all i = 1, . . . , w. Thus, we have a total of D · ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'R for w decision variables). The feasible region of the ', 'modified_lines': '· ', 'original_lines': 'Rn and a real number δ such that P i − ), i = 1, . . . , w. So, for each i = 1, . . . , w, P i + and P i − xj + δ ≤ ), i = 1, . . . , w and a vector s in 1, . . . , D { = , P i } j : c { P i } \\ j : ˜ai xj + δ > 0 + = 1 } +, P i − − ∈ P i − + ∪ j : c { w. − = ∈ { { · · · ˜ai ˜ai · · 0 0 j ∀ j ∀ ∈ ∈ P i − P i + (C.2) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'More formally, we parametrize piecewise linear functions with w pieces by the w slope-intercept values (a1, b1), . . . , (a2, b2), . . . , (aw, bw) of the w different pieces. This means that between break- points j and j + 1, 1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '− ', 'modified_lines': '', 'original_lines': '14 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 BENEFITS OF DEPTH', 'after_section': None, 'context_after': '≤ w ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '≤ last pieces are a1x + b1 and awx + bw, respectively. ', 'modified_lines': ' − ', 'original_lines': '− ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ≥', 'after_section': None, 'context_after': 'Rm is represented by a n, m ReLU DNN with depth k + 1 and size s2, then f1 + f2 can be represented by a n, m ReLU DNN with depth k + 1 and size s1 + s2. → Proof. We simply put the two ReLU DNNs in parallel and combine the appropriate coordinates of the outputs. Lemma D.3. [Taking maximums/minimums] Let f1, . . . , fm : Rn be represented by Rn ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'Rm is represented by a n, m ReLU DNN with Lemma D.2. [Function Addition] If f1 : Rn → ', 'modified_lines': 'depth k + 1 and size s1, and f2 : Rn ', 'original_lines': 'depth k + 1 and size s1, and f2 : Rn 15 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1.1 NOTATION AND DEFINITIONS', 'after_section': None, 'context_after': '2 (cid:99)} < m when m +1, . . . , fm} ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'f1, . . . , f m ', 'modified_lines': '{ ', 'original_lines': '{ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 ≥', 'after_section': None, 'context_after': '≥ max log( (cid:98) ', 'paragraph_idx': 31, 'before_section': None, 'context_before': 'm , 2 (cid:99) ', 'modified_lines': '(cid:98) ', 'original_lines': '(cid:98) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Proof. We prove this by induction on k. The base case is k = 1, i.e, we have a 2-layer ReLU DNN. 
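Lemma D.3 in this record ("Taking maximums/minimums") rests on the identity max(x, y) = (x + y)/2 + |x - y|/2 with |z| = max(0, z) + max(0, -z), i.e. a two-ReLU circuit; the garbled `Inputx1Inputx2x1+x22+|x1−x2|2...` run nearby is the figure residue of exactly this gadget. A quick check:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_max(x, y):
    """max(x, y) = (x + y)/2 + |x - y|/2, with |z| = relu(z) + relu(-z)."""
    return 0.5 * (x + y) + 0.5 * (relu(x - y) + relu(y - x))

x, y = np.random.default_rng(0).normal(size=(2, 1000))
assert np.allclose(relu_max(x, y), np.maximum(x, y))
# Lemma D.3 stacks this gadget ~log2(m) layers deep to take the max of m functions.
print("identity verified on 1000 random pairs")
```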
Since every activation node can produce at most one breakpoint in the piecewise linear function, we ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '∈ } ', 'modified_lines': '', 'original_lines': ' 16 Inputx1Inputx2x1+x22+|x1−x2|211-1-1-111-112−121212-0.200.20.40.60.811.200.20.40.60.81-0.200.20.40.60.811.2-0.4-0.200.20.40.6 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-24 02:00:39
|
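The "benefits of depth" constructions in the record above (h_{a1}, then H_{a1,a2} = h_{a2} composed with h_{a1}, and so on) multiply the number of linear pieces with each composed layer. The classic tent map shows the mechanism — k compositions yield 2^k pieces — in a few lines:

```python
import numpy as np

def tent(x):
    """h(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1]: two linear pieces, ReLU-expressible."""
    return 2.0 * np.minimum(x, 1.0 - x)

def compose_tent(x, k):
    for _ in range(k):
        x = tent(x)
    return x

x = np.linspace(0.0, 1.0, 10001)
for k in (1, 2, 3, 4):
    slopes = np.diff(compose_tent(x, k)) / np.diff(x)
    pieces = 1 + np.count_nonzero(np.abs(np.diff(slopes)) > 1e-6)
    print(f"depth {k}: {pieces} linear pieces")   # prints 2, 4, 8, 16
```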
ICLR.cc/2018/Conference
|
Bye8DH0DG
|
Hkm9wB0Pz
|
[{'section': 'Abstract', 'after_section': None, 'context_after': 'One can do better in terms of size when the rightmost piece of the given function is flat, i.e., sR = 0. In this case r = 0, which means that f = 0; thus, the decomposition of h above is of size p ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'of p − Lemma D.6. ', 'modified_lines': '', 'original_lines': ' − ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-24 02:01:46
|
ICLR.cc/2018/Conference
|
Hkm9wB0Pz
|
HJpg_BADM
|
[{'section': '1 (cid:80)n', 'after_section': None, 'context_after': '(cid:105)| Rn such that ', 'paragraph_idx': 38, 'before_section': None, 'context_before': '. (cid:105)| ', 'modified_lines': '|(cid:104) ', 'original_lines': ' |(cid:104) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 BENEFITS OF DEPTH', 'after_section': None, 'context_after': '≤ w ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '≤ last pieces are a1x + b1 and awx + bw, respectively. ', 'modified_lines': ' − ', 'original_lines': '− ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-02-24 02:03:32
|
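The zonotope records above use the support function gamma_Z(r) = sum_i |<r, b_i>| (the angle brackets appear as `(cid:104)`/`(cid:105)` in the extracted text), which a 2-layer ReLU DNN with 2m units computes exactly since |t| = max(0, t) + max(0, -t). A numpy check against brute-force enumeration of the 2^m sign patterns:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, m = 3, 7
B = rng.normal(size=(m, n))          # generators b_1, ..., b_m of Z(b_1, ..., b_m)
r = rng.normal(size=n)

def support_relu(r):
    """gamma_Z(r) = sum_i |<r, b_i>| via 2m ReLU units: |t| = relu(t) + relu(-t)."""
    t = B @ r
    return np.maximum(0, t).sum() + np.maximum(0, -t).sum()

# Brute force over the 2^m sign patterns lambda in {-1, +1}^m defining candidate vertices.
brute = max(np.dot(lam, B @ r) for lam in product((-1.0, 1.0), repeat=m))
assert np.isclose(support_relu(r), brute)
print(support_relu(r))
```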
ICLR.cc/2018/Conference
|
HygIVefff
|
rJJHB0KfG
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'Recent research has shown that one can train a neural network with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient ', 'modified_lines': 'descent. However, there is a dearth of work to explain why one can effectively cap- ture the features in data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the method of Courbariaux, Hubara et al. (2016) work because of the high-dimensional geom- etry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Fur- thermore, the results and analysis used on BNNs are shown to generalize to neural networks with ternary weights and activations. Compared to previous research that demonstrated good classification performance with BNNs, our work explains why these BNNs work in terms of HD geometry. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks. ', 'original_lines': 'descent. However, there is a dearth of work to explain why one can effectively capture the features in data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using (2016) work because of the high- the method of Courbariaux, Hubara et al. dimensional geometry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well- approximated by binary vectors in the sense that dot products are approximately preserved. Compared to previous research that demonstrated good classification performance for such BNNs, our work explains why these BNNs work in terms of the HD geometry. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '1 Under review as a conference paper at ICLR 2018 network is composed of binary convolution transformers (dashed green box). Each oval corresponds to a tensor and the derivative of the cost with respect to that tensor. Rectangles correspond to transformers that specify forward and backward propagation functions. Associated with each binary weight, wb, is a continuous weight, wc, that is used to accumulate gradients. k denotes the kth The forward function simply binarizes the inputs. 
In the backward propagation step, one normally computes the derivative of the cost with respect to the input of a transformer via the Jacobian of the forward function and the derivative of the cost with respect to the output of that transformer ', 'paragraph_idx': 7, 'before_section': None, 'context_before': 'has sought to understand weight matrices as extracting out continuous features in data (e.g. Zeiler & Fergus (2014)). Summary of contributions: ', 'modified_lines': 'Figure 1: Review of the Courbariaux et al. (2016) BNN Training Algorithm: (a) A binary neural layer of the network. (b) Each binarize transformer has a forward function and a backward function. ', 'original_lines': '1. Angle Preservation Property: We demonstrate that binarization approximately preserves the direction of high dimensional vectors. In particular, we show that the angle between a Figure 1: Review of the Courbariaux et al. (2016) BNN Training Algorithm: a. A binary neural layer of the network. b. Each binarize transformer has a forward function and a backward function. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'random vector (from a standard normal distribution) and its binarized version converges to arccos (cid:112)2/π ≈ 37◦ as the dimension of the vector goes to infinity. This angle is an exceedingly small angle in high dimensions. Furthermore, we show that this property is ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'non-differentiable, the straight-through estimator (Bengio et al. (2013)), which is a smoothed version of the forward function, is used for the backward function . ', 'modified_lines': '1. Angle Preservation Property: We demonstrate that binarization approximately preserves the direction of high-dimensional vectors. In particular, we show that the angle between a ', 'original_lines': '', 'after_paragraph_idx': 4, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'operations. Furthermore, we illustrate how a GBT (rotate, binarize, rotate back) is useful for embedding low dimensional data in a high-dimensional binary space. 2 RELATED WORK ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'in the rest of the network because correlations in the data result in high variance principal components that are not randomly oriented relative to the binarization. Thus we recommend an architecture that uses a continuous convolution for the first layer to embed the image ', 'modified_lines': 'in a high-dimensional binary space, after which it can be manipulated with cheap binary 4. Generalization to Ternary Neural Networks: We show that the same analysis applies to ternary neural networks. In particular, the angle between a random vector from a standard normal distribution and the ternarized version of that vector predicts the empirical distri- bution of such angles in a network trained on CIFAR10. Furthermore, the dot product proportionality property is shown to hold for ternary neural networks. 
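The forward/backward split described in this record — binarize on the forward pass, straight-through estimator (the derivative of a smoothed stand-in) on the backward pass — fits in a few lines; the hard-tanh window used here is the usual choice for this estimator, but treat the details as an assumption:

```python
import numpy as np

def binarize_forward(x):
    """Forward function of the binarize transformer: theta(x) = sign(x) in {-1, +1}."""
    return np.where(x >= 0, 1.0, -1.0)

def binarize_backward(x, grad_out):
    """Straight-through estimator: pass gradients only where |x| <= 1 (hard-tanh slope)."""
    return grad_out * (np.abs(x) <= 1.0)

x = np.array([-2.0, -0.5, 0.3, 1.7])
print(binarize_forward(x))                                   # [-1. -1.  1.  1.]
print(binarize_backward(x, grad_out=np.ones_like(x)))        # gradient blocked outside [-1, 1]
```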
2 BinarizeF-prop: y = ff(x) = 𝜽(x)Backprop: δx= δy· d/dx[fb(x)]xyBinarizeBinarizeWckWbkAkMatMulor Conv+MPBatch NormAk+1ab Under review as a conference paper at ICLR 2018 ', 'original_lines': 'in a high dimensional binary space, after which it can be manipulated with cheap binary ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 4}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'to additionally include a weight sharing quantization step and Huffman coding of the weights. More low precision floating point numbers or fixed point numbers, which allow for cheaper multiplications (Courbariaux et al. (2014), Gupta et al. (2015), Judd et al. (2015), Gysel et al. (2016), Lin et al. (2016), Lai et al. (2017)). ', 'paragraph_idx': 6, 'before_section': '2 RELATED WORK', 'context_before': 'The first approach is to try and compress a pre-trained network. Kim et al. (2015) uses a Tucker decomposition of the kernel tensor and fine tunes the network afterwards. Han et al. (2015b) train a network, prune low magnitude connections, and retrain. Han et al. (2015a) extend their previous work ', 'modified_lines': 'recently, Han et al. (2017) train a dense network, sparsify it, and then retrain a dense network with the pruned weights initialized to zero. Second, researchers have sought to train networks using either ', 'original_lines': ' 2 BinarizeF-prop: y = ff(x) = 𝜽(x)Backprop: δx= δy· d/dx[fb(x)]xyBinarizeBinarizeWckWbkAkMatMulor Conv+MPBatch NormAk+1ab Under review as a conference paper at ICLR 2018 recently, Han et al. (2017) train dense network, sparsify it, and then retrain a dense network with the pruned weights initialized to zero. Second, researchers have sought to train networks using either ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '3 THEORY AND EXPERIMENTS', 'after_section': None, 'context_after': '3.1 PRESERVATION OF DIRECTION DURING BINARIZATION In this section, we analyze the angle distributions (i.e. geometry) of high-dimensional binary vectors. This is crucial for understanding binary neural networks because we can imagine that at each layer of a neural network, there are some ideal continuous weight vectors that extract out features. A binary binarization strongly impacts the direction of a vector. However, we argue that binarization does not the geometric properties of high-dimensional vectors are counter-intuitive. For instance, one key idea in the hyperdimensional computing theory of Kanerva (2009) is that two random, high-dimensional vectors of dimension d whose entries are chosen uniformly from the set {−1, 1} are approximately orthogonal. The result follows from the central limit theorem because the cosine angle between two d. Then cos θ ≈ 0 implies that θ ≈ π 2 . Building upon this work, we study the way in which binary vectors are distributed relative to continuous vectors. As binarizing a continuous vector gives the binary vector closest in angle to that continuous vector, we can get a sense of how binary vectors are distributed relative to continuous vectors in high dimensions by binarizing continuous vectors. The standard normal distribution, which serves as an informative null distribution because it is rotationally invariant, is used to generate random continuous vectors which are then binarized. This analysis gives a fundamental insight into understanding the recent success of binary neural networks. 
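The record's central claim — the angle between a standard normal vector and its binarized version concentrates at arccos sqrt(2/pi) ≈ 37 degrees as d grows, while two random vectors become nearly orthogonal — is easy to verify by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
print(f"theory: {np.degrees(np.arccos(np.sqrt(2 / np.pi))):.1f} deg")
for d in (4, 64, 1024, 16384):
    x = rng.standard_normal((200, d))
    xb = np.sign(x)
    # cos(theta) = x . sign(x) / (|x| * sqrt(d)), since |sign(x)| = sqrt(d)
    cos = (x * xb).sum(axis=1) / (np.linalg.norm(x, axis=1) * np.sqrt(d))
    ang = np.degrees(np.arccos(cos))
    print(f"d={d:6d}: mean angle {ang.mean():5.1f} deg, std {ang.std():4.2f}")
```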
Binarizing a random continuous vector changes its direction by a small amount relative to the angle between two random vectors in moderately high dimensions (Fig. 2a). Binarization changes the direction In order to test our theory of the binarization of random vectors chosen from a rotationally invariant method) and study the weight vectors1 of that network. Remarkably, there is a close correspondence between the experimental results and the theory for the angles between the binary and continuous weights (Fig. 2b). For each layer, the distribution of the angles between the binary and continuous weights is sharply peaked near the d → ∞ expectation of arccos (cid:112)2/π. We note that there is a small but systematic deviation from the theory towards larger angles for the higher layers of the network 3.2 DOT PRODUCT PROPORTIONALITY AS A SUFFICIENT CONDITION FOR APPROXIMATING A ', 'paragraph_idx': 10, 'before_section': '3 THEORY AND EXPERIMENTS', 'context_before': 'Courbariaux et al. (2016)). Experiments on MNIST were carried out using both fully connected and convolutional networks and produced similar results. The CIFAR-10 convolutional neural network has six layers of convolutions, all of which have a 3 by 3 spatial kernel. The number of feature maps ', 'modified_lines': 'in each layer are 128, 128, 256, 256, 512, 512. After the second, fourth, and sixth convolutions, there is a 2 by 2 max pooling operation. Then there are two fully connected layers with 1024 units each. Each layer has a batch norm layer in between. The experiments using ternary neural networks use the same network architecture. The dimensionality of the weight vectors in these networks (i.e. convolution converted to a matrix multiply) is the patch size (= 3 ∗ 3 = 9) times the number of channels. 3 Under review as a conference paper at ICLR 2018 √ neural network approximates these ideal continuous vectors with a binary vectors. In low dimensions, substantially change the direction of a high-dimensional continuous vector. It is often the case that such random vectors is normally distributed with µ = 0 and σ ∼ 1/ of a vector by approximately 37◦ in high dimensions. This seems like a large change based on our low-dimensional intuition. Indeed, the angle between two randomly chosen vectors from a rotationally invariant distribution is uniform in two dimensions. However, two randomly chosen vectors are approximately orthogonal in high dimensions. Thus while it is common for two random vectors to have an angle less than 37◦ in low dimensions, it is exceedingly rare in high dimensions. Therefore 37◦ is a small angle in high dimensions. distribution, we train a multilayer binary CNN on CIFAR10 (using the Courbariaux et al. (2016) (Fig. 6). Ternary neural networks are considered in (SI Sec. 5.5) and yield a similar result. ', 'original_lines': 'in each layer are 128, 128, 256, 256, 512, 512. After the second, fourth, and sixth convolutions, there is a 2 by 2 max pooling operation. Then there are two fully connected layers with 1024 units each. Each layer has a batch norm layer in between. The dimensionality of the weight vectors in these networks (i.e. convolution converted to a matrix multiply) is the patch size (= 3 ∗ 3 = 9) times the number of channels. neural network approximates these ideal continuous vectors with a binary vectors. In low-dimensions, substantially change the direction of a high dimensional continuous vector. 
It is often the case that such random vectors is normally distributed with µ = 0 and σ ∼ 1/ √ 3 Under review as a conference paper at ICLR 2018 Figure 2: Binarization of Random Vectors Approximately Preserves their Direction: (a) Distribution of angles between two random vectors (blue), and between a vector and its binarized version (red), for a rotationally invariant distribution of dimension d. The red distribution is peaked near the d → ∞ limit of arccos (cid:112)2/π ≈ 37◦ (SI, Sec. 1). While 37◦ may seem like a large angle, that angle is small as compared to the angle between two random vectors in moderately high dimensions (i.e. the blue and red curves are well-separated). (b) Angle distribution between continuous and binary weight vectors by layer for a binary CNN trained on CIFAR-10. For the higher layers, there is a close correspondence to the theory. There is a small, but systematic deviation towards large angles (SI, Fig. 6). d is the dimension of the filters at each layer. (c) Standard deviations of the angle distributions from (b) by layer. We see a correspondence to the theoretical expectation that standard deviations of each of the angle distributions scales as d−0.5 (SI, Sec. 1). (d) Histogram of the components of the continuous weights at each layer. The distribution is approximately Gaussian for all but the first layer. Furthermore, there is a high density of weights near zero, which is the threshold for the binarization function. of a vector by approximately 37◦ in high dimensions. This seems like a large change based on our low-dimensional intution. Indeed, the angle between two randomly chosen vectors from a rotationally invariant distribution is uniform in two dimensions. Howevever, two randomly chosen vectors are approximately orthogonal in high dimensions. Thus while it is common for two random vectors to have an angle less than 37◦ in low dimensions, it is exceedingly rare in high dimensions. Therefore 37◦ is a small angle in high dimensions. distibution, we train a multilayer binary CNN on CIFAR10 (using the Courbariaux et al. (2016) (Fig. 6). 1If each convolution is written as the matrix multiplication W x where x is a column vector, then the weight vectors are the rows of W . 4 abcd Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 10}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'proportional then straight-through estimator gradient is proportional to the continuous weight network gradient. The key to the analysis is to focus on the transformers in the network whose forward and backward propagation functions are not related in the way that they would normally be related in typical gradient descent. ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'establishes the fundamental point that while the weights and activations are technically binary, they are operating as if the weights are continuous. For instance, one could imagine using an exhaustive search over all binary weights in the network. However, the additional structure in the problem ', 'modified_lines': 'associated with taking dot products makes the optimization simpler than that. Furthermore, we show that if the dot products of the activations with the pre-binarized and post-binarized weights are 1If each convolution is written as the matrix multiplication W x where x is a column vector, then the weight vectors are the rows of W . 
4 Under review as a conference paper at ICLR 2018 Figure 2: Binarization of High-Dimensional Vectors Approximately Preserves their Direction in Theory and Practice: (a) Distribution of angles between two random vectors (blue), and between a vector and its binarized version (red), for a rotationally invariant distribution of dimension d. The red distribution is peaked near the d → ∞ limit of arccos (cid:112)2/π ≈ 37◦ (SI, Sec. 1). While 37◦ may seem like a large angle, that angle is small as compared to the angle between two random vectors in moderately high dimensions (i.e. the blue and red curves are well-separated). (b) Angle distribution between continuous and binary weight vectors by layer for a binary CNN trained on CIFAR-10. For the higher layers, there is a close correspondence to the theory. There is a small, but systematic deviation towards large angles (SI, Fig. 6). d is the dimension of the filters at each layer. (c) Standard deviations of the angle distributions from (b) by layer. We see a correspondence to the theoretical expectation that standard deviations of each of the angle distributions scales as d−0.5 (SI, Sec. 1). (d) Histogram of the components of the continuous weights at each layer. The distribution is approximately Gaussian for all but the first layer. Furthermore, there is a high density of weights near zero, which is the threshold for the binarization function. ', 'original_lines': 'associated with taking dot products makes the optimization simpler than that. Furthermore, we show that if the the dot products of the activations with the pre-binarized and post-binarized weights are ', 'after_paragraph_idx': 4, 'before_paragraph_idx': None}, {'section': '3.2 DOT PRODUCT PROPORTIONALITY AS A SUFFICIENT CONDITION FOR APPROXIMATING A', 'after_section': '3.2 DOT PRODUCT PROPORTIONALITY AS A SUFFICIENT CONDITION FOR APPROXIMATING A', 'context_after': 'function of the weight-activation dot products. Then L(cid:48)(x) = M (cid:48)(a · x) (cid:12) a where (cid:12) denotes a pointwise multiply. Thus the sufficient condition is M (cid:48)(a · wb) ∼ M (cid:48)(a · wc). Since the dot products are followed by a batch normalization, M (k(cid:126)x) = M ((cid:126)x) → M (cid:48)((cid:126)x) = kM (cid:48)(k(cid:126)x). Therefore, it is ', 'paragraph_idx': 18, 'before_section': None, 'context_before': 'weight, wc, f (u) is the pointwise binarize function, g(u) is the identity function2, and L is the loss of the network as a function of the weights in a particular layer. Given the network architecture, L(x) = M (a · x) where a are the activations corresponding to that layer and M is the loss as a ', 'modified_lines': ' 2For the weights, g as in Fig. 1 is the identity function because the wc’s are clipped to be in the range [−1, 1]. 5 abcd Under review as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': 19, 'before_paragraph_idx': None}, {'section': '3.2 DOT PRODUCT PROPORTIONALITY AS A SUFFICIENT CONDITION FOR APPROXIMATING A', 'after_section': None, 'context_after': 'products between the binarized weights and the activations (horizontal axis) and the dot products between the continuous weights and the activations (vertical axis) for different layers of a network trained on CIFAR10. 
Surprisingly, the dot products are highly correlated (r is the Pearson correlation ', 'paragraph_idx': 13, 'before_section': None, 'context_before': 'have by training a network with continuous weights using a mixture of empirical and theoretical arguments, the ideal result would be that the learning algorithm implies the DPP property. It should be noted that in the case of stochastic binarization where E(wb) = wc is chosen by definition, the ', 'modified_lines': 'DPP property is true by design. However, it is remarkable that the property still holds in the case of deterministic binarization, which is revealing of the fundamental nature of the representations used in neural networks. While the main focus of this section is the binarization of the weights, the arguments presented can also be applied to the binarize block that corresponds to the non-linearity of the network. The analogue of the DPP property for this binarize block is: wb · ac ∼ wb · ab where ac denotes the pre-binarized (post-batch norm) activations and ab = a denotes the binarized activations. This property is empirically verified to hold. For the sake of completeness, the dot product histogram corresponding to wc · ac ∼ wb · ab is also computed, although it doesn’t directly correspond to removing one instance of a binarize transformer. This property is also empirically verified to hold (SI, Fig. 5). Impact on Classification: It is natural to ask to what extent the classification performance depends on the binarization of the weights. In experiments on CIFAR10, if the binarization of the weights on all of the convolutional layers is removed, the classification performance drops by only 3 percent relative to the original network. Looking at each layer individually, removing the weight binarization for the first layer accounts for this entire percentage, and removing the binarization of the weights for each other layer causes no degradation in performance. This result is evident by looking at the 2D dot product histograms in Fig 3. The off-diagonal quadrants show where switching the weights from binary to continuous changes the sign of the binarized weight-activation dot product. In all of the layers except the first layer, there are very few dot products in the off-diagonal quadrants. Thus we recommend the use of the dot product histograms for studying the performance of binary neural networks. Removing the binarization of the activations has a substantial impact on the classification performance because that removes the main non-linearity of the network. 3.3 INPUT CORRELATIONS AND THE GENERALIZED BINARIZATION TRANSFORMATION Not surprisingly, some distributions are impacted more strongly by binarization than others. A binary neural network must adapt its internal representations in such a way to not be degraded too much by binarization at each layer. In this section we explore the idea that the principal components of the input to the binarization function should be randomly oriented relative to the binarization. While the network can adapt the higher level representations to satisfy this property, the part of the network that interfaces with the input doesn’t have that flexibility. We make the novel observation that the difficulties in training the first layer of the network are tied to the intrinsic correlations in the input data. 
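The dot-product proportionality (DPP) property discussed here — a . w_b tracking a . w_c up to scale — already holds for random weights in high dimensions, where the Pearson correlation is sqrt(2/pi) ≈ 0.80; the trained layers in this record report r up to about 0.98. A quick check with random weights and binary activations (both illustrative stand-ins for a BNN layer):

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 512, 2000
wc = rng.standard_normal((trials, d))            # continuous weights
a = np.sign(rng.standard_normal((trials, d)))    # binary activations
dot_c = (a * wc).sum(axis=1)
dot_b = (a * np.sign(wc)).sum(axis=1)            # binarized weights
r = np.corrcoef(dot_b, dot_c)[0, 1]
print(f"Pearson r between a.wb and a.wc: {r:.3f}")   # about sqrt(2/pi) ~ 0.80
```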
In order to be more precise, we define the Generalized Binarization Transformation (GBT) θ_R(x) = R^T θ(Rx) where x is a column vector, R is a fixed rotation matrix, and θ is the pointwise binarization function from before. The rows of R are called the axes of binarization. If R is the identity matrix, then θ_R = θ and the axes of binarization are the canonical basis vectors (..., 0, 1, 0, ...). R can either be chosen strategically or randomly. The GBT changes the distribution being binarized through a rotation. For appropriate choices of the rotation, R, the directions of the input vectors, x, are changed insignificantly by binarization. The angle between a vector and its binarized version is dependent on the dot product: x · θ_R(x), which is equal to x^T θ_R(x) = (Rx)^T θ(Rx) = y · θ(y) where y = Rx. As a concrete example of the benefits 6 Under review as a conference paper at ICLR 2018 Figure 3: Binarization Preserves Dot Products: Each subplot shows a 2D histogram of the dot ', 'original_lines': ' 2For the weights, g as in Fig. 1 is the identity function because the wc’s are clipped to be in the range [−1, 1]. 5 Under review as a conference paper at ICLR 2018 Figure 3: Binarization Preserves Dot Products: Each subplot shows a 2d histogram of the dot ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'of the GBT, consider the case where x ∼ N (0, Σ) and Σ_{i,j} = δ_{i,j} exp(2ki) for k = 0.1 (therefore y ∼ N (0, R Σ R^T)). As the dimension goes to infinity, the angle between a vector drawn from this distribution and its binarized version approaches π/2. Thus binarization is destructive to vectors from ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'figure (labeled as Layer 1) corresponds to the input and the first convolution. Note that the correlation is weaker in the first layer. ', 'modified_lines': '', 'original_lines': 'DPP property is true by design. However, it is remarkable that the property still holds in the case of deterministic binarization, which is revealing of the fundamental nature of the representations used in neural networks. While the main focus of this section is the binarization of the weights, the arguments presented can also be applied to the binarize block that corresponds to the non-linearity of the network. The analogue of the DPP property for this binarize block is: wb · ac ∼ wb · ab where ac denotes the pre-binarized (post-batch norm) activations and ab = a denotes the binarized activations. This property is empirically verified to hold. For the sake of completeness, the dot product histogram corresponding to wc · ac ∼ wb · ab is also computed, although it doesn’t directly correspond to removing one instance of a binarize transformer. This property is also empirically verified to hold (SI, Fig. 5). Impact on Classification: It is natural to ask to what extent the classification performance depends on the binarization of the weights. In experiments on CIFAR10, if the binarization of the weights on all of the convolutional layers is removed, the classification performance drops by only 3 percent relative to the original network. Looking at each layer individually, removing the weight binarization for the first layer accounts for this entire percentage, and removing the binarization of the weights for each other layer causes no degradation in performance. This result is evident by looking at the 2d dot product histograms in Fig 3. 
The off-diagonal quadrants show where switching the weights from binary to continuous changes the sign of the binarized weight-activation dot product. In all of the layers except the first layer, there are very few dot products in the off-diagonal quadrants. Thus we recommend the use of the dot product histograms for studying the performance of binary neural networks. Removing the binarization of the activations has a substantial impact on the classification performance because that removes the main non-linearity of the network. 6 (Figure 3 panels, axes A·Wb vs. A·Wc: Layer 1: r = 0.56, Layer 2: r = 0.96, Layer 3: r = 0.96, Layer 4: r = 0.98, Layer 5: r = 0.96, Layer 6: r = 0.98; color scale 10^4 to 10^9) Under review as a conference paper at ICLR 2018 3.3 INPUT CORRELATIONS AND THE GENERALIZED BINARIZATION TRANSFORMATION Not surprisingly, some distributions are impacted more strongly by binarization than others. A binary neural network must adapt its internal representations in such a way to not be degraded too much by binarization at each layer. In this section we explore the idea that the principal components of the input to the binarization function should be randomly oriented relative to the binarization. While the network can adapt the higher level representations to satisfy this property, the part of the network that interfaces with the input doesn’t have that flexibility. We make the novel observation that the difficulties in training the first layer of the network are tied to the intrinsic correlations in the input data. In order to be more precise, we define the Generalized Binarization Transformation (GBT) θ_R(x) = R^T θ(Rx) where x is a column vector, R is a fixed rotation matrix, and θ is the pointwise binarization function from before. The rows of R are called the axes of binarization. If R is the identity matrix, then θ_R = θ and the axes of binarization are the canonical basis vectors (..., 0, 1, 0, ...). R can either be chosen strategically or randomly. The GBT changes the distribution being binarized through a rotation. For appropriate choices of the rotation, R, the directions of the input vectors, x, are changed insignificantly by binarization. The angle between a vector and its binarized version is dependent on the dot product: x · θ_R(x), which is equal to x^T θ_R(x) = (Rx)^T θ(Rx) = y · θ(y) where y = Rx. As a concrete example of the benefits ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3', 'after_section': '3.3', 'context_after': 'Scale invariance implies a 1/f² power spectrum, which results in the largest PCs corresponding to low frequencies (Field (1987)). ', 'paragraph_idx': 25, 'before_section': '3.3', 'context_before': 'CIFAR10 with the mean removed. 3 PCs capture 90 percent of the variance of this data and 4 PCs capture 94.5 percent of the variance. The first two PCs are spatially uniform colors. More generally, large images such as those in IMAGENET have the same issue. Translation invariance of the image ', 'modified_lines': 'covariance matrix implies that the principal components are the filters of the 2D Fourier transform. ', 'original_lines': 'covariance matrix implies that the principal components are the filters of the 2D fourier transform. 
', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 CONCLUSION ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'performance if done on the first layer. Zhou et al. (2016) find that accuracy degrades by about 0.5 to 1 percent on SHVN when quantizing the first layer weights. Thus it is recommended to rotate the input data before normalization or to use continuous weights for the first layer. ', 'modified_lines': '', 'original_lines': ' 3Random rotation matrices are chosen from the Haar distribution on SO(3) using the method of Stewart (1980). 7 Under review as a conference paper at ICLR 2018 Figure 4: Left: Random Rotation Improves Angle Preservation for a Non-Isotropic Gaussian. Random vectors are drawn from a Gaussian of dimension d with a diagonal covariance matrix whose entries vary exponentially. As in Fig. 2, the red curve shows the angle between a random vector and its binarized version. Since the Gaussian is no longer isotropic, the red curve no longer peaks at θ = arccos √(2/π). However, if the binarization is replaced with a GBT with a fixed random matrix, the direction of the vector is again approximately preserved. Right: Permuting the activations shows that the correlations observed in Fig. 3 are not merely due to correlations between the binary and continuous weight vectors. The correlations are due to these weight vectors corresponding to high variance directions in the data. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '8 ', 'paragraph_idx': 4, 'before_section': None, 'context_before': 'product histograms as a heuristic to help localize the layers that are most responsible for performance degradation. Third, we discuss the impacts of the low effective dimensionality of the data on the first layer of the network. We recommend either using continuous weights for the first layer or a ', 'modified_lines': 'Generalized Binarization Transformation. Such a transformation may be useful for architectures like LSTMs where the update for the hidden state declares a particular set of axes to be important (e.g. by taking the pointwise multiply of the forget gates with the cell state). Finally, we show that neural networks with ternary weights and activations can also be understood with our approach. More broadly speaking, our theory is useful for analyzing a variety of neural network compression tech- niques that transform the weights, activations or both to reduce the execution cost without degrading performance. ', 'original_lines': 'Generalized Binarization Transformation. Such a transformation may be useful for architectures like LSTMs where the update for the hidden state declares a particular set of axes to be important (e.g. by taking the pointwise multiply of the forget gates with the cell state). More broadly speaking, our theory is useful for analyzing a variety of neural network compression techinques that transform the weights, activations or both to reduce the execution cost without degrading performance. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-12-22 00:43:35
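The revision record above rests on two quantitative claims: sign-binarization approximately preserves the direction of high-dimensional vectors, with angles concentrating near arccos √(2/π) ≈ 37°, and weight-activation dot products stay correlated after binarizing the weights, unless the input is strongly anisotropic, in which case a Generalized Binarization Transformation with a random rotation helps. Below is a minimal numpy sketch of those checks, written for this dump rather than taken from the paper; the dimension, sample counts, and QR-based rotation are arbitrary choices.

```python
# Minimal sketch (not the paper's code): checking the geometry claims above.
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # dimension; the concentration sharpens as d grows

def angle_deg(u, v):
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# (1) Angle between an isotropic Gaussian vector and its sign-binarized version
# concentrates near arccos(sqrt(2/pi)) ~ 37 degrees.
w = rng.standard_normal((1000, d))
angles = [angle_deg(wi, np.sign(wi)) for wi in w]
print(f"mean angle: {np.mean(angles):.1f} deg, "
      f"theory: {np.degrees(np.arccos(np.sqrt(2 / np.pi))):.1f} deg")

# (2) Dot Product Proportionality: a @ w_c and a @ sign(w_c) are highly correlated
# over random activations a (trained weights in the record reach r up to ~0.98).
a = rng.standard_normal((1000, d))
wc = rng.standard_normal(d)
r = np.corrcoef(a @ wc, a @ np.sign(wc))[0, 1]
print(f"Pearson r between continuous and binary dot products: {r:.2f}")

# (3) For anisotropic inputs (variances exp(2*k*i), as in the record's example),
# plain binarization destroys direction, while a Generalized Binarization
# Transformation theta_R(x) = R^T sign(R x) with a random rotation R restores it.
k = 0.1
x = rng.standard_normal((200, d)) * np.exp(k * np.arange(d))
R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # Haar-like random rotation
plain = np.mean([angle_deg(xi, np.sign(xi)) for xi in x])
gbt = np.mean([angle_deg(xi, R.T @ np.sign(R @ xi)) for xi in x])
print(f"plain binarization: {plain:.1f} deg, random-rotation GBT: {gbt:.1f} deg")
```

Run as-is, the prints land near 37°, r ≈ 0.8 (the higher correlations reported in the record arise because trained weights align with high-variance data directions), and a plain-binarization angle close to 90° versus roughly 37° under the GBT.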
|
ICLR.cc/2018/Conference
|
rJJHB0KfG
|
SJSwCdgCZ
|
[]
|
2018-01-25 15:41:55
|
ICLR.cc/2018/Conference
|
SJSwCdgCZ
|
rkATA7KPG
|
[]
|
2018-02-20 05:14:46
|
ICLR.cc/2018/Conference
|
BJHCtQ-CZ
|
Sk_aiQZ0b
|
[]
|
2017-10-27 21:57:20
|
ICLR.cc/2018/Conference
|
Sk_aiQZ0b
|
SkDEywp7z
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'Deep learning methods have achieved high performance in sound recognition tasks. Deciding how to feed the training data is important for further performance improvement. We propose a novel learning method for deep sound recognition: ', 'modified_lines': 'Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep sound recognition network (DSRNet) and train it with BC learning. As a result, we achieved a performance surpasses the human level. ', 'original_lines': 'learning from between-class examples (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between- class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the perfor- mance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep sound recognition network (DSRNet) and train it with BC learning. As a result, the performance of DSRNet surpasses the human level. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '1 Under review as a conference paper at ICLR 2018 The experimental results show that BC learning improves the performance on various sound recog- nition networks, datasets, and data augmentation schemes, in which BC learning proves to be al- ways beneficial. Furthermore, we constructed a new deep sound recognition network (DSRNet) and ESC-50 (Piczak, 2015b), which surpasses the human level. We argue that our approach is different from the so-called data augmentation methods we introduced ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'dataset expansion was also conducted (Salamon et al., 2014; Piczak, 2015b; Gemmeke et al., 2017). In this paper, as a novel third approach we propose a learning method for deep sound recognition: ', 'modified_lines': 'Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. 
We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the network to output the mixing ratio. Our method focuses on the char- acteristic of the sound, from which we can generate a new sound simply by adding the waveform data of two sounds. The advantages of BC learning are not limited only to the increase in varia- tion of the training data; BC learning leads to an enlargement of Fisher’s criterion (Fisher, 1936) (i.e., the ratio of the between-class distance to the within-class variance) in the feature space, and a regularization of the positional relationship among the feature distributions of the classes. trained it with BC learning. As a result, we achieved a 15.1% error rate on a benchmark dataset ', 'original_lines': 'learning from between-class examples (BC learning). Our strategy is to learn a discriminative fea- ture space by recognizing the between-class sounds as between-class sounds. We generate between- class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the network to output the mixing ratio. Our method focuses on the characteristic of the sound, from which we can generate a new sound simply by adding the waveform data of two sounds. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion (Fisher, 1936) (i.e., the ratio of the between-class distance to the within-class variance) in the feature space, and a regularization of the positional relationship among the feature distributions of the classes. trained it with BC learning. As a result, we achieved a 15:10% error rate on a benchmark dataset ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'data and labels for training, without using the original pure data and labels, while the model solves a single-label classification problem in the testing phase. We believe that the key point of BC learning is not the increase in data variation but the learning method itself. To the best of our knowledge, this ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'augmentation methods aim to improve the generalization ability by generating additional training data which is likely to appear in testing phase. Thus, the most important thing is the training data variation. On the other hand, our BC learning aims to learn a classification problem by solving the ', 'modified_lines': 'problem of predicting the mixing ratio between two different classes. Our method only uses mixed ', 'original_lines': 'problem of predicting the mixing ratio between to different classes. Our method only uses mixed ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 7}, {'section': '2.1 SOUND RECOGNITION NETWORKS', 'after_section': '2.1 SOUND RECOGNITION NETWORKS', 'context_after': 'Some researchers also proposed methods to learn the sounds directly from 1-D raw waveforms, including feature extraction. Aytar et al. (2016) proposed a sound recognition network using 1- ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '2.1 SOUND RECOGNITION NETWORKS ', 'modified_lines': 'We introduce recent deep learning methods for sound recognition. Piczak (2015a) proposed to apply CNNs to the log-mel features extracted from raw waveforms. 
The log-mel feature is calculated for each frame of sound and represents the magnitude of each frequency area, considering human auditory perception (Davis & Mermelstein, 1980). Piczak created a 2-D feature-map by arranging the log-mel features of each frame along the time axis and calculated the delta log-mel feature, which was the first temporal derivative of the static log-mel feature. Piczak then classified these static and delta feature-maps with 2-D CNN, treating them as a two-channel input in a manner quite similar to the RGB inputs of the image. The log-mel feature-map exhibits locality in both time and frequency domains (Abdel-Hamid et al., 2014). Therefore, we can accurately classify this feature-map with CNN. We refer to this method as Logmel-CNN. ', 'original_lines': 'We introduce recent deep learning methods for sound recognition. Piczak (2015a) proposed the ex- traction of log-mel features from raw waveforms and the application of CNNs to them. The log-mel feature is calculated for each frame of sound and represents the magnitude of each frequency area, considering human auditory perception (Davis & Mermelstein, 1980). Piczak created a 2-D feature- map by arranging the log-mel features of each frame along the time axis and calculated the delta log-mel feature, which was the first temporal derivative of the static log-mel feature. Piczak then classified these static and delta feature-maps with 2-D CNN, treating them as a two-channel input in a manner quite similar to the RGB inputs of the image. The log-mel feature-map exhibits locality in both time and frequency domains (Abdel-Hamid et al., 2014). Therefore, we can accurately classify this feature-map with CNN. We refer to this method as Logmel-CNN. ', 'after_paragraph_idx': 9, 'before_paragraph_idx': None}, {'section': '2.2 APPROACHES TO ACHIEVE HIGH PERFORMANCE', 'after_section': None, 'context_after': 'and the average of the output predictions is used to classify the test sound. Salamon & Bello (2017) proposed the usage of additional training data created by time stretching (slow down or speed up the ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': '(Figure 1 diagram: Training Dataset (Dog, Bird, Cat) → Random Select & Augment → x1, x2; r ∼ U(0, 1); mixr(x1, x2) = (p x1 + (1 − p) x2) / √(p² + (1 − p)²) where p = 1 / (1 + 10^((G1 − G2)/20) · (1 − r)/r); Input: mixr(x1, x2) → Model → Output; Label: r t1 + (1 − r) t2, e.g. Dog 0.7, Cat 0.3, Bird 0; KL loss) Figure 1: Pipeline of BC learning. We create each training example by mixing two sounds belonging to different classes with a random ratio. We input the mixed sound to the model and train the model to output the mixing ratio using the KL loss. ', 'original_lines': 'Figure 1: BC learning: learning from between-class examples. We create each training example by mixing two sounds belonging to different classes with a random ratio. We input the mixed sound to the model and train the model to output the mixing ratio using the KL loss. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': '1 INTRODUCTION', 'context_after': '(Piczak, 2015b), with this method. 3.1 OVERVIEW In this section, we propose a novel learning method for deep sound recognition BC learning. Fig. 1 3.2 METHOD DETAILS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'predictions of the image recognition networks and that of the sound network. 
They used the out- put of the hidden layer of the sound recognition network as the feature when applying to the target sound classification problem. They then classified it with linear SVM. They could train a deep sound ', 'modified_lines': 'recognition network (SoundNet8) and achieve a 74.2% accuracy on a benchmark dataset, ESC-50 3 BETWEEN-CLASS LEARNING FOR SOUND RECOGNITION shows the pipeline of BC learning. In standard learning, we select a single training example from the dataset and input it to the model. We then train the model to output 0 or 1. By contrast, in BC learning, we select two training examples from different classes and mix these two examples using a random ratio. We then input the mixed data to the model and train the model to output the mixing ratio. BC learning uses only mixed data and labels, and thus never uses pure data and labels for training. Note that we do not mix any examples in testing phase. First, we provide the details of BC learning in Section 3.2. We mainly explain the method of mixing two sounds, which should be carefully designed to achieve a good performance. Then, in Section 3.3, we explain why BC learning leads to a discriminative feature space. ', 'original_lines': 'recognition network (SoundNet8) and achieve a 74:2% accuracy on a benchmark dataset, ESC-50 3 BC LEARNING: LEARNING FROM BETWEEN-CLASS EXAMPLES shows the pipeline of the BC learning. In standard learning, we select a single training example from the dataset and input it to the model. We then train the model to output 0 or 1. By contrast, in BC learning, we select two training examples from different classes and mix these two examples using a random ratio. We then input the mixed data to the model and train the model to output the mixing ratio. BC learning uses only mixed data and labels, and thus never uses pure data and labels. First, we provide the details of BC learning in Section 3.2. We mainly explain the method of mixing two sounds, which should be carefully designed to achieve a good performance. Then, in Section 3.3, we explain why BC learning leads to a discriminative feature space. ', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'Under review as a conference paper at ICLR 2018 Let x1 and x2 be two sounds belonging to different classes randomly selected from the training dataset, and t1 and t2 be their one-hot labels. Note that x1 and x2 may have already been preprocessed or applied data augmentation, and they have the same length as that of the input of to output the mixing ratio. We then explain how to mix x1 and x2 . The simplest method is energy is proportional to the square of the amplitude: (1) if the difference of the sound pressure level of x1 and x2 is large. For example, if the amplitude G2 where p = 1 20 (2) We show this mixing method performs better than Eqn. (1) in the experiments. We calculate the sound pressure level G1 and G2 using A-weighting, considering that human audi- tory perception is not sensitive to low and high frequency areas. We can also use simpler sound pres- sure metrics such as root mean square (RMS) energy instead of an A-weighting sound pressure level. 3.2.2 OPTIMIZATION L = i=1 i=1 (4) 3.3 HOW BC LEARNING WORKS ', 'paragraph_idx': 5, 'before_section': None, 'context_before': '3 ', 'modified_lines': ' the network. We generate a random ratio r from U (0, 1) , and mix two sets of data and labels with this ratio. We mix two labels simply by r t1 + (1 − r) t2 , because we aim to train the model to output the mixing ratio. We then explain how to mix x1 and x2 . The simplest method is r x1 + (1 − r) x2 . However, the following mixing formula is slightly better, considering that sound energy is proportional to the square of the amplitude: mixr(x1, x2) = (r x1 + (1 − r) x2) / √(r² + (1 − r)²) . (1) However, auditory perception of a sound mixed with Eqn. (1) would not be x1 : x2 = r : (1 − r) if the difference of the sound pressure level of x1 and x2 is large. For example, if the amplitude of x1 is 10 times as large as that of x2 and we mix them with 0.2 : 0.8, the sound of x1 would still be dominant in the mixed sound. In this case, training the model with a label of {0.2, 0.8} is inappropriate. We then consider using a new coefficient p(r, G1, G2) instead of r , and mix two sounds by (p x1 + (1 − p) x2) / √(p² + (1 − p)²) , where G1 and G2 is the sound pressure level of x1 and x2 [dB], respectively. We define p so that the auditory perception of the mixed sound becomes r : (1 − r). We hypothesize that the ratio of auditory perception for the network is the same as that of amplitude because the main component functions of CNNs, such as conv/fc, relu, max pooling, and average pooling, satisfy homogeneity (i.e., f (α x) = αf (x)) if we ignore the bias. We then set up an equation about the ratio of amplitude p · 10^(G1/20) : (1 − p) · 10^(G2/20) = r : (1 − r) using unit conversion from decibels to amplitudes and solve it for p. Finally, we obtain the proposed mixing method: mixr(x1, x2) = (p x1 + (1 − p) x2) / √(p² + (1 − p)²) where p = 1 / (1 + 10^((G1 − G2)/20) · (1 − r)/r) . (2) However, the performance worsens, as we show in the experiments. We create short windows (∼ 0.1 s) on the sound and calculate a time series of A-weighted sound pressure levels {g1, g2, . . . , gt} . Then, we define G as the maximum of those time series ( G = max{g1, g2, . . . , gt} ). We define the f and θ as the model function and the model parameters, respectively. We input the generated mini-batch data {x^(i)}_{i=1}^{n} and obtain the output {fθ(x^(i))}_{i=1}^{n} . Some distance metrics can be found between {fθ(x^(i))}_{i=1}^{n} and mini-batch label {t^(i)}_{i=1}^{n} . We expect that our ratio labels represent the expected class probability distribution. Therefore, we use the KL-divergence between the labels and the model outputs as the loss function. We optimize KL-divergence with back-propagation and stochastic gradient descent because it is differentiable: L = (1/n) Σ_{i=1}^{n} DKL(t^(i) ∥ fθ(x^(i))) = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} t^(i)_j log( t^(i)_j / {fθ(x^(i))}_j ) , (3) θ ← θ − η ∂L/∂θ , (4) where m is the number of classes, and η is the learning rate. ', 'original_lines': ' the network. We generate a random ratio r from U (0, 1) , and mix two sets of data and labels with this ratio. We mix two labels simply by r t1 + (1 − r) t2 , because we aim to train the model to output the mixing ratio. We then explain how to mix x1 and x2 . The simplest method is r x1 + (1 − r) x2 . However, the following mixing formula is slightly better, considering that sound energy is proportional to the square of the amplitude: mixr(x1, x2) = (r x1 + (1 − r) x2) / √(r² + (1 − r)²) . However, auditory perception of a sound mixed with Eqn. (1) would not be x1 : x2 = r : (1 − r) if the difference of the sound pressure level of x1 and x2 is large. For example, if the amplitude of x1 is 10 times as large as that of x2 and we mix them with 0.2 : 0.8, the sound of x1 would still be dominant in the mixed sound. In this case, training the model with a label of {0.2, 0.8} is inappropriate. We then consider using a new coefficient p(r, G1, G2) instead of r , and mix two sounds by (p x1 + (1 − p) x2) / √(p² + (1 − p)²) , where G1 and G2 is the sound pressure level of x1 and x2 [dB], respectively. We define p so that the auditory perception of the mixed sound becomes r : (1 − r). We hypothesize that the ratio of auditory perception for the network is the same as the ratio of amplitude and then solve p · 10^(G1/20) : (1 − p) · 10^(G2/20) = r : (1 − r) . Finally, we obtain the proposed mixing method: mixr(x1, x2) = (p x1 + (1 − p) x2) / √(p² + (1 − p)²) where p = 1 / (1 + 10^((G1 − G2)/20) · (1 − r)/r) . However, the performance worsens, as we show in the experiments. We create short windows (∼ 0.1 s) on the sound and calculate a time series of A-weighted sound pressure levels {g1, g2, . . . , gt} . Then, we define G as the maximum of those time series ( G = max{g1, g2, . . . , gt} ). We define the f and θ as the model function and the model parameters, respectively. We input the generated mini-batch data {xi}_{i=1}^{n} and obtain the output {fθ(xi)}_{i=1}^{n} . Some distance metrics can be found between {fθ(xi)}_{i=1}^{n} and mini-batch label {ti}_{i=1}^{n} . We expect that our ratio labels represent the expected class probability distribution. Therefore, we use the KL-divergence between the labels and the model outputs as the loss function. We optimize KL-divergence with back-propagation and stochastic gradient descent because it is differentiable: Σ_{i=1}^{n} DKL(ti ∥ fθ(xi)) = Σ_{i=1}^{n} Σ_{j=1}^{m} tij log( tij / {fθ(xi)}_j ) , (3) θ ← θ − η ∂L/∂θ , where m is the number of classes, and η is the learning rate. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '10 0', 'after_section': None, 'context_after': '5 Under review as a conference paper at ICLR 2018 is an undesirable state because there is little possi- bility that a mixed sound of two classes becomes a sound of other classes. In this case, we assume ', 'paragraph_idx': 22, 'before_section': '10 0', 'context_before': 'a class other than A or B. Fig. 4(lower left) shows an example of transition of output probability of standard-learned model when we input a mixture of two particular training sounds (dog bark and rain) to the model changing the mixing ratio from 0 to 1. The output probability of dog bark ', 'modified_lines': 'monotonically increases and that of rain monotonically decreases as we expected, but the model classifies the mixed sound as baby cry when the mixing ratio is within the range of 0.45 – 0.8. This ', 'original_lines': '(Figure 2 panel labels: Input space: class A (x1), class B (x2), mixr(x1, x2), f; Feature space, Standard learning vs. BC learning (ours): f(x1), f(x2), f(mixr(x1, x2)), mixr(A, B); Fig. 3 legend: dog bark, rain, others, mixed; r = 0, 0.8, 1; axis ticks −30 to 20) monotonically increases and that of rain monotoni- cally decreases as we expected, but the model clas- sifies the mixed sound as baby cry when the mix- ing ratio is within the range of 0:45 – 0:8. This ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 r', 'after_section': '1 r', 'context_after': 'BC learning can avoid the situation in which the decision boundary of other class appears between ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'C appears between class A and class B, and the tra- jectory of the features of the mixed sounds crosses the decision boundary of class C. 
', 'modified_lines': ' (Figure 4 lower panels: Standard learning vs. BC learning (ours); classes A: dog bark, B: rain, C: baby cry; x-axis: mixing ratio from r = 0 at B to r = 1 at A; y-axis: prediction from 0 to 1) ', 'original_lines': '', 'after_paragraph_idx': 14, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '4 EXPERIMENTS ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'the Fisher’s criterion, and at the same time, regularizes the positional relationship among the classes in the feature space. Hence, BC learning improves the generalization ability. ', 'modified_lines': 'Figure 4: BC learning regularizes the positional re- lationship of the classes in the feature space, by training the model not to misclassify the mixed sound as different classes. BC learning avoids the situation in which the decision boundary of other class appears between any two classes. ', 'original_lines': 'Figure 4: BC learning regularizes the posi- tional relationship of the classes in the feature space, by training the model not to misclassify the mixed sound as different classes. BC learn- ing avoids the situation in which the decision boundary of other class appears between any two classes. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'silent sections in which the value was equal to 0 at the beginning or end of examples in the ESC-50 and ESC-10 datasets. We converted all sound files to monaural 16-bit WAV files. We evaluated the performance of the methods using a K-fold cross-validation (K = 5 for ESC-50 and ESC-10, and K = 10 for UrbanSound8K), using the original fold settings. We performed cross-validation 5 times for ESC-50 and ESC-10, and showed the standard error. on each side of a training sound and randomly cropped a T -s section from the padded sound. We mixed two cropped sounds with a random ratio when using BC learning. In the testing phase, we sound at regular intervals. We then input these 10 crops to the network and averaged all softmax that is, the full range of 16-bit recordings. settings between standard and BC learning is the number of training epochs. BC learning tends to require more training epochs than does standard learning, while standard learning tends to overfit with many training epochs. To validate the comparison, we first identified an appropriate standard 6 Under review as a conference paper at ICLR 2018 Error rate (%) on EnvNet (Tokozume & Harada, 2017) SoundNet5 (Aytar et al., 2016) ', 'paragraph_idx': 34, 'before_section': '4 EXPERIMENTS', 'context_before': 'In this section, we train various types of sound recognition networks with both standard and BC learning, and demonstrate the effectiveness of BC learning. ', 'modified_lines': 'Datasets. We used ESC-50, ESC-10 (Piczak, 2015b), and UrbanSound8K (Salamon et al., 2014) to train and evaluate the models. ESC-50, ESC-10, and UrbanSound8K contain a total of 2,000, 400, and 8,732 examples consisting of 50, 10, and 10 classes, respectively. We removed completely Preprocessing and data augmentation. We used a simple preprocessing and data augmentation scheme. Let T be the input length of a network [s]. 
In the training phase, we padded T /2 s of zeros also padded T /2 s of zeros on each side of a test sound and cropped 10 T -s sections from the padded 1 to +1 by dividing it by 32,768, outputs. Each input data was regularized into a range of from − Learning settings. All models were trained with Nesterov’s accelerated gradient using a momen- tum of 0.9, weight decay of 0.0005, and mini-batch size of 64. The only difference in the learning learning setting for each network and dataset (details are provided in the appendix), and we dou- bled the number of training epochs when using BC learning. Later in this section, we examine the relationship between the number of training epochs and the performance. Table 1: Comparison between standard learning and our BC learning. We performed K-fold cross validation using the original fold settings. We performed cross-validation 5 times for the ESC-50 and ESC-10 datasets, and show the standard error. BC learning improves the performance of all models on all datasets, even when we use a strong data augmentation scheme. Our DSRNet trained with BC learning performs the best and surpasses the human performance on ESC-50. Model Learning ESC-50 ESC-10 UrbanSound8K ', 'original_lines': 'Datasets We used ESC-50, ESC-10 (Piczak, 2015b), and UrbanSound8K (Salamon et al., 2014) to train and evaluate the models. ESC-50, ESC-10, and UrbanSound8K contain a total of 2;000, 400, and 8;732 examples consisting of 50, 10, and 10 classes, respectively. We removed completely Preprocessing and data augmentation We used a simple preprocessing and data augmentation scheme. Let T be the input length of a network [s]. In the training phase, we padded T =2 s of zeros also padded T =2 s of zeros on each side of a test sound and cropped 10 T -s sections from the padded outputs. Each input data was regularized into a range of from (cid:0)1 to +1 by dividing it by 32;768, Learning settings All models were trained with Nesterov’s accelerated gradient using a momen- tum of 0:9, weight decay of 0:0005, and mini-batch size of 64. The only difference in the learning learning setting for each network and dataset (details are provided in the appendix A), and we dou- 00.20.40.60.81mixing ratio00.20.40.60.81predictionA: dog barkB: rainC: baby cry00.20.40.60.81mixing ratio00.20.40.60.81predictionA: dog barkB: rainStandard learningABCBC learning (ours)ABCr=0r=1r=0r=1 Table 1: Comparison between standard learning and our BC learning. We performed K-fold cross validation using the original fold settings. We performed cross-validation 5 times for the ESC-50 and ESC-10 datasets, and show the standard error. BC learning improves the performance of all models on all datasets, even when we use a strong data augmentation scheme. Our DSRNet trained with BC learning performs the best and surpasses the human performance on ESC-50. Model ', 'after_paragraph_idx': 34, 'before_paragraph_idx': 33}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Standard BC (ours) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'DSRNet (ours) DSRNet (ours) + strong augment ', 'modified_lines': '', 'original_lines': ' Learning ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '12 10', 'after_section': '12 10', 'context_after': 'pressure levels of two sounds) and the calculation method for sound pressure levels (RMS vs. A- 8 Under review as a conference paper at ICLR 2018 Comparison of Setting Mixing method Eqn. 
(1) Single Multi ', 'paragraph_idx': 47, 'before_section': '12 10', 'context_before': 'trained EnvNet on ESC-50 using various settings. All results are shown in Table 2. We also per- formed 5-fold cross-validation five times and show the standard error. ', 'modified_lines': 'Mixing method. We compared the mixing formula (Eqn. 1 vs. Eqn. 2, which consider the sound weighting). As shown in Table 2, the proposed mixing method using Eqn. 2 and A-weighting per- formed the best. Considering the difference in the sound pressure levels is important for BC learning, and the method used to define the sound pressure levels also has an effect on the performance. 0.1 0.2 0.2 0.2 0.3 0.2 0.2 0.3 0.2 0.2 0.2 0.2 0.3 0.3 0.2 0.1 0.2 ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± ± − Label. We compared the different labels that we applied to the mixed sound. As shown in Table 2, the proposed ratio label of t = r t1 + (1 r) t2 performed the best. When we applied a single label of the dom- inant sound (i.e., t = t1 if r > 0.5, oth- erwise t = t2) and trained the model using softmax cross entropy loss, the performance was also improved compared to that of stan- dard learning. However, the degree of im- provement was small. When we applied a multi-label (i.e., t = t1 + t2) and trained the model using sigmoid cross entropy loss, the performance was better than when using a single label. However, the performance was worse than when using our ratio label. The model can learn the between-class examples more efficiently when using our ratio label. Table 2: Ablation analysis. We trained EnvNet on ESC-50 using various settings. The results show that the training data variation is not the only matter. Err. rate (%) Label # mixed classes (2) + RMS (2) + A-weighting (proposed) ', 'original_lines': 'Mixing methods We compared the mixing formula (Eqn. 1 vs. Eqn. 2, which consider the sound weighting). As shown in Table 2, the proposed mixing method using Eqn. 2 and A-weighting performs the best. Considering the difference of the sound pressure levels of two sounds is important for BC learning, and the method used to define the sound pressure levels also has an effect on the performance. Table 2: Ablation analysis. We trained EnvNet using various settings on ESC-50. The results show that the training data variation is not the only matter. Eqn. (2) + RMS Eqn. (2) + A-weighting (proposed) Label # of mixed classes Where to mix ', 'after_paragraph_idx': 47, 'before_paragraph_idx': 46}, {'section': '0 0', 'after_section': None, 'context_after': '12 Under review as a conference paper at ICLR 2018 ksize ', 'paragraph_idx': 25, 'before_section': None, 'context_before': 'and pooling layers with decreasing their kernel size in a similar manner to SoundNet (Aytar et al., 2016). Furthermore, we stack multiple convolutional layers with a small kernel size in a similar manner to M18 (Dai et al., 2017) and VGG (Simonyan & Zisserman, 2015), to extract more rich ', 'modified_lines': 'features. Finally, we produce output predictions with fc11–fc13 and the following softmax acti- vation. Single output prediction is calculated from 66,650 input samples (approximately 1.5 s at 44.1 kHz). We do not use padding in convolutional layers. We apply ReLU activation for all the hidden layers and batch normalization (Ioffe & Szegedy, 2015) to the output of conv1–conv10. We also apply 0.5 of dropout (Srivastava et al., 2014) to the output of fc11 and fc12. We use a weight initialization of He et al. (2015) for all convolutional layers. 
We initialize the weights of each fully connected layer using Gaussian distribution with a standard deviation of √(1/n), where n is the input dimension of the layer. Table 4: Configuration of DSRNet. Data shape represents the dimension in (channel, frequency, time). ', 'original_lines': 'features. Finally, we produce output predictions with fc11–fc13 and the following softmax activa- tion. Single output prediction is calculated from 66;650 input samples (approximately 1:5 s at 44:1 kHz). We do not use padding in convolutional layers. We apply ReLU activation for all the hidden layers and batch normalization (Ioffe & Szegedy, 2015) to the output of conv1–conv10. We also apply dropout (Srivastava et al., 2014) to the output of fc11 and fc12. Table 4: Configuration of DSRNet. Data shape represents the dimension in (channel, frequency, time). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2018-01-05 20:25:18
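The revision recorded above pins the BC-learning recipe down in equations: draw r ∼ U(0, 1), mix two sounds of different classes so their perceived loudness ratio is r : (1 − r) (Eqn. 2), and train against the ratio label r t1 + (1 − r) t2 with a KL loss (Eqn. 3). Below is a small numpy sketch of that recipe under stated assumptions: the helper names are hypothetical, and the sound pressure level is estimated from max short-window RMS, whereas the record reports that A-weighted levels work better.

```python
# Sketch of the BC-learning mixing rule and loss, reconstructed from the record above.
# Helper names are hypothetical; db_level approximates the sound pressure level with
# max short-window RMS (the record prefers A-weighting; RMS keeps this self-contained).
import numpy as np

rng = np.random.default_rng(0)

def db_level(x, sr=16000, win=0.1):
    n = int(sr * win)
    frames = x[: len(x) // n * n].reshape(-1, n)   # ~0.1 s windows
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
    return 20 * np.log10(rms.max())

def bc_mix(x1, x2, t1, t2, sr=16000):
    # Eqn. (2): choose p so the perceived loudness ratio is r : (1 - r),
    # then normalize total energy; the label is mixed linearly (r*t1 + (1-r)*t2).
    r = rng.uniform(1e-3, 1 - 1e-3)  # avoid division by zero at r = 0
    g1, g2 = db_level(x1, sr), db_level(x2, sr)
    p = 1.0 / (1.0 + 10 ** ((g1 - g2) / 20) * (1 - r) / r)
    x = (p * x1 + (1 - p) * x2) / np.sqrt(p ** 2 + (1 - p) ** 2)
    return x, r * t1 + (1 - r) * t2

def kl_loss(t, q):
    # Eqn. (3): KL divergence between the ratio label t and a softmax output q.
    eps = 1e-12
    return float(np.sum(t * (np.log(t + eps) - np.log(q + eps))))

# Toy usage: two fake 1-second "sounds" from different classes.
x1, x2 = rng.standard_normal(16000), 0.1 * rng.standard_normal(16000)
t1, t2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, t = bc_mix(x1, x2, t1, t2)
print("ratio label:", t, " KL vs. a flat output:", kl_loss(t, np.array([0.5, 0.5])))
```

The asymmetry is the point of Eqn. (2): if x1 is, say, 20 dB louder, p shrinks so that the quieter sound still contributes its labeled share of perceived loudness.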
|
ICLR.cc/2018/Conference
|
SkDEywp7z
|
BkYphDTmG
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'ESC-50 (Piczak, 2015b), which surpasses the human level. We argue that our approach is different from the so-called data augmentation methods we introduced ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'The experimental results show that BC learning improves the performance on various sound recog- nition networks, datasets, and data augmentation schemes, in which BC learning proves to be al- ways beneficial. Furthermore, we constructed a new deep sound recognition network (DSRNet) and ', 'modified_lines': 'trained it with BC learning. As a result, we achieved a 15:1% error rate on a benchmark dataset ', 'original_lines': 'trained it with BC learning. As a result, we achieved a 15.1% error rate on a benchmark dataset ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 1: Pipeline of BC learning. We create each training example by mixing two sounds belonging to different classes with a random ratio. We input the mixed sound to the model and train the model to output the mixing ratio using the KL loss. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': '', 'original_lines': '(Figure 1 diagram: Training Dataset (Dog, Bird, Cat) → Random Select & Augment → x1, x2; r ∼ U(0, 1); mixr(x1, x2) = (p x1 + (1 − p) x2) / √(p² + (1 − p)²) where p = 1 / (1 + 10^((G1 − G2)/20) · (1 − r)/r); Input: mixr(x1, x2) → Model → Output; Label: r t1 + (1 − r) t2, e.g. Dog 0.7, Cat 0.3, Bird 0; KL loss) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': '1 INTRODUCTION', 'context_after': '(Piczak, 2015b), with this method. 3 BETWEEN-CLASS LEARNING FOR SOUND RECOGNITION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'predictions of the image recognition networks and that of the sound network. They used the out- put of the hidden layer of the sound recognition network as the feature when applying to the target sound classification problem. They then classified it with linear SVM. They could train a deep sound ', 'modified_lines': 'recognition network (SoundNet8) and achieve a 74:2% accuracy on a benchmark dataset, ESC-50 ', 'original_lines': 'recognition network (SoundNet8) and achieve a 74.2% accuracy on a benchmark dataset, ESC-50 ', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 2: BC learning enlarges the Fisher’s criterion in the feature space, by training the model to output the into the point near the internally dividing point of f (x1) and f (x2) , considering the characteristic of sounds. Middle: When the Fisher’s criterion is small, some mixed examples are projected into one of the classes, and BC learning gives a large penalty. Right: When the Fisher’s criterion is large, most of the mixed examples are projected into between-class points, and BC learning gives a small penalty. Therefore, BC learning leads to such a feature space. Besides, we can generate a new sound simply by adding the waveform data of two sounds, and humans can recognize both ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'mixing ratio between two different classes. 
We hypothesize that a mixed sound mixr(x1; x2) is projected ', 'original_lines': '(Figure 2 panel labels: Input space: class A (x1), class B (x2), mixr(x1, x2), f; Feature space, Standard learning vs. BC learning (ours): f(x1), f(x2), f(mixr(x1, x2)), mixr(A, B)) mixing ratio between two different classes. We hypothesize that a mixed sound mixr(x1, x2) is projected (axis ticks −10 to 20; r = 0, 0.8, 1) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 HOW BC LEARNING WORKS', 'after_section': '3.3 HOW BC LEARNING WORKS', 'context_after': 'of two particular sounds to the model changing the mixing ratio from 0 to 1. This figure shows that the mixture of two sounds is projected into the point near the internally dividing point of two features, and the features of the mixed sounds are distributed between two classes, as we expected. ', 'paragraph_idx': 18, 'before_section': '3.3 HOW BC LEARNING WORKS', 'context_before': '(Tokozume & Harada, 2017) against training data of ESC-10 (Piczak, 2015b). The results are shown in Fig. 3. The magenta circles represent the feature distribution of the mixed sounds of dog bark and rain with a ratio of ', 'modified_lines': '0:8 : 0:2, and the black dotted line represents the trajectory of the feature when we input a mixture ', 'original_lines': '0.8 : 0.2, and the black dotted line represents the trajectory of the feature when we input a mixture ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': 'Abstract', 'after_section': None, 'context_after': 'If the Fisher’s criterion is small, the feature distribution of the mixed sounds becomes large, and would have a large overlap with one or both of the feature distribution of class A and B ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'ture space using PCA. The features of the mixed sounds are distributed between two classes. ', 'modified_lines': '', 'original_lines': ' (Fig. 3 legend: dog bark, rain, others, mixed; axis ticks −30 to 20) ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'BC learning can avoid the situation in which the decision boundary of other class appears between ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'C appears between class A and class B, and the tra- jectory of the features of the mixed sounds crosses the decision boundary of class C. ', 'modified_lines': '', 'original_lines': ' (Figure 4 lower panels: Standard learning vs. BC learning (ours); classes A: dog bark, B: rain, C: baby cry; x-axis: mixing ratio from r = 0 to r = 1; y-axis: prediction from 0 to 1) ', 'after_paragraph_idx': 2, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '4 EXPERIMENTS 4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'situation in which the decision boundary of other class appears between any two classes. ', 'modified_lines': '', 'original_lines': '(axis ticks: mixing ratio 0 to 1) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_after': 'silent sections in which the value was equal to 0 at the beginning or end of examples in the ESC-50 and ESC-10 datasets. 
We converted all sound files to monaural 16-bit WAV files. We evaluated the performance of the methods using a K-fold cross-validation (K = 5 for ESC-50 and ESC-10, ', 'paragraph_idx': 24, 'before_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_before': 'learning, and demonstrate the effectiveness of BC learning. Datasets. We used ESC-50, ESC-10 (Piczak, 2015b), and UrbanSound8K (Salamon et al., 2014) ', 'modified_lines': 'to train and evaluate the models. ESC-50, ESC-10, and UrbanSound8K contain a total of 2;000, 400, and 8;732 examples consisting of 50, 10, and 10 classes, respectively. We removed completely ', 'original_lines': 'to train and evaluate the models. ESC-50, ESC-10, and UrbanSound8K contain a total of 2,000, 400, and 8,732 examples consisting of 50, 10, and 10 classes, respectively. We removed completely ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 23}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_after': 'on each side of a training sound and randomly cropped a T -s section from the padded sound. We mixed two cropped sounds with a random ratio when using BC learning. In the testing phase, we sound at regular intervals. We then input these 10 crops to the network and averaged all softmax that is, the full range of 16-bit recordings. Learning settings. All models were trained with Nesterov’s accelerated gradient using a momen- settings between standard and BC learning is the number of training epochs. BC learning tends to require more training epochs than does standard learning, while standard learning tends to overfit with many training epochs. To validate the comparison, we first identified an appropriate standard ', 'paragraph_idx': 26, 'before_section': None, 'context_before': 'times for ESC-50 and ESC-10, and showed the standard error. Preprocessing and data augmentation. We used a simple preprocessing and data augmentation ', 'modified_lines': 'scheme. Let T be the input length of a network [s]. In the training phase, we padded T =2 s of zeros also padded T =2 s of zeros on each side of a test sound and cropped 10 T -s sections from the padded outputs. Each input data was regularized into a range of from −1 to +1 by dividing it by 32;768, tum of 0:9, weight decay of 0:0005, and mini-batch size of 64. The only difference in the learning ', 'original_lines': 'scheme. Let T be the input length of a network [s]. In the training phase, we padded T /2 s of zeros also padded T /2 s of zeros on each side of a test sound and cropped 10 T -s sections from the padded outputs. Each input data was regularized into a range of from −1 to +1 by dividing it by 32,768, tum of 0.9, weight decay of 0.0005, and mini-batch size of 64. The only difference in the learning ', 'after_paragraph_idx': 26, 'before_paragraph_idx': None}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': None, 'context_after': 'Model Learning Standard BC (ours) ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'use a strong data augmentation scheme. Our DSRNet trained with BC learning performs the best and surpasses the human performance on ESC-50. 
', 'modified_lines': 'Error rate (%) on EnvNet (Tokozume & Harada, 2017) SoundNet5 (Aytar et al., 2016) M18 (Dai et al., 2017) Logmel-CNN (Piczak, 2015a) + BN DSRNet (ours) DSRNet (ours) + strong augment ', 'original_lines': ' ESC-50 ESC-10 UrbanSound8K Error rate (%) on EnvNet (Tokozume & Harada, 2017) SoundNet5 (Aytar et al., 2016) M18 (Dai et al., 2017) Logmel-CNN (Piczak, 2015a) + BN DSRNet (ours) DSRNet (ours) + strong augment ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_after': 'EnvNet on ESC-50 in Fig. 5(left). Note that the curves show the average of all trials. 4.1.2 EXPERIMENT ON A DEEPER NETWORK ', 'paragraph_idx': 31, 'before_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_before': 'The results are summarized in the upper half of Table 1. Our BC learning improved the performance of all networks on all datasets. The performance on ESC-50, ESC-10, and UrbanSound8K was ', 'modified_lines': 'improved by 4:5–6:4%, 1:5–4:0%, and 1:8–4:8%, respectively. We show the training curves of ', 'original_lines': 'improved by 4.5–6.4%, 1.5–4.0%, and 1.8–4.8%, respectively. We show the training curves of ', 'after_paragraph_idx': 31, 'before_paragraph_idx': 31}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_after': 'if the number of training epochs was small, the performance of BC learning was lower than that of standard learning. We can say that BC learning always improves the performance as long as we use a sufficiently large number of training epochs. Additionally, the number of training epochs needed would become large when there are many classes. Figure 6: Error rate vs. # of training epochs. 4.2 ABLATION ANALYSIS ', 'paragraph_idx': 31, 'before_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_before': '600 training epochs are sufficient for both ESC- 10 and ESC-50. However, this number is insufficient for BC learning. Although BC learning per- formed better than standard learning with 600 epochs, improved performance was achieved when ', 'modified_lines': 'using more training epochs (900 and 1;200 epochs for ESC-10 and ESC-50, respectively). However, ', 'original_lines': 'using more training epochs (900 and 1,200 epochs for ESC-10 and ESC-50, respectively). However, total epochs total epochs r o r r e 1200 1200 900 300 600 900 600 300 22 26 10 24 12 28 ', 'after_paragraph_idx': 31, 'before_paragraph_idx': 31}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Comparison of Setting Mixing method ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'model can learn the between-class examples more efficiently when using our ratio label. ', 'modified_lines': '', 'original_lines': 'Table 2: Ablation analysis. We trained EnvNet on ESC-50 using various settings. The results show that the training data variation is not the only matter. Err. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Input (proposed) pool2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'N = 2 (proposed) N = 2 or 3 N = 3 ', 'modified_lines': '', 'original_lines': ' 26.8 26.5 24.1 26.5 25.0 24.1 27.3 24.8 24.1 24.1 25.3 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Where to mix ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'pool4 fc5 fc6 ', 'modified_lines': '', 'original_lines': ' 24.1 27.1 28.7 28.8 28.5 28.6 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'three sounds belonging to different classes. When we mixed three sounds, we used a mixing method that is an extended version of Eqn. 2 for three classes, and we generated the mixing ratio from is marginally better than N = 2, it does not represent a significant difference. It is interesting to note that the performance of N = 3 is worse than that of N = 2 despite the larger variation in training data. We believe that the most important factor is not the training data variation but rather ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(2016). N = 1 or 2 means that we completely randomly selected two sounds to be mixed; sometimes these two sounds were the same class. N = 2 means that we mixed two sounds belonging to different classes (proposed). N = 2 or 3 means that we mixed two and three sounds belonging ', 'modified_lines': 'to different classes with probabilities of 0.5 and 0.5, respectively. N = 3 means that we mixed Dir(1, 1, 1). As shown in Table 2, the proposed N = 2 performed well. Although N = 2 or 3 ', 'original_lines': 'to different classes with probabilities of 0.5 and 0.5, respectively. N = 3 means that we mixed Dir(1, 1, 1). As shown in Table 2, the proposed N = 2 performed well. Although N = 2 or 3 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': None, 'context_after': 'Where to mix. Finally, we investigated what occurs when we mix two examples within the network. We input two sounds to be mixed into the model and performed the forward calculation to the mixing point. We then mixed the activations of two sounds at the mixing point and performed the As shown in Table 2, the performance tended to improve when we mixed two examples at the layer near the input layer. The performance was the best when we mixed in the input space. Mixing in the input space is the best choice, not only because it performs the best, but also because it does not require additional forward/backward computation and is easy to implement. 5 CONCLUSION ', 'paragraph_idx': 28, 'before_section': None, 'context_before': 'feature distributions. Mixing more than two sounds leads to increased training data variation, but we expect that it cannot efficiently achieve them. ', 'modified_lines': '29.2 ± 0.1 rest of the forward calculation. We mixed two activations h1 and h2 simply by r h1 + (1 − r) h2. ', 'original_lines': '29.2 ± 0.1 rest of the forward calculation. We mixed two activations h1 and h2 simply by r h1 + (1 − r) h2. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'ESC-50 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '# of epochs Initial LR ', 'modified_lines': '', 'original_lines': ' LR schedule Warmup ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
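The "where to mix" ablation recorded above blends two examples' activations at an intermediate layer via r h1 + (1 − r) h2. Below is a minimal sketch of that procedure; the split of the model into `forward_to_mix` and `forward_from_mix` halves is our own illustrative assumption, not a name from the paper.

```python
def mix_at_layer(forward_to_mix, forward_from_mix, x1, x2, r):
    """Run both inputs up to the chosen layer, blend the activations
    with ratio r, then finish the forward pass from the blend."""
    h1 = forward_to_mix(x1)          # activations of the first sound
    h2 = forward_to_mix(x2)          # activations of the second sound
    h = r * h1 + (1.0 - r) * h2      # between-class activation
    return forward_from_mix(h)

# Mixing "in the input space" is the special case where forward_to_mix is
# the identity; per the ablation it performed best and needs no extra
# forward/backward computation.
```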
|
2018-01-05 21:23:13
|
ICLR.cc/2018/Conference
|
BkYphDTmG
|
B13Cis7BM
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 RELATED WORK ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'trained it with BC learning. As a result, we achieved a 15:1% error rate on a benchmark dataset ESC-50 (Piczak, 2015b), which surpasses the human level. ', 'modified_lines': 'We argue that our BC learning is different from the so-called data augmentation methods we in- troduced above. Although BC learning can be regarded as a data augmentation method from the viewpoint of using augmented data, the novelty or key point of our method is not mixing multiple sounds, but rather learning method of training the model to output the mixing ratio. This is a fun- damentally different idea from previous data augmentation methods. In general, data augmentation methods aim to improve the generalization ability by generating additional training data which is likely to appear in testing phase. Thus, the problem to be solved is the same in both training and testing phase. On the other hand, BC learning only uses mixed data and labels for training, while mixed data does not appear in testing phase. BC learning is a method to improve the classification performance by solving a problem of predicting the mixing ratio between two different classes. To the best of our knowledge, this is the first time a learning method that employs a mixing ratio be- tween different classes has been proposed. We intuitively describe why such a learning method is effective and demonstrate the effectiveness of BC learning through wide-ranging experiments. ', 'original_lines': 'We argue that our approach is different from the so-called data augmentation methods we introduced above. In general, the problem to be solved is the same in both training and testing phase. Data augmentation methods aim to improve the generalization ability by generating additional training data which is likely to appear in testing phase. Thus, the most important thing is the training data variation. On the other hand, our BC learning aims to learn a classification problem by solving the problem of predicting the mixing ratio between two different classes. Our method only uses mixed data and labels for training, without using the original pure data and labels, while the model solves a single-label classification problem in the testing phase. We believe that the key point of BC learning is not the increase in data variation but the learning method itself. To the best of our knowledge, this is the first time a learning method that employs a mixing ratio between different classes has been proposed. Hence, our BC learning is a novel learning method. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 6}, {'section': 'Abstract', 'after_section': None, 'context_after': '2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'the most standard and important data augmentation methods is cropping (Piczak, 2015a; Aytar et al., 2016; Tokozume & Harada, 2017). The training data variation increases, and we are able to more efficiently train the network when the short section (approximately 1–2 s) of the training sound ', 'modified_lines': '', 'original_lines': 'cropped from the original data to the network, and not the whole section, is inputted. A similar method is generally used in the test phase. 
Multiple sections of test data are input with a stride, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 APPROACHES TO ACHIEVE HIGH PERFORMANCE', 'after_section': '2.2 APPROACHES TO ACHIEVE HIGH PERFORMANCE', 'context_after': 'and the average of the output predictions is used to classify the test sound. Salamon & Bello (2017) proposed the usage of additional training data created by time stretching (slow down or speed up the sounds), pitch shifting (raise or lower the pitch of sounds), dynamic range compression (compress ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'classes with a random ratio. We input the mixed sound to the model and train the model to output the mixing ratio using the KL loss. ', 'modified_lines': 'cropped from the original data to the network, and not the whole section, is inputted. A similar method is generally used in the test phase. Multiple sections of test data are input with a stride, ', 'original_lines': '', 'after_paragraph_idx': 10, 'before_paragraph_idx': None}, {'section': '2.2 APPROACHES TO ACHIEVE HIGH PERFORMANCE', 'after_section': '2.2 APPROACHES TO ACHIEVE HIGH PERFORMANCE', 'context_after': 'Next, we describe the approaches of utilizing external data/knowledge. Aytar et al. (2016) proposed to learn rich sound representations using pairs of image and sound included in a large amount of ', 'paragraph_idx': 10, 'before_section': '2.2 APPROACHES TO ACHIEVE HIGH PERFORMANCE', 'context_before': 'searchers also proposed using additional training data created by mixing multiple training examples. Parascandolo et al. (2016) applied this method to polyphonic sound event detection. Takahashi et al. (2016) applied this method to single-label sound event classification, but only the sounds belonging ', 'modified_lines': 'to the same class were mixed. Our method is different from both of them in that we employ a mixing ratio between different classes for training. ', 'original_lines': 'to the same class were mixed. Both of them treated the mixed data as additional training data, and only focused on the increase in data variation. To the best of our knowledge, there has been no method that employs a mixing ratio between different classes for training. ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 10}, {'section': '3.3 HOW BC LEARNING WORKS', 'after_section': '3.3 HOW BC LEARNING WORKS', 'context_after': 'linearly-separable features are learned in a hidden layer close to the output layer (An et al., 2015). 4 ', 'paragraph_idx': 17, 'before_section': '3.3 HOW BC LEARNING WORKS', 'context_before': '3.3.1 ENLARGEMENT OF FISHER’S CRITERION ', 'modified_lines': 'BC leaning leads to an enlargement of Fisher’s criterion (i.e., the ratio of the between-class dis- tance to the within-class variance). We explain the reason in Fig. 2. In deep neural networks, ', 'original_lines': 'BC leaning leads to an enlargement of the Fisher’s criterion (i.e., the ratio of the between-class distance to the within-class variance). We explain the reason in Fig. 2. In deep neural networks, ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': '3.3 HOW BC LEARNING WORKS', 'after_section': None, 'context_after': 'circles represent the feature distribution of the mixed sounds of dog bark and rain with a ratio of 0:8 : 0:2, and the black dotted line represents the trajectory of the feature when we input a mixture of two particular sounds to the model changing the mixing ratio from 0 to 1. 
This figure shows that the mixture of two sounds is projected into the point near the internally dividing point of two features, and the features of the mixed sounds are distributed between two classes, as we expected. 3.3.2 REGULARIZATION OF POSITIONAL RELATIONSHIP AMONG FEATURE DISTRIBUTIONS ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'Figure 2: BC learning enlarges Fisher’s criterion in the feature space, by training the model to output the mixing ratio between two classes. We hypothesize that a mixed sound mixr(x1; x2) is projected into the point near the internally dividing point of f (x1) and f (x2) , considering the characteristic of sounds. Middle: When Fisher’s criterion is small, some mixed examples are projected into one of the classes, and BC learning gives a large penalty. Right: When Fisher’s criterion is large, most of the mixed examples are projected into between- class points, and BC learning gives a small penalty. Therefore, BC learning leads to such a feature space. Besides, we can generate a new sound simply by adding the wave- form data of two sounds, and humans can recognize both of two sounds and perceive which of two sounds is louder or softer from the mixed sound. Therefore, it is expected that an internally dividing point of the input space almost corresponds to that of the semantic feature space, at least for sounds. Then, the feature distribution of the mixed sounds of class A and class B with a certain ratio would be located near the internally dividing point of the original feature distribution of class A and B, and the variance of the feature dis- tribution of the mixed sounds is proportional to the original feature distribution of class A and B. To investigate whether this hypoth- esis is correct or not, we visualized the feature distributions of the standard-learned model using PCA. We used the activations of fc6 of EnvNet (Tokozume & Harada, 2017) against training data of ESC- 10 (Piczak, 2015b). The results are shown in Fig. 3. The magenta Figure 3: Visualization of the feature space using PCA. The features of the mixed sounds are distributed between two classes. If Fisher’s criterion is small, the feature distribution of the mixed sounds becomes large, and would have a large overlap with one or both of the feature distribution of class A and B (Fig. 2(middle)). In this case, some mixed sounds are projected into one of the classes as shown in this figure, and the model cannot output the mixing ratio. BC learning gives a penalty to this situation because BC learning trains a model to output the mixing ratio. If Fisher’s criterion is large, on the other hand, the overlap becomes small (Fig. 2(right)). The model becomes able to output the mixing ratio, and BC learning gives a small penalty. Therefore, BC learning enlarges Fisher’s criterion between any two classes in the feature space. ', 'original_lines': 'Figure 2: BC learning enlarges the Fisher’s criterion in the feature space, by training the model to output the mixing ratio between two different classes. We hypothesize that a mixed sound mixr(x1; x2) is projected into the point near the internally dividing point of f (x1) and f (x2) , considering the characteristic of sounds. Middle: When the Fisher’s criterion is small, some mixed examples are projected into one of the classes, and BC learning gives a large penalty. 
Right: When the Fisher’s criterion is large, most of the mixed examples are projected into between-class points, and BC learning gives a small penalty. Therefore, BC learning leads to such a feature space. Besides, we can generate a new sound simply by adding the waveform data of two sounds, and humans can recognize both of two sounds and perceive which of two sounds is louder or softer from the mixed sound. Therefore, it is expected that an internally dividing point of the input space almost corresponds to that of the semantic feature space, at least for sounds. Then, the feature distribution of the mixed sounds of class A and class B with a certain ratio would be located near the internally dividing point of the original feature distribution of class A and B, and the variance of the feature distribution of the mixed sounds is proportional to the original feature distribution of class A and B. To investigate whether this hypothesis is correct or not, we visualized the feature distributions of the standard-learned model using PCA. We used the activations of fc6 of EnvNet (Tokozume & Harada, 2017) against training data of ESC-10 (Piczak, 2015b). The results are shown in Fig. 3. The magenta Figure 3: Visualization of the feature space using PCA. The features of the mixed sounds are distributed between two classes. If the Fisher’s criterion is small, the feature distribution of the mixed sounds becomes large, and would have a large overlap with one or both of the feature distribution of class A and B (Fig. 2(middle)). In this case, some mixed sounds are projected into one of the classes as shown in this figure, and the model cannot output the mixing ratio. BC learning gives a penalty to this situation because BC learning trains a model to output the mixing ratio. If the Fisher’s criterion is large, on the other hand, the overlap becomes small (Fig. 2(right)). The model becomes able to output the mixing ratio, and BC learning gives a small penalty. Therefore, BC learning enlarges the Fisher’s criterion between any two classes in the feature space. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.1 SOUND RECOGNITION NETWORKS', 'after_section': None, 'context_after': '[figure: input space vs. feature space under standard and BC learning; PCA scatter of dog bark, rain, others, and mixed features for r = 0 to 1] a sound of other classes. In this case, we assume that the features of each class are distributed as in Fig. 4(upper left). The decision boundary of class ', 'paragraph_idx': 9, 'before_section': None, 'context_before': 'of standard-learned model when we input a mixture of two particular training sounds (dog bark and rain) to the model changing the mixing ratio from 0 to 1. The output probability of dog bark monotonically increases and that of rain monotonically decreases as we expected, but the model ', 'modified_lines': 'classifies the mixed sound as baby cry when the mixing ratio is within the range of 0.45–0.8. This is an undesirable state because there is little possibility that a mixed sound of two classes becomes ', 'original_lines': 'classifies the mixed sound as baby cry when the mixing ratio is within the range of 0.45–0.8. This is an undesirable state because there is little possibility that a mixed sound of two classes becomes ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
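The BC-learning records above mix two sounds of different classes with a random ratio and train the model to output that ratio using the KL loss. Below is a minimal sketch of generating one between-class example with the simple ratio mix (the ablation's Eqn. (1) variant; the paper's preferred Eqn. (2) additionally compensates for the sound-pressure difference of the two sources). The helper names are ours, not the paper's.

```python
import numpy as np

def one_hot(label, num_classes):
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def bc_example(x1, t1, x2, t2, num_classes, rng=np.random.default_rng()):
    """Mix two waveforms of different classes with a random ratio r and
    build the ratio label t = r t1 + (1 - r) t2."""
    r = rng.uniform()
    x = r * x1 + (1.0 - r) * x2                  # mixed waveform
    t = r * one_hot(t1, num_classes) + (1.0 - r) * one_hot(t2, num_classes)
    return x, t

def kl_loss(t, model_probs, eps=1e-12):
    """KL divergence from the model's softmax output to the ratio label."""
    return float(np.sum(t * (np.log(t + eps) - np.log(model_probs + eps))))
```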
|
2018-01-22 18:22:12
|
ICLR.cc/2018/Conference
|
B13Cis7BM
|
BJGoaLx0W
|
[]
|
2018-01-25 15:42:05
|
ICLR.cc/2018/Conference
|
BJGoaLx0W
|
BkuOK7TPf
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 INTRODUCTION The amount and quality of training data and how to feed it are important for machine learning, partic- ularly for deep learning. Various approaches have been proposed to improve the sound recognition ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep ', 'modified_lines': 'sound recognition network (EnvNet-v2) and train it with BC learning. As a result, we achieved a performance surpasses the human level1. Sound recognition has been conventionally conducted by applying classifiers such as SVM to local features such as MFCC or log-mel features (Logan et al., 2000; Vacher et al., 2007; Łopatka et al., 2010). Convolutional neural networks (CNNs) (LeCun et al., 1998), which have achieved success in image recognition tasks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), have recently proven to be effective in tasks related to series data, such as speech recognition (Abdel-Hamid et al., 2014; Sainath et al., 2015a;b) and natural language processing (Kim, 2014; Zhang et al., 2015). Some researchers applied CNNs to sound recognition tasks and achieved high performance (Aytar et al., 2016; Dai et al., 2017; Tokozume & Harada, 2017). ', 'original_lines': 'sound recognition network (DSRNet) and train it with BC learning. As a result, we achieved a performance surpasses the human level. Sound recognition has been conventionally conducted by applying classifiers, such as SVM or GMM, to local features, such as MFCC or log-mel features (Logan et al., 2000; Vacher et al., 2007; Łopatka et al., 2010). Convolutional neural networks (CNNs) (LeCun et al., 1998), which have achieved success in image recognition tasks (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), have recently proven to be effective in tasks related to series data, such as speech recognition (Abdel-Hamid et al., 2014; Sainath et al., 2015a;b) and natural language process- ing (Kim, 2014; Zhang et al., 2015). Some researchers applied CNNs to sound recognition tasks and achieved high performance: applying a CNN to local features (Piczak, 2015a) and learning directly from raw waveforms (Aytar et al., 2016; Dai et al., 2017; Tokozume & Harada, 2017). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'In this paper, as a novel third approach we propose a learning method for deep sound recognition: Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'sounds or adding a background noise (Tokozume & Harada, 2017; Salamon & Bello, 2017). Re- searchers also proposed using additional training data created by mixing multiple training examples (Parascandolo et al., 2016; Takahashi et al., 2016). The second approach is to use external data or ', 'modified_lines': 'knowledge. Aytar et al. (2016) proposed learning rich sound representations using a large amount of unlabeled video datasets and pre-trained image recognition networks. The sound dataset expansion was also conducted (Salamon et al., 2014; Piczak, 2015b; Gemmeke et al., 2017). 
', 'original_lines': 'knowledge. Aytar et al. (Aytar et al., 2016) proposed learning rich sound representations using a large amount of unlabeled video datasets and pre-trained image recognition networks. The sound dataset expansion was also conducted (Salamon et al., 2014; Piczak, 2015b; Gemmeke et al., 2017). ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'data of two sounds. The advantages of BC learning are not limited only to the increase in varia- tion of the training data; BC learning leads to an enlargement of Fisher’s criterion (Fisher, 1936) (i.e., the ratio of the between-class distance to the within-class variance) in the feature space, and a regularization of the positional relationship among the feature distributions of the classes. trained it with BC learning. As a result, we achieved a 15:1% error rate on a benchmark dataset ESC-50 (Piczak, 2015b), which surpasses the human level. ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the network to output the mixing ratio. Our method focuses on the char- acteristic of the sound, from which we can generate a new sound simply by adding the waveform ', 'modified_lines': '1The code is publicly available at https://github.com/mil-tokyo/bc_learning_sound/. 1 Published as a conference paper at ICLR 2018 The experimental results show that BC learning improves the performance on various sound recogni- tion networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we constructed a new deep sound recognition network (EnvNet-v2) and ', 'original_lines': ' 1 Under review as a conference paper at ICLR 2018 The experimental results show that BC learning improves the performance on various sound recog- nition networks, datasets, and data augmentation schemes, in which BC learning proves to be al- ways beneficial. Furthermore, we constructed a new deep sound recognition network (DSRNet) and ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'mixed data does not appear in testing phase. BC learning is a method to improve the classification performance by solving a problem of predicting the mixing ratio between two different classes. To the best of our knowledge, this is the first time a learning method that employs a mixing ratio be- ', 'paragraph_idx': 8, 'before_section': '1 INTRODUCTION', 'context_before': 'damentally different idea from previous data augmentation methods. In general, data augmentation methods aim to improve the generalization ability by generating additional training data which is likely to appear in testing phase. Thus, the problem to be solved is the same in both training and ', 'modified_lines': 'testing phase. On the other hand, BC learning uses only mixed data and labels for training, while ', 'original_lines': 'testing phase. On the other hand, BC learning only uses mixed data and labels for training, while ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where m is the number of classes, and (cid:17) is the learning rate. 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': '@(cid:18) ; ', 'modified_lines': '', 'original_lines': ' (4) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_after': 'Learning ', 'paragraph_idx': 29, 'before_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_before': 'Logmel-CNN (Piczak, 2015a) + BN ', 'modified_lines': 'EnvNet-v2 (ours) EnvNet-v2 (ours) + strong augment ', 'original_lines': 'DSRNet (ours) DSRNet (ours) + strong augment ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 29}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': None, 'context_after': '4.1.1 EXPERIMENT ON EXISTING NETWORKS ', 'paragraph_idx': 33, 'before_section': None, 'context_before': '- - ', 'modified_lines': 'Figure 5: Training curves of EnvNet and EnvNet-v2 on ESC-50 (average of all trials). ', 'original_lines': 'Figure 5: Training curves of EnvNet and DSRNet on ESC-50 (average of all trials). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '7 The results are also shown in the upper half of Table 1, and the training curves on ESC-50 are given in Fig. 5(right). The performance was also improved with BC learning, and the degree of the improvement was greater than other networks (7:4%, 3:6%, and 7:5% on ESC-50, ESC-10, lowest on ESC-50 and UrbanSound8K among all the models including Logmel-CNN + BN, which uses powerful hand-crafted features. Moreover, the error rate on ESC-50 (18:2%) is comparable to 4.1.3 EXPERIMENT WITH STRONG DATA AUGMENTATION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '4.1.2 EXPERIMENT ON A DEEPER NETWORK To investigate the effectiveness of BC learning on deeper networks, we constructed a deep sound ', 'modified_lines': 'recognition network based on EnvNet, which we refer to as EnvNet-v2, and trained it with both 101520253035404503006009001200error rate (%)epochsEnvNet standardEnvNet BC (ours)10152025303540450300600900120015001800error rate (%)epochsEnvNet-v2 standardEnvNet-v2 BCEnvNet-v2 std.+augmentEnvNet-v2 BC+augment Published as a conference paper at ICLR 2018 standard and BC learning. The main differences between EnvNet and EnvNet-v2 are as follows: 1) EnvNet uses a sampling rate of 16 kHz for the input waveforms, whereas EnvNet-v2 uses 44:1 kHz; and 2) EnvNet consists of 7 layers, whereas EnvNet-v2 consists of 13 layers. A detailed configuration is provided in the appendix. and UrbanSound8K, respectively). The error rate of EnvNet-v2 trained with BC learning was the human performance reported by Piczak (2015b) (18:7%). The point is not that our EnvNet-v2 is well designed, but that our BC learning successfully elicits the true value of a deep network. ', 'original_lines': 'recognition network based on EnvNet, which we refer to as DSRNet, and trained it with both standard 101520253035404503006009001200error rate (%)epochsEnvNet standardEnvNet BC (ours)10152025303540450300600900120015001800error rate (%)epochsDSRNet standardDSRNet BCDSRNet std.+augmentDSRNet BC+augment Under review as a conference paper at ICLR 2018 and BC learning. 
The main differences between EnvNet and DSRNet are as follows: 1) EnvNet uses a sampling rate of 16 kHz for the input waveforms, whereas DSRNet uses 44.1 kHz; and 2) EnvNet consists of 7 layers, whereas DSRNet consists of 13 layers. A detailed configuration is provided in the appendix. and UrbanSound8K, respectively). The error rate of DSRNet trained with BC learning was the human performance reported by Piczak (2015b) (18.7%). The point is not that our DSRNet is well designed, but that our BC learning successfully elicits the true value of a deep network. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.3 HOW BC LEARNING WORKS', 'after_section': None, 'context_after': 'are given in Fig. 5(right). With BC learning, the performance was significantly improved even when we used a strong data augmentation scheme. Furthermore, the performance on ESC-50 (15.1%) surpasses the human performance (18.7%). BC learning performs well on various networks, datasets, ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'when employing BC learning) using linear interpolation, and gain augmentation was performed just before inputting to the network (thus, after mixing when using BC learning). ', 'modified_lines': 'The results for EnvNet-v2 are shown in the lower half of Table 1, and the training curves on ESC-50 ', 'original_lines': 'The results for DSRNet are shown in the lower half of Table 1, and the training curves on ESC-50 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Input (proposed) pool2 pool3 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '28.5 ± 0.1 28.6 ± 0.2 ', 'modified_lines': '', 'original_lines': 'Table 2: Ablation analysis. We trained EnvNet on ESC-50 using various settings. The results show that the training data variation is not the only matter. Label. We compared the different labels that we applied to the mixed sound. As shown in Table 2, the proposed ratio label of t = r t1 + (1 − r) t2 performed the best. When we applied a single label of the dominant sound (i.e., t = t1 if r > 0.5, otherwise t = t2) and trained the model using softmax cross entropy loss, the performance was also improved compared to that of standard learning. However, the degree of improvement was small. When we applied a multi-label (i.e., t = t1 + t2) and trained the model using sigmoid cross entropy loss, the performance was better than when using a single label. However, the performance was worse than when using our ratio label. The model can learn the between-class examples more efficiently when using our ratio label. Comparison of Setting Mixing method Label # mixed classes Eqn. (1) (2) + RMS (2) + A-weighting (proposed) Single Multi Ratio (proposed) N = 1 N = 1 or 2 N = 2 (proposed) N = 2 or 3 N = 3 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 METHOD DETAILS', 'after_section': None, 'context_after': 'Where to mix ', 'modified_lines': 'the number of sound classes that we mixed. N = 1 in Table 2 means that we mixed two sounds belonging to the same class, which is similar to Takahashi et al. (2016). N = 1 or 2 means that we completely randomly selected two sounds to be mixed; sometimes these two sounds were the same class. N = 2 or 3 means that we mixed two and three sounds belonging to different classes with probabilities of 0.5 and 0.5, respectively. When we mixed three sounds, we generated a mixing ratio from Dir(1,1,1) and mixed three sounds using a method that is an extended version of Eqn. 2 to three classes. As shown in Table 2, the proposed N = 2 performed the best. N = 2 or 3 also achieved a good performance. It is interesting to note that the performance of N = 3 is worse than that of N = 2 despite the larger variation in training data. We believe that the most important factor is not the training data variation but rather the enlargement of Fisher’s criterion and the regularization of the positional relationship among the feature distributions. Mixing more than two sounds leads to increased training data variation, but we expect that it cannot efficiently achieve them. Standard learning ', 'original_lines': 'Standard learning the number of classes of sounds that we mixed. N = 1 in Table 2 means that we mixed two sounds belonging to the same class, which is similar to Takahashi et al. (2016). N = 1 or 2 means that we completely randomly selected two sounds to be mixed; sometimes these two sounds were the same class. N = 2 means that we mixed two sounds belonging to different classes (proposed). N = 2 or 3 means that we mixed two and three sounds belonging to different classes with probabilities of 0.5 and 0.5, respectively. N = 3 means that we mixed three sounds belonging to different classes. When we mixed three sounds, we used a mixing method that is an extended version of Eqn. 2 for three classes, and we generated the mixing ratio from Dir(1, 1, 1). As shown in Table 2, the proposed N = 2 performed well. Although N = 2 or 3 is marginally better than N = 2, it does not represent a significant difference. It is interesting to note that the performance of N = 3 is worse than that of N = 2 despite the larger variation in training data. We believe that the most important factor is not the training data variation but rather the enlargement of Fisher’s criterion and the regularization of the positional relationship among the feature distributions. Mixing more than two sounds leads to increased training data variation, but we expect that it cannot efficiently achieve them. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 CONCLUSION', 'after_section': None, 'context_after': 'REFERENCES ', 'paragraph_idx': 45, 'before_section': '5 CONCLUSION', 'context_before': 'We proposed a novel learning method for deep sound recognition, called BC learning. Our method improved the performance on various networks, datasets, and data augmentation schemes. More- ', 'modified_lines': 'over, we achieved a performance that surpasses the human level by constructing a deeper network named EnvNet-v2 and training it with BC learning. BC learning is a simple and powerful method that improves various sound recognition methods and elicits the true value of large-scale networks. Furthermore, BC learning is innovative in that a discriminative feature space can be learned from between-class examples, without inputting pure examples. We assume that the core idea of BC learning is generic and could contribute to the improvement of the performance of tasks of other modalities. ACKNOWLEDGEMENT This work was supported by JST CREST Grant Number JPMJCR1403, Japan. ', 'original_lines': 'over, we achieved a performance that surpasses the human level by constructing a new deep sound recognition network named DSRNet and training it with BC learning. BC learning is a simple and powerful method that improves various sound recognition methods and elicits the true value of large-scale networks. Furthermore, BC learning is innovative in that a discriminative feature space can be learned from between-class examples, without inputting pure examples. We assume that the core idea of BC learning is generic and could contribute to the improvement of the performance of tasks of other modalities. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 45}, {'section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'after_section': None, 'context_after': 'EnvNet SoundNet5 M18 Logmel-CNN EnvNet SoundNet5 M18 Logmel-CNN 600 300 ', 'paragraph_idx': 29, 'before_section': '4.1 COMPARISON BETWEEN STANDARD LEARNING AND BC LEARNING', 'context_before': 'SoundNet5 M18 Logmel-CNN ', 'modified_lines': 'EnvNet-v2 EnvNet-v2 EnvNet-v2 ', 'original_lines': 'DSRNet DSRNet DSRNet ', 'after_paragraph_idx': None, 'before_paragraph_idx': 29}]
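The N-mixed-classes ablation in the record above draws the ratios for three sounds from Dir(1, 1, 1). A minimal sketch of that N-way generalization follows; the function name is ours, not the paper's.

```python
import numpy as np

def bc_mix_n(waveforms, labels, num_classes, rng=np.random.default_rng()):
    """Mix N sounds of distinct classes with Dirichlet(1, ..., 1) ratios
    and build the corresponding N-way ratio label."""
    ratios = rng.dirichlet(np.ones(len(waveforms)))
    x = sum(r * w for r, w in zip(ratios, waveforms))
    t = np.zeros(num_classes)
    for r, label in zip(ratios, labels):
        t[label] += r          # ratio label sums to 1 over the classes
    return x, t
```

For N = 2 this reduces to the proposed two-class mix, since Dir(1, 1) makes the ratio uniform on [0, 1].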
|
2018-02-23 05:41:03
|
ICLR.cc/2018/Conference
|
HJtynQRpW
|
H1y_COa-G
|
[{'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': 'N Y 80.0% 95.0% 69.5% 89.1% 86.0% 97.6% 72.9% 92.3% 96.7% 98.4% 88.0% 96.5% 97.3% 98.4% 88.1% 97.0% SIAMESE KERNEL SVM SIAMESE KERNEL SVM N Y 98.5% 99.3% 94.0% 98.0% 98.4% 99.2% 94.1% 98.2% ', 'paragraph_idx': 21, 'before_section': '4 EXPERIMENTS', 'context_before': 'BASELINE CLASSIFIER BASELINE CLASSIFIER ', 'modified_lines': 'CONVOLUTIONAL SIAMESE NET CONVOLUTIONAL SIAMESE NET MATCHING NET MATCHING NET N Y N Y 98.1% 98.9% 93.8% 98.5% 97.9% 98.7% 93.5% 98.7% NEURAL STATISTICIAN Edwards & Storkey (2016) N N TCML Mishra et al. (2017) N PROTOTYPICAL NETWORKS Snell et al. (2017) N METANET Munkhdalai & Yu (2017) 98.1% 99.5% 93.2% 98.1% 99.0% 99.8% 97.6% 99.4% 98.8% 99.7% 96.0% 98.9% 99.0% 98.7% 97.1% 97.0% ', 'original_lines': 'CONVOLUTIONAL SIAMESE NET N CONVOLUTIONAL SIAMESE NET Y MATCHING NET MATCHING NET N Y 98.1% 98.9% 93.8% 98.5% 97.9% 98.7% 93.5% 98.7% ', 'after_paragraph_idx': None, 'before_paragraph_idx': 21}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': 'wlen can be considered as a hyperparameter of the model, so optimal value of wlen depends on the task as we will see. Furthermore, the length of each data point is exactly wlen seconds, where wlen ≤ k is satisfied. In addition, these data points can partially overlap, but k seconds length training data points mustn’t overlap with evaluation points. Few seconds classification is an important task in real-world applications because it is exhausting to collect a large amount of data from speakers for robust classification. In this section, two scenarios are investigated: the first is a real-time application for speaker recognition, where k is 1 second, this can be considered as the upper limit of the online recognition. The second case is where k is 5 seconds, it is considered as an offline scenario. TIMIT (Garofolo et al., 1993) is one of the most widely used English speech corpus. The dataset was originally designed for speech-to-text tasks. However, this dataset is perfect for speaker iden- learning problem on this dataset, so two different baseline models are introduced. In this experiment, the official training set is used for training the models to learn representation and ', 'paragraph_idx': 21, 'before_section': '4 EXPERIMENTS', 'context_before': 'The task of one-shot learning is poorly defined on audio data because one-shot can be 1 second or even 5 seconds as well, therefore it is required to redefine the task. In this paper k-sec learning is defined so that the length of the training data is k seconds regardless of the sample rate. Eventually, ', 'modified_lines': ' 2The figure originally published in Koch et al. (2015), this version contains minor modifications. 3The table contains results of Vinyals et al. (2016) regarding baseline classifier, Convolutional Siamese Net, and Matching Net models. The Siamese kernel SVM results are measured in the same experimental setup as other models 6 Under review as a conference paper at ICLR 2018 tification task too. The used dataset, which is projected from TIMIT contains audio files and their labels are the speakers. It contains 630 native speakers and the total number of sentences is 6300. Each speaker speaks for about 30 seconds. The official training set contains 462 people’s voice. 
As a matter of fact, the training set is disjoint from the evaluation set with respect to speakers, so none of the training set speakers appears in the test set, because TIMIT is a speech-to-text oriented dataset. This partitioning of the data makes the dataset unsuitable for a classical classification task, but it makes the TIMIT dataset perfect for the k-sec learning task. There is no baseline known for k-sec ', 'original_lines': ' 2The figure was originally published in Koch et al. (2015); this version contains minor modifications. 3The table contains results of Vinyals et al. (2016) regarding the baseline classifier, Convolutional Siamese Net, and Matching Net models. The Siamese kernel SVM results are measured in the same experimental setup as the other models. tification task too. It contains 630 native speakers and the total number of sentences is 6300. Each speaker speaks for about 30 seconds. The official training set contains 462 people’s voices. As a matter of fact, the training set is disjoint from the evaluation set with respect to speakers, so none of the training set speakers appears in the test set, because TIMIT is a speech-to-text oriented dataset. This partitioning of the data makes the dataset unsuitable for a classical classification task, but it makes the TIMIT dataset perfect for the k-sec learning task. There is no baseline known for k-sec ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 21}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2, 2015. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'arXiv:1412.6980, 2014. ', 'modified_lines': '', 'original_lines': '9 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
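The k-sec learning records above crop wlen-second data points (wlen ≤ k) from a k-second training budget per speaker; crops may overlap each other but must not overlap the evaluation audio. Below is a minimal sketch of that cropping under those stated constraints; all names are hypothetical.

```python
def make_k_sec_crops(waveform, sample_rate, k, wlen, stride):
    """Cut wlen-second crops out of the first k seconds of a speaker's
    audio; the remaining audio is reserved for evaluation."""
    assert wlen <= k, "each data point fits inside the k-second budget"
    budget = waveform[: int(k * sample_rate)]   # training-only region
    win = int(wlen * sample_rate)
    hop = int(stride * sample_rate)
    return [budget[s : s + win]
            for s in range(0, len(budget) - win + 1, hop)]
```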
|
2017-12-12 16:05:59
|
ICLR.cc/2018/Conference
|
H1y_COa-G
|
r1uR0u6-M
|
[]
|
2017-12-12 16:07:44
|
ICLR.cc/2018/Conference
|
r1uR0u6-M
|
By7Nvk5pZ
|
[]
|
2018-01-25 15:42:51
|
ICLR.cc/2018/Conference
|
HJG45zZCW
|
r17wKy-0-
|
[]
|
2018-01-25 15:41:21
|
ICLR.cc/2018/Conference
|
SklMhMWA-
|
SJCedmZAZ
|
[]
|
2017-10-27 21:41:09
|
ICLR.cc/2018/Conference
|
SJCedmZAZ
|
SkVCdmZA-
|
[{'section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'after_section': None, 'context_after': '(7) If the conditions of Section 2.2 or 2.3 apply, our choice of prior over functions implies that z1, ..., zn, z∗ are n + 1 draws from a GP and z∗, z|x∗, x ∼ N (0, K) is a multivariate Gaussian ', 'paragraph_idx': 20, 'before_section': None, 'context_before': '(1/P(t)) ', 'modified_lines': '∫ dz P(z∗, z|x∗, x) P(t|z), where t = (t1, ..., tn)T are the targets on the training set. ', 'original_lines': '∫ dz P(z∗, z|x∗, x) P(t|z). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
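For the Gaussian noise model assumed in this record, the Equation-(7) integral has the standard closed form for the predictive mean and covariance (the Equations (8)–(9) quoted later in this dump). A minimal NumPy sketch using a Cholesky solve:

```python
import numpy as np

def gp_predict(K_DD, K_xD, K_xx, t, noise_var):
    """Exact GP regression posterior:
    mean = K_{x*,D} (K_{D,D} + s^2 I)^{-1} t
    cov  = K_{x*,x*} - K_{x*,D} (K_{D,D} + s^2 I)^{-1} K_{x*,D}^T"""
    n = K_DD.shape[0]
    L = np.linalg.cholesky(K_DD + noise_var * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, t))  # (K + s^2 I)^{-1} t
    v = np.linalg.solve(L, K_xD.T)
    mean = K_xD @ alpha
    cov = K_xx - v.T @ v
    return mean, cov
```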
|
2017-10-27 21:44:44
|
ICLR.cc/2018/Conference
|
SkVCdmZA-
|
ryNtgLDMG
|
[]
|
2017-12-20 02:52:44
|
ICLR.cc/2018/Conference
|
ryNtgLDMG
|
B1KhmUPzf
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'In light of the resurgence in popularity of neural networks, it is timely to revisit this line of work. We delineate the correspondence between deep neural networks and GPs and utilize it for Bayesian ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'network (NN) is a function drawn from a Gaussian process (GP). In the case of single hidden-layer networks, the form of the kernel of this GP is well known (Neal (1994a); Williams (1997)). ', 'modified_lines': 'This correspondence implies that if we choose the hypothesis space to be the class of infinitely wide neural networks, an i.i.d. prior over weights and biases can be replaced with a corresponding GP prior over functions. As noted by (Williams, 1997), this substitution enables exact Bayesian inference for regression using neural networks. The computation requires building the necessary covariance matrices over the training and test sets and straightforward linear algebra computations. ', 'original_lines': 'This correspondence implies that if we choose the hypothesis space to be the class of infinitely wide neural networks, an i.i.d. prior of weights and biases can be replaced with a corresponding GP prior over functions. As noted by (Williams, 1997), this substitution enables exact Bayesian inference for regression using neural networks. The computation requires building the necessary covariance matrices over the training and test sets and straightforward linear algebra computations. ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 4}, {'section': '1.2 SUMMARY OF CONTRIBUTIONS', 'after_section': '1.2 SUMMARY OF CONTRIBUTIONS', 'context_after': 'In this work, as a first proof of concept of our NNGP construction, we focus on exact Bayesian We conduct experiments making Bayesian predictions on MNIST and CIFAR-10 (Section 3) and compare against NNs trained with standard gradient-based approaches. The experiments explore different hyperparameter settings of the Bayesian training including network depth, nonlinearity, training set size (up to and including the full dataset consisting of tens of thousands of images), and weight and bias variance. Our experiments reveal that the best NNGP performance is consistently 2 ', 'paragraph_idx': 10, 'before_section': None, 'context_before': '1.2 SUMMARY OF CONTRIBUTIONS ', 'modified_lines': 'We begin by specifying the form of a GP which corresponds to a deep, infinitely wide neural network – hereafter referred to as the Neural Network GP (NNGP) – in terms of a recursive, deterministic computation of the kernel function. The prescription is valid for generic pointwise nonlinearities. We develop a computationally efficient method (Section 2.5) to compute the covariance function corresponding to deep neural networks with fixed hyperparameters. inference for regression tasks, treating classification as regression on class labels. While less prin- cipled, least-squares classification performs well (Rifkin et al., 2003) and allows us to compare exact inference via a GP to prediction by a trained neural network on well-studied tasks (MNIST and CIFAR-10 classification). Note that it is possible to extend GPs to softmax classification with cross entropy loss (Williams & Barber (1998); Rasmussen & Williams (2006)), which we aim to investigate in future work. 
competitive against that of NNs trained with gradient-based techniques, and the best NNGP setting, chosen across hyperparameters, often surpasses that of conventional training (Section 3, Table 1). We further observe that, with increasing network width, the performance of neural networks with gradient-based training approaches that of the NNGP computation. Furthermore, the performance of the NNGP depends on the structure of the kernel, which can be connected to recent work on signal propagation in networks with random parameters (Schoenholz et al., 2017). ', 'original_lines': 'We begin by specifying the form of the GPs which correspond to deep, infinitely wide neural net- works – hereafter referred to as the Neural Network GP (NNGP) – in terms of a recursive, deter- ministic computation of the kernel function. The prescription is valid for generic pointwise non- linearities. We develop a computationally efficient method (Section 2.5) to compute the covariance function corresponding to deep neural networks with fixed hyperparameters. inference for regression tasks, treating classification as regression on class labels. While less princi- pled, least-squares classification is widely used (Rifkin et al., 2003) and allows us to compare exact inference via a GP to prediction by a trained neural network on well-studied tasks (MNIST and CIFAR-10 classification). Note, however, that it is possible to extend GPs to softmax classification with cross entropy loss (Williams & Barber (1998); Rasmussen & Williams (2006)), which we aim to investigate in future work. competitive against that of gradient-based trained NNs, and the best NNGP setting, chosen across hyperparameters, often surpasses that of conventional training (Section 3, Table 1). We further observe that, with increasing network width, the performance of neural networks with gradient- based training approaches that of, but is bounded by, the NNGP computation. Furthermore, the performance of the NNGP depends on the structure of the kernel, which can be connected to recent work on signal propagation in networks with random parameters (Schoenholz et al., 2017). ', 'after_paragraph_idx': 11, 'before_paragraph_idx': None}, {'section': '2.1 NOTATION', 'after_section': '2.1 NOTATION', 'context_after': 'w/N and σ2 ij, bl ', 'paragraph_idx': 14, 'before_section': '2.1 NOTATION', 'context_before': 'Consider an L-hidden-layer fully-connected neural network with hidden layers of width N and pointwise nonlinearities φ. Let x ∈ Rdin denote the input to the network, and let zL ∈ Rdout denote ', 'modified_lines': 'its output. The ith component of the activations in the lth layer, post-nonlinearity and post-affine transformation, are denoted xl i respectively. We will refer to these as the post- and pre- activations. (We let x0 i ≡ xi for the input, dropping the Arabic numeral superscript, and instead use a Greek superscript xα to denote a particular input α). Weight and bias parameters for the lth layer have components W l i, which are independent and randomly drawn, and we take them all to have zero mean and variances σ2 b , respectively. GP(µ, K) denotes a Gaussian process with mean and covariance functions µ(·), K(·, ·), respectively. i and zl ', 'original_lines': 'its output. The i-th component of the pre- and post-activations of the l-th layer are denoted zl i and i, respectively. (We let x0 xl i ≡ xi for the input, dropping the Arabic numeral superscript, and instead use a Greek superscript xα to denote a particular input α). 
Weight and bias parameters for the l-th layer have components W l i, which are independent and randomly drawn, and we take them all to have zero mean and variances σ2 b , respectively. GP(µ, K) denotes a Gaussian process with mean and covariance functions µ(·), K(·, ·), respectively. ', 'after_paragraph_idx': 15, 'before_paragraph_idx': 14}, {'section': '2.2 REVIEW OF GAUSSIAN PROCESSES AND SINGLE-LAYER NEURAL NETWORKS', 'after_section': None, 'context_after': 'z1 i (xα=1), ..., z1 i (x(cid:48))(cid:3) = σ2 b + σ2 w E (cid:2)x1 (2) where we have introduced C(x, x(cid:48)) as in Neal (1994a); it is obtained by integrating against the distribution of W 0, b0. ', 'paragraph_idx': 16, 'before_section': '2.2 REVIEW OF GAUSSIAN PROCESSES AND SINGLE-LAYER NEURAL NETWORKS', 'context_before': 'j , x1 where we have emphasized the dependence on input x. Because the weight and bias parameters are ', 'modified_lines': 'taken to be i.i.d., the post-activations x1 j(cid:48) are linearly independent for j (cid:54)= j(cid:48). Moreover, since i (x) is a sum of i.i.d terms, it follows from the Central Limit Theorem that in the limit of infinite width N → ∞, z1 i (x) will be Gaussian distributed. Likewise, from the multidimensional Central i (xα=k)} will have a joint multivariate Limit Theorem, any finite collection of {z1 Gaussian distribution, which is exactly the definition of a Gaussian process. Therefore we conclude that z1 i ∼ GP(µ1, K 1), a GP with mean µ1 and covariance K 1, which are themselves independent of i. Because the parameters have zero mean, we have that µ1(x) = E (cid:2)z1 i (x(cid:48))(cid:3) ≡ σ2 i (x)(cid:3) = 0 and, wC(x, x(cid:48)), b + σ2 K 1(x, x(cid:48)) ≡ E (cid:2)z1 i (x)x1 i (x)z1 ', 'original_lines': 'taken to be i.i.d., the postactivations x1 i (x) is a sum of i.i.d terms, it follows from the Central Limit Theorem that in the limit of infinite width N → ∞, z1 i (x) will be Gaussian distributed. Likewise, from the multidimensional Central Limit Theorem, any finite collection of {z1 i (xα=k)} will have a joint multivariate Gaussian distribution, which is exactly the definition of a Gaussian process. Therefore we conclude that i ∼ GP(µ1, K 1), a GP with mean µ1 and covariance K 1, which are themselves independent of i. Because the parameters have zero mean, we have that µ1(x) = E (cid:2)z1 j(cid:48) are independent for j (cid:54)= j(cid:48). Moreover, since z1 K 1(x, x(cid:48)) ≡ E (cid:2)z1 i (x)z1 i (x)x1 i (x(cid:48))(cid:3) ≡ σ2 wC(x, x(cid:48)), i (x)(cid:3) = 0 and, b + σ2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 16}, {'section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'after_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_after': ', but this is equivalent (x(cid:48)). The latter is described by a zero mean, two-dimensional Gaussian whose covariance matrix has distinct entries K l−1(x, x(cid:48)), K l−1(x, x), and K l−1(x(cid:48), x(cid:48)). As such, these are the only three quantities that appear in the result. 
We introduce the shorthand ', 'paragraph_idx': 19, 'before_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_before': '(4) By induction, the expectation in Equation (4) is over the GP governing zl−1 ', 'modified_lines': 'to integrating against the joint distribution of only zl−1 (x) and zl−1 i i i 3 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'to integrating against the joint distribution of only zl−1 (x) and zl−1 i i i 3 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 19, 'before_paragraph_idx': 19}, {'section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'after_section': None, 'context_after': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS ', 'paragraph_idx': 19, 'before_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_before': 'In fact, these recurrence relations have appeared in other contexts. They are exactly the relations derived in the mean field theory of signal propagation in fully-connected random neural networks ', 'modified_lines': '(Poole et al. (2016); Schoenholz et al. (2017)) and also appear in the literature on compositional ker- nels (Cho & Saul (2009); Daniely et al. (2016)). For certain activation functions, Equation (5) can be computed analytically (Cho & Saul (2009); Daniely et al. (2016)). In the case of the ReLU non- linearity, it yields the well-known arccosine kernel (Cho & Saul (2009)) whose form we reproduce in Appendix B. When no analytic form exists, it can instead be efficiently computed numerically, as described in Section 2.5. ', 'original_lines': '(Poole et al. (2016); Schoenholz et al. (2017)) and also appear in the literature on compositional kernels (Cho & Saul (2009); Daniely et al. (2016)). For various activation functions, Equation (5) can be computed analytically (Cho & Saul (2009); Daniely et al. (2016)). In the case of the ReLU nonlinearity, it yields the well-known arccosine kernel (Cho & Saul (2009)) whose form we reproduce in Appendix B. When no analytic form exists, it can instead be efficiently computed numerically, as described in Section 2.5. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 19}, {'section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'after_section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'context_after': 'If the conditions of Section 2.2 or 2.3 apply, our choice of prior over functions implies that z1, ..., zn, z∗ are n + 1 draws from a GP and z∗, z|x∗, x ∼ N (0, K) is a multivariate Gaussian ', 'paragraph_idx': 20, 'before_section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'context_before': '(7) ', 'modified_lines': 'where t = (t1, ..., tn)T are the targets on the training set, and P (t|z) corresponds to observation noise. We will assume a noise model consisting of a Gaussian with variance σ2 (cid:15) centered at z. ', 'original_lines': 'where t = (t1, ..., tn)T are the targets on the training set. ', 'after_paragraph_idx': 21, 'before_paragraph_idx': 20}, {'section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'after_section': None, 'context_after': 'In)−1t (8) (9) where In is the n × n identity. The predicted distribution for z∗|D, x∗ is hence determined from straightforward matrix computations, yet nonetheless corresponds to fully Bayesian training of the deep neural network. 
The form of the covariance function used is determined by the choice of GP ', 'paragraph_idx': 21, 'before_section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'context_before': ', ', 'modified_lines': 'where the block structure corresponds to the division between the training set and the test point. That is, KD,D is an n × n matrix whose (i, j)th element is K(xi, xj) with xi, xj ∈ D, while e.g. the ith element of Kx∗,D is K(x∗, xi), xi ∈ D. As is standard in GPs, the integral in Equation 7 can be done exactly, resulting in z∗|D, x∗ ∼ N (¯µ, ¯K) with ¯µ = Kx∗,D(KD,D + σ2 (cid:15) ¯K = Kx∗,x∗ − Kx∗,D(KD,D + σ2 (cid:15) In)−1K T x∗,D ', 'original_lines': 'where the block structure corresponds to division between the training set and test point. That is, KD,D is an n × n matrix whose (i, j)-th element is K(xi, xj) with xi, xj ∈ D, while e.g. the i-th element of Kx∗,D is K(x∗, xi), xi ∈ D. We further assume an additive Gaussian noise model with (cid:15) . The integral in Equation 7 can be done exactly, resulting in z∗|D, x∗ ∼ N (¯µ, ¯K) with variance σ2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 21}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': None, 'context_after': '5 ', 'paragraph_idx': 26, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'We plan to release an open source implementation of the algorithm after paper de-anonymization. ', 'modified_lines': '2For numerical reasons, in practice an independent 1D lookup table is built for the case that cj = 1. ', 'original_lines': '2For numerical stability, in practice an independent 1D lookup table is built for the case that cj = 1. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 25}, {'section': '3.1 DESCRIPTION', 'after_section': '3.1 DESCRIPTION', 'context_after': 'Performance: We find that the NNGP often outperforms trained finite width networks, and that trained neural network performance becomes more similar to that of the NNGP with increasing width. See Table 1 and Figure 1. (a) Accuracy ', 'paragraph_idx': 27, 'before_section': None, 'context_before': '3.1 DESCRIPTION ', 'modified_lines': 'We compare NNGPs with SGD3 trained neural networks on the permutation invariant MNIST and CIFAR-10 datasets. The baseline neural network is a fully-connected network with identical width at each hidden layer. Training is on the mean squared error (MSE) loss, chosen so as to allow direct comparison to GP predictions. Formulating classification as regression often leads to good results (Rifkin & Klautau, 2004). Future work may involve evaluating the NNGP on a cross entropy loss using the approach in (Williams & Barber, 1998; Rasmussen & Williams, 2006). Training used the Adam optimizer (Kingma & Ba (2014)) with learning rate and initial weight/bias variances op- timized over validation error using the Vizier hyperparameter tuner (Golovin et al., 2017). Dropout was not used. In future work, it would be interesting to incorporate dropout into the NNGP covari- ance matrix using an approach like that in (Schoenholz et al., 2017). For the study, nonlinearities were chosen to be either rectified linear units (ReLU) or hyperbolic tangent (Tanh). Class labels were encoded as a one-hot, zero-mean, regression target (i.e., entries of -0.1 for the incorrect class and 0.9 for the correct class). We constructed the covariance kernel numerically for ReLU and Tanh nonlinearities following the method described in Section 2.5. 
For all the experiments we used pre-computed lookup tables F with ng = 501, nv = 501, nc = 500, and smax = 100. Uncertainty: One benefit in using a GP is that, due to its Bayesian nature, all predictions have uncertainty estimates (Equation (9)). For conventional neural networks, capturing the uncertainty in a model’s predictions is challenging (Gal, 2016). In the NNGP, every test point has an explicit estimate of prediction variance associated with it (Equation 9). In our experiments, we observe that the NNGP uncertainty estimate is highly correlated with prediction error (Figure 2). ', 'original_lines': 'We compare NNGPs with SGD trained neural networks on the permutation invariant MNIST and CIFAR-10 datasets. The baseline neural network is a fully-connected network of constant width. Training is on the mean squared error (MSE) loss, chosen so as to allow direct comparison to GP predictions. Formulating classification as regression often leads to good results (Rifkin & Klau- tau, 2004). Future work may involve evaluating the NNGP on a cross entropy loss using the ap- proach in (Williams & Barber, 1998; Rasmussen & Williams, 2006). Training used the ADAM optimizer (Kingma & Ba (2014)) with learning rate and initial weight/bias variances optimized over validation error using the Vizier hyperparameter tuner (Golovin et al., 2017). Dropout was not used. In future work, it would be interesting to incorporate dropout into the NNGP covariance matrix us- ing an approach like that in (Schoenholz et al., 2017). For the study, nonlinearities were chosen to be either rectified linear units (ReLU) or hyperbolic tangent (Tanh). Class labels were encoded as a one-hot, zero-mean, regression target (i.e., entries of -0.1 for the incorrect class and 0.9 for the correct class). We constructed the covariance kernel numerically for ReLU and Tanh nonlineari- ties following the method described in Section 2.5. For all the experiments we used pre-computed lookup tables F with ng = 501, nv = 501, nc = 500, and smax = 100. Uncertainty: One benefit in using the GP is due to its Bayesian nature, so that predictions have uncertainty estimates (Equation (9)). For conventional neural networks, capturing the uncertainty of the model is challenging (Gal, 2016). In our NNGP, we can assign uncertainty from the GP, which can be viewed as uncertainty estimates for deep neural networks. For our experiments we looked at uncertainty estimated on test points by the NNGP and observed that it is strongly correlated with prediction error (see Figure 2). ', 'after_paragraph_idx': 28, 'before_paragraph_idx': None}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': '6 ', 'paragraph_idx': 30, 'before_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_before': 'Several prior works (Poole et al. (2016); Schoenholz et al. (2017); Daniely et al. (2016); Duvenaud et al. (2014)) have noted the recurrence relations Equation (5) commonly approach a functionally uninteresting fixed point with depth l → ∞, in that K∞(x, x(cid:48)) becomes a constant or piecewise ', 'modified_lines': ' 3For all presented results, the variant of SGD used is Adam. Although not shown, we found vanilla SGD produced qualitatively similar results, with slightly higher MSE. ', 'original_lines': 'constant map. We now briefly relate our ability to train NNGPs with the convergence of K l(x, x(cid:48)) to the fixed-point kernel. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': 30}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': 'w-σ2 w-σ2 ', 'paragraph_idx': 34, 'before_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_before': 'Table 1: The NNGP often outperforms finite width networks. Test accuracy on MNIST and CIFAR- 10 datasets. The reported NNGP results correspond to the best performing depth, σ2 w, and σ2 b values ', 'modified_lines': 'on the validation set. The traditional NN results correspond to the best performing depth, width and optimization hyperparameters. Best models for a given training set size are specified by (depth-width-σ2 w-σ2 b) for NNs and (depth-σ2 w-σ2 b) for GPs. More results are in Appendix Table 2. ', 'original_lines': 'on the validation set. The traditional NN results correspond to the best performing depth, width and optimization hyperparameters. Best models for given training set size are specified by (depth-width-σ2 w-σ2 b) for NNs and (depth-σ2 w-σ2 b) for GPs. More results are in Appendix Table 2. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 34}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': 'We will be particularly interested in contextualizing our results in relation to Poole et al. (2016); Schoenholz et al. (2017) which analyzed the fixed points and the approach to them in detail for bounded nonlinearities. To briefly recapitulate: there are regions of hyperparameter space (called ', 'paragraph_idx': 34, 'before_section': None, 'context_before': '0.5034 0.5558 ', 'modified_lines': 'constant map. We now briefly relate our ability to train NNGPs with the convergence of K l(x, x′) to the fixed-point kernel. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_after': 'x = x′ but q∗c∗ for x ≠ x′. Here c∗ < 1 is the fixed point correlation. In each of these regimes, there is also a finite depth-scale ξ which describes the characteristic number of layers over which the covariance function decays exponentially towards its fixed point form. Exactly at the boundary between these two regimes is a line in (σ2 b )-space where the decay K l(x, x′) towards its fixed w, σ2 For ReLU networks a similar picture emerges, however there are some subtleties due to the un- b , K∞(x, x′) = q∗ for all x, x′ and every point becomes asymptotically correlated. Despite this, there are again two phases: a “bounded” phase in which q∗ is finite (and nonzero) and an unbounded phase in which q∗ is either ', 'paragraph_idx': 35, 'before_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_before': 'chaotic phase the weight variance σ2 w dominates and similar inputs become dissimilar with depth as they are randomly projected by the weight matrices. In this case, the covariance K l(x, x′) → q∗ for ', 'modified_lines': 'point is significantly slower and non-exponential. It was noted in Schoenholz et al. (2017) that this approach to the fixed-point covariance fundamentally bounded whether or not neural networks could successfully be trained. It was shown that initializing networks on this line allowed for significantly deeper neural networks to be trained. bounded nature of the nonlinearity. 
In this case for all σ2 ', 'original_lines': 'point is significantly slower. It was noted in Schoenholz et al. (2017) that this approach to the fixed-point covariance fundamentally bounded whether or not neural networks could successfully be trained. It was shown that initializing networks on this line allowed for significantly deeper neural networks to be trained. bounded nature of the nonlinearity. In this case for all σw and σ2 ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 35}, {'section': 'Abstract', 'after_section': None, 'context_after': 'b values. The right subfigure for each nonlinearity is the theoretical phase diagram from analysis of Schoenholz et al. (2017). We observe that the performance of the NNGP is best along the critical line (dotted ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(b) ReLU Figure 3: The best performing NNGP hyperparameters agree with those predicted by deep signal ', 'modified_lines': 'propagation. Test set accuracy heatmaps for NNGPs evaluated for a grid of σ2 ', 'original_lines': 'propagation. Test-set accuracy heatmaps for NNGPs evaluated for a grid of σ2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 CONCLUSION AND FUTURE DIRECTIONS', 'after_section': '4 CONCLUSION AND FUTURE DIRECTIONS', 'context_after': 'We suggest a few additional interesting directions to pursue, among those already mentioned. In Additionally, the NNGP provides explicit estimates of uncertainty. This may be useful in predicting model failure in critical applications of deep learning, or for active learning tasks where it can be ', 'paragraph_idx': 41, 'before_section': '4 CONCLUSION AND FUTURE DIRECTIONS', 'context_before': 'uncertainty estimates from deep neural networks without stochastic gradient-based training. The performance is competitive with the best neural networks trained on the same regression task under similar hyperparameter settings. While we were able to run experiments for somewhat large datasets ', 'modified_lines': '(sizes of 50k), we intend to look into scalability for larger learning tasks, possibly harnessing recent progress in scalable GPs (Quiñonero-Candela & Rasmussen (2005); Hensman et al. (2013)). our experiments, we observed the performance of the optimized neural network appears to approach that of the GP computation with increasing width. Whether gradient-based stochastic optimization implements an approximate Bayesian computation is an interesting question for further investigation. Recent work (Mandt et al. (2017)) has suggested that SGD can be made to approximately sample from a Bayesian posterior. Further study is needed to determine if SGD does approximately implement Bayesian inference under the conditions typically employed in practice. ', 'original_lines': '(sizes of 50k), we intend to look into scalability for larger learning tasks, possibly combined with progresses in scalable GPs (Quiñonero-Candela & Rasmussen (2005); Hensman et al. (2013)). our experiments, we observed the performance of the optimized-trained neural network appears to approach that of GP computation with increasing width. Whether gradient-based stochastic optimization implements an approximate Bayesian computation is an interesting question for further investigation. Recent work (Mandt et al. 
(2017)) has suggested that SGD, under a set of restrictive assumptions, may be sampling from the posterior distribution of a probabilistic model. Further study is needed to determine if SGD does approximately implement Bayesian inference under the conditions typically employed in practice. ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 41}, {'section': '1.1 RELATED WORK', 'after_section': None, 'context_after': '√ ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'In the main text, we noted that the recurrence relation Equation 5 can be computed analytically for certain nonlinearities. In particular, this was computed in Cho & Saul (2009) for polynomial ', 'modified_lines': 'rectified nonlinearities. For ReLU, the result including the weight and bias variance is ', 'original_lines': 'rectified nonlinearities. For ReLU, the result is ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 DESCRIPTION', 'after_section': None, 'context_after': 'Performance: Performance of grid points of σ2 w-σ2 b for varying depth is shown in Figure 7. The phase changes as described in Section 3.2. as Figure 2. ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'Here we include more results from experiments described in Section 3. ', 'modified_lines': 'Uncertainty: Relationship between the target MSE and the GP’s uncertainty estimate for smaller training set size is shown in Figure 6. best performing NNGP’s hyperparameters are distributed near the critical line (Figure 8) where the Figure 6: The prediction uncertainty for smaller number of training points. The details are the same ', 'original_lines': 'Uncertainty: Dependence on target MSE to GP’s uncertainty estimate for smaller training set size is shown in Figure 6. best performing NNGP’s hyperparamters are distributed near the critical line (Figure 8) where the Figure 6: This prediction uncertainty for smaller number of training points. The details are the same ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
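The GP posterior reconstructed in the records above reduces to a pair of matrix identities, ¯µ = Kx∗,D(KD,D + σ2 ε In)−1t and ¯K = Kx∗,x∗ − Kx∗,D(KD,D + σ2 ε In)−1K T x∗,D. A minimal NumPy sketch of that computation follows; it is not the authors' released implementation, and the function name and the Cholesky-based solve are our own choices.

```python
import numpy as np

def gp_posterior(k_dd, k_xd, k_xx, t, noise_var):
    """Exact GP regression posterior, following Equations (8)-(9) above.

    k_dd:  (n, n) train-train kernel matrix K_{D,D}.
    k_xd:  (m, n) test-train kernel matrix K_{x*,D}.
    k_xx:  (m, m) test-test kernel matrix K_{x*,x*}.
    t:     (n,) or (n, k) regression targets.
    noise_var: observation noise variance sigma_eps^2.
    """
    n = k_dd.shape[0]
    # Factor (K_{D,D} + sigma_eps^2 I_n) once; reuse it for mean and covariance.
    chol = np.linalg.cholesky(k_dd + noise_var * np.eye(n))
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, t))
    mean = k_xd @ alpha                 # posterior mean, \bar{mu}
    v = np.linalg.solve(chol, k_xd.T)
    cov = k_xx - v.T @ v                # posterior covariance, \bar{K}
    return mean, cov
```

The single O(n³) Cholesky factorization dominates the cost, consistent with the remark in these records that building the kernel matrix K L is typically faster than solving the linear system in Equations (8)-(9).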
|
2017-12-20 03:06:25
|
ICLR.cc/2018/Conference
|
B1KhmUPzf
|
H1uawLp7G
|
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1.1 RELATED WORK ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'covariance matrices over the training and test sets and straightforward linear algebra computations. In light of the resurgence in popularity of neural networks, it is timely to revisit this line of work. ', 'modified_lines': 'We delineate the correspondence between deep and wide neural networks and GPs and utilize it for Bayesian training of neural networks on regression tasks. ', 'original_lines': 'We delineate the correspondence between deep neural networks and GPs and utilize it for Bayesian training of neural networks on regression tasks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 5}, {'section': '1.1 RELATED WORK', 'after_section': None, 'context_after': '1 Under review as a conference paper at ICLR 2018 Gaussian nonlinearities and noted the use of the GP prior for exact Bayesian inference in regression. Duvenaud et al. (2014) discusses several routes to building deep GPs and observes the degenerate form of kernels that are composed infinitely many times – a point we will return to Section 3.2 – Related work has also appeared outside of the GP context but in compositional kernel constructions. cludes the Sign and ReLU nonlinearities, and can be used in GPs; our manner of composing kernels matches theirs, though the context is different. Daniely et al. (2016) extends the construction of compositional kernels to neural networks whose underlying directed acyclic graph (which they term Drawing inspiration from the multi-layer nature of deep neural networks, there is a line of work considering various approaches to stacking GPs, such as deep GPs (Lawrence & Moore (2007); Damianou & Lawrence (2013); Hensman & Lawrence (2014); Duvenaud et al. (2014); Bui et al. 1.2 SUMMARY OF CONTRIBUTIONS We begin by specifying the form of a GP which corresponds to a deep, infinitely wide neural network – hereafter referred to as the Neural Network GP (NNGP) – in terms of a recursive, deterministic In this work, as a first proof of concept of our NNGP construction, we focus on exact Bayesian inference for regression tasks, treating classification as regression on class labels. While less prin- cipled, least-squares classification performs well (Rifkin et al., 2003) and allows us to compare exact inference via a GP to prediction by a trained neural network on well-studied tasks (MNIST and CIFAR-10 classification). Note that it is possible to extend GPs to softmax classification with cross entropy loss (Williams & Barber (1998); Rasmussen & Williams (2006)), which we aim to investigate in future work. ', 'paragraph_idx': 7, 'before_section': '1.1 RELATED WORK', 'context_before': '1Throughout this paper, we assume the conditions on the parameter distributions and nonlinearities are such that the Central Limit Theorem will hold; for instance, that the weight variance is scaled inversely proportional ', 'modified_lines': 'to the layer width. (1997) computes analytic GP kernels for single hidden-layer neural networks with error function or but they do not derive the form of GP kernels as we do. Hazan & Jaakkola (2015) also discusses constructing kernels equivalent to infinitely wide deep neural networks but their construction does not go beyond two hidden layers with nonlinearities. Cho & Saul (2009) derives compositional kernels for polynomial rectified nonlinearities, which in- a “computation skeleton”) is of general form. 
They also prove, utilizing the formalism of dual activations, that compositional kernels originating from fully-connected topologies with the same nonlinearity become degenerate when composed infinitely many times. In a different context than compositional kernels, Poole et al. (2016); Schoenholz et al. (2017) study the same underlying re- currence relation for the specific case of fully-connected networks and bounded nonlinearities. They distinguish regions in hyperparameter space with different fixed points and convergence behavior in the recurrence relations. The focus in these works was to better understand the expressivity and trainability of deep networks. (2016)), which can give rise to a richer class of probabilistic models beyond GPs. This contrasts with our work, where we study GPs which are in direct correspondence with deep, infinitely wide neural networks. Krauth et al. (2016) has recently explored the performance of GP models with deep kernels given in Cho & Saul (2009), implemented with scalable approximations. However, they do not discuss the equivalence between deep neural networks and GPs with compositional kernels, which constitutes a conceptual contribution of our work. Furthermore, we note that the GP kernels in our work are more general than the compositional kernel construction outlined in Cho & Saul (2009) in two respects: (i) we are not limited to rectified polynomials but can deal with general nonlinearities, and (ii) we consider two additional hyperparameters in the kernels, which would correspond to the weight and bias parameter variances in a neural network. Finally, Gal & Ghahramani (2016) connects dropout in deep neural networks with approximate Bayesian inference in deep GPs. Another series of recent works (Wilson et al. (2016b;a); Al-Shedivat et al. (2017)), termed deep kernel learning, utilize GPs with base kernels which take in features produced by a deep multilayer neural network; the model is trained end-to-end. Our work differs from these in that we do not learn properties of the kernel – in particular, the many free parameters of a neural network which provides an embedding – but rather use fixed basis functions. Furthermore, our GP corresponds to a multilayer neural network with all layers infinitely wide. We do perform a grid search over a few kernel hyperparameters, which could be learned by maximizing the marginal likelihood of the GP. computation of the kernel function. The prescription is valid for generic pointwise nonlinearities in fully-connected feedforward networks. We develop a computationally efficient method (Section 2.5) to compute the covariance function corresponding to deep neural networks with fixed hyperparam- eters. 2 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'to the network width. (1997) computed analytic GP kernels for single hidden-layer neural networks with error function or but they do not derive the form of GP kernels as we do. The kernels discussed in Hazan & Jaakkola (2015) rely on auxiliary GPs. Cho & Saul (2009) derive compositional kernels for polynomial rectified nonlinearities, which in- a “computational skeleton”) is general. They also prove, utilizing the formalism of dual activations, that compositional kernels originating from fully-connected topologies with the same nonlinearity become degenerate when composed infinitely many times. In a different context than compositional kernels, Poole et al. (2016); Schoenholz et al. 
(2017) study the same underlying recurrence rela- tion for the specific case of fully-connected networks and bounded nonlinearities. They distinguish regions in hyperparameter space with different fixed points and convergence behavior in the recur- rence relations. The focus in these works was to better understand the expressivity and trainability of deep networks. (2016)), which can give rise to a richer class of probabilistic models beyond GPs. This contrasts with our work, where we study GPs which are in direct correspondence with deep, infinitely wide neural networks. Gal & Ghahramani (2016) connects dropout in deep neural networks with approximate Bayesian inference in deep GPs. computation of the kernel function. The prescription is valid for generic pointwise nonlinearities. We develop a computationally efficient method (Section 2.5) to compute the covariance function corresponding to deep neural networks with fixed hyperparameters. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 7}, {'section': '1.2 SUMMARY OF CONTRIBUTIONS', 'after_section': None, 'context_after': '2 DEEP, INFINITELY WIDE NEURAL NETWORKS ARE DRAWN FROM GPS ', 'paragraph_idx': 13, 'before_section': '1.2 SUMMARY OF CONTRIBUTIONS', 'context_before': 'competitive against that of NNs trained with gradient-based techniques, and the best NNGP setting, chosen across hyperparameters, often surpasses that of conventional training (Section 3, Table 1). We further observe that, with increasing network width, the performance of neural networks with ', 'modified_lines': 'gradient-based training approaches that of the NNGP computation, and that the GP uncertainty is strongly correlated with prediction error. Furthermore, the performance of the NNGP depends on the structure of the kernel, which can be connected to recent work on signal propagation in networks with random parameters (Schoenholz et al., 2017). ', 'original_lines': 'gradient-based training approaches that of the NNGP computation. Furthermore, the performance of the NNGP depends on the structure of the kernel, which can be connected to recent work on signal propagation in networks with random parameters (Schoenholz et al., 2017). 2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 13}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'i and zl ', 'paragraph_idx': 4, 'before_section': None, 'context_before': '2.1 NOTATION ', 'modified_lines': 'Consider an L-hidden-layer fully-connected neural network with hidden layers of width Nl (for layer l) and pointwise nonlinearities φ. Let x ∈ Rdin denote the input to the network, and let zL ∈ Rdout denote its output. The ith component of the activations in the lth layer, post-nonlinearity and post- affine transformation, are denoted xl i respectively. We will refer to these as the post- and pre-activations. (We let x0 i ≡ xi for the input, dropping the Arabic numeral superscript, and instead use a Greek superscript xα to denote a particular input α). Weight and bias parameters for the lth layer have components W l i, which are independent and randomly drawn, and we take them all to have zero mean and variances σ2 b , respectively. GP(µ, K) denotes a Gaussian process with mean and covariance functions µ(·), K(·, ·), respectively. w/Nl and σ2 ', 'original_lines': 'Consider an L-hidden-layer fully-connected neural network with hidden layers of width N and pointwise nonlinearities φ. 
Let x ∈ Rdin denote the input to the network, and let zL ∈ Rdout denote its output. The ith component of the activations in the lth layer, post-nonlinearity and post-affine transformation, are denoted xl i respectively. We will refer to these as the post- and pre- activations. (We let x0 i ≡ xi for the input, dropping the Arabic numeral superscript, and instead use a Greek superscript xα to denote a particular input α). Weight and bias parameters for the lth layer have components W l i, which are independent and randomly drawn, and we take them all to have zero mean and variances σ2 b , respectively. GP(µ, K) denotes a Gaussian process with mean and covariance functions µ(·), K(·, ·), respectively. w/N and σ2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'after_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_after': 'i ∼ GP(0, K l). K l(x, x(cid:48)) ≡ E (cid:2)zl ', 'paragraph_idx': 19, 'before_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_before': 'i (xα=k)} will have joint multivariate Gaussian distribution and zl ', 'modified_lines': 'random terms so that, as Nl → ∞, any finite collection ', 'original_lines': 'random terms so that, as N → ∞, any finite collection ', 'after_paragraph_idx': 19, 'before_paragraph_idx': 19}, {'section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'after_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_after': 'to integrating against the joint distribution of only zl−1 (x(cid:48)). The latter is described by a zero mean, two-dimensional Gaussian whose covariance matrix has distinct entries K l−1(x, x(cid:48)), K l−1(x, x), and K l−1(x(cid:48), x(cid:48)). As such, these are the only three quantities that appear in the result. We introduce the shorthand K l(x, x(cid:48)) = σ2 b + σ2 w Fφ (cid:16) (cid:17) to emphasize the recursive relationship between K l and K l−1 via a deterministic function F whose form depends only on the nonlinearity φ. This gives an iterative series of computations which can For the base case K 0, suppose W 0 relating K 1 and K 0, where ', 'paragraph_idx': 20, 'before_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_before': '(4) By induction, the expectation in Equation (4) is over the GP governing zl−1 ', 'modified_lines': ', but this is equivalent (x) and zl−1 i i i K l−1(x, x(cid:48)), K l−1(x, x), K l−1(x(cid:48), x(cid:48)) (5) be performed to obtain K L for the GP describing the network’s final output. In Appendix C, we provide an alternative derivation of this GP correspondence, in terms of marginalization over the intermediate layers. ', 'original_lines': ' (x) and zl−1 , but this is equivalent i i i 3 Under review as a conference paper at ICLR 2018 K l−1(x, x(cid:48)), K l−1(x, x), K l−1(x(cid:48), x(cid:48)) (5) be performed to obtain K L for the GP describing the network’s final output. ', 'after_paragraph_idx': 20, 'before_paragraph_idx': 20}, {'section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'after_section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'context_after': 'where t = (t1, ..., tn)T are the targets on the training set, and P (t|z) corresponds to observation noise. 
We will assume a noise model consisting of a Gaussian with variance σ2 ', 'paragraph_idx': 21, 'before_section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'context_before': '(7) ', 'modified_lines': '2In fact, in the case when all hidden layer widths are finite and the limit is taken simultaneously, a concurrent ICLR 2018 submission (Anonymous, 2018) rigorously derives the convergence towards a GP, as well as its rate. 4 Under review as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': 22, 'before_paragraph_idx': 21}, {'section': 'Abstract', 'after_section': None, 'context_after': '(9) where In is the n × n identity. The predicted distribution for z∗|D, x∗ is hence determined from straightforward matrix computations, yet nonetheless corresponds to fully Bayesian training of the deep neural network. The form of the covariance function used is determined by the choice of GP ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'In)−1t ', 'modified_lines': '', 'original_lines': 'In)−1K T x∗,D (8) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': 'elements. Note that we are using fixed, rather than adaptive, sampling grids to allow oper- ations to be parallelized and reused across datapoints and layers. ', 'paragraph_idx': 25, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': '1. Generate: pre-activations u = [−umax, · · · , umax] consisting of ng elements linearly spaced between −umax and umax; variances s = [0, · · · , smax] with nv linearly spaced elements, where smax < u2 ', 'modified_lines': 'max; and correlations c = (−1, · · · , 1) with nc linearly spaced ', 'original_lines': 'max; and correlations c = [−1, · · · , 1] with nc linearly spaced ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': '(cid:80) ', 'paragraph_idx': 25, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'involves numerically approximating a Gaussian integral, in terms of the marginal variances s and covariances c. We guarantee that the marginal variance is identical for each datapoint, by preprocessing all datapoints to have identical norm at the input layer, so the number of ', 'modified_lines': 'entries in the lookup table need only be nvnc. These entries are computed as3: ', 'original_lines': 'entries in the lookup table need only be nvnc. These entries are computed as2: ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': '3. For every pair of datapoints x and x(cid:48) in layer l, compute K l (x, x(cid:48)) using Equation (5). (cid:18) ', 'paragraph_idx': 25, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': '(10) ', 'modified_lines': '3For numerical reasons, in practice an independent 1D lookup table is built for the case that cj = 1. 
', 'original_lines': '', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': None, 'context_after': '3 EXPERIMENTAL RESULTS ', 'paragraph_idx': 29, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'erated tensor operations, and computation of KL is typically faster than solving the system of linear equations in Equation (8)-(9). ', 'modified_lines': 'Finally, note that the full computational pipeline is deterministic and differentiable. The shape and properties of a deep network kernel are purely determined by hyperparameters of the deep neural network. Since GPs give exact marginal likelihood estimates, this kernel construction may allow principled hyperparameter selection, or nonlinearity design, e.g. by gradient ascent on the log likelihood w.r.t. the hyperparameters. Although this is not the focus of current work, we hope to return to this topic in follow-up work. We compare NNGPs with SGD4 trained neural networks on the permutation invariant MNIST and ', 'original_lines': 'Finally, note that the full computational pipeline is deterministic and differentiable. The shape and properties of a deep network kernel are purely determined by hyperparameters of the deep neural network. Since GPs give exact likelihood estimates, this kernel construction may allow principled hyperparameter selection, or nonlinearity design, e.g. by gradient ascent on the log likelihood w.r.t. the hyperparameters. Although this is not the focus of current work, we hope to return to this topic in follow-up work. 2For numerical reasons, in practice an independent 1D lookup table is built for the case that cj = 1. We compare NNGPs with SGD3 trained neural networks on the permutation invariant MNIST and ', 'after_paragraph_idx': 28, 'before_paragraph_idx': 26}, {'section': '3.1 DESCRIPTION', 'after_section': '3.1 DESCRIPTION', 'context_after': 'Performance: We find that the NNGP often outperforms trained finite width networks, and that trained neural network performance becomes more similar to that of the NNGP with increasing ', 'paragraph_idx': 29, 'before_section': '3.1 DESCRIPTION', 'context_before': 'ance matrix using an approach like that in (Schoenholz et al., 2017). For the study, nonlinearities were chosen to be either rectified linear units (ReLU) or hyperbolic tangent (Tanh). Class labels were encoded as a one-hot, zero-mean, regression target (i.e., entries of -0.1 for the incorrect class ', 'modified_lines': 'and 0.9 for the correct class). We constructed the covariance kernel numerically for ReLU and Tanh nonlinearities following the method described in Section 2.5. ', 'original_lines': 'and 0.9 for the correct class). We constructed the covariance kernel numerically for ReLU and Tanh nonlinearities following the method described in Section 2.5. For all the experiments we used pre-computed lookup tables F with ng = 501, nv = 501, nc = 500, and smax = 100. 
', 'after_paragraph_idx': 30, 'before_paragraph_idx': 29}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION Several prior works (Poole et al. (2016); Schoenholz et al. (2017); Daniely et al. (2016); Duvenaud et al. (2014)) have noted the recurrence relations Equation (5) commonly approach a functionally uninteresting fixed point with depth l → ∞, in that K∞(x, x(cid:48)) becomes a constant or piecewise produced qualitatively similar results, with slightly higher MSE. ', 'paragraph_idx': 32, 'before_section': '3.1 DESCRIPTION', 'context_before': 'estimate of prediction variance associated with it (Equation 9). In our experiments, we observe that the NNGP uncertainty estimate is highly correlated with prediction error (Figure 2). ', 'modified_lines': '4For all presented results, the variant of SGD used is Adam. Although not shown, we found vanilla SGD ', 'original_lines': '(a) Accuracy (b) Mean squared error Figure 1: The NNGP often outperforms finite width networks, and neural network performance more closely resembles NNGP performance with increasing width. Accuracy and mean squared error on MNIST and CIFAR-10 dataset are shown for the best performing NNGP and best performing SGD trained neural networks for given width. 3For all presented results, the variant of SGD used is Adam. Although not shown, we found vanilla SGD ', 'after_paragraph_idx': None, 'before_paragraph_idx': 31}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': 'Figure 2: The Bayesian nature of NNGP allows it to assign a prediction uncertainty to each test point. This prediction uncertainty is highly correlated with the empirical error on test points. The ', 'paragraph_idx': 32, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': ' (a) Accuracy (b) Mean squared error Figure 1: The NNGP often outperforms finite width networks, and neural network performance more closely resembles NNGP performance with increasing width. Test accuracy and mean squared error on MNIST and CIFAR-10 dataset are shown for the best performing NNGP and best perform- ing SGD trained neural networks for given width. ‘NN-best’ denotes the best performing (on the validation set) neural network across all width and trials. Often this is the neural network with the largest width. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': 'w = 2.0, and σ2 Table 1: The NNGP often outperforms finite width networks. Test accuracy on MNIST and CIFAR- 10 datasets. The reported NNGP results correspond to the best performing depth, σ2 ', 'paragraph_idx': 35, 'before_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_before': 'comparison of mean squared error, each plotted point is an average over 100 test points, binned by predicted MSE. The hyperparameters for the NNGP are depth= 3, σ2 b = 0.2. See ', 'modified_lines': 'Appendix Figure 7 for dependence on training set size. constant map. We now briefly relate our ability to train NNGPs with the convergence of K l(x, x(cid:48)) to the fixed-point kernel. We will be particularly interested in contextualizing our results in relation to Poole et al. (2016); Schoenholz et al. (2017) which analyzed the fixed points and the approach to them in detail for bounded nonlinearities. 
To briefly recapitulate: there are regions of hyperparameter space (called phases) where K∞(x, x′) changes only quantitatively with σ2 w and σ2 b . However, there are low dimensional boundaries that separate different phases and between them the nature of K∞(x, x′) changes qualitatively. For the Tanh nonlinearity, there are two distinct phases respectively called the “ordered” phase and the “chaotic” phase that can be understood as a competition between the weights and the biases of the network. A diagram showing these phases and the boundary between them is shown in Figure 3a. In the ordered phase, the features obtained by propagating an input through each layer of the recursion become similar for dissimilar inputs. Fundamentally, this occurs because the different inputs share common bias vectors and so all inputs end up just approaching the random bias. In this [figure residue: Figure 1 panels (accuracy and MSE vs. training dataset size, MNIST and CIFAR, Tanh and ReLU, NNGP vs. NNs of width 5-5000) and Figure 2 panels (predicted output variance vs. MSE, MNIST-50k and CIFAR-45k)] ', 'original_lines': 'Appendix Figure 6 for dependence on training set size. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 35}, {'section': 'Abstract', 'after_section': None, 'context_after': 'case the covariance K l(x, x′) → q∗ for every pair of inputs x, x′, where q∗ is a constant that depends only on σ2 w and σ2 b . All inputs have unit correlation asymptotically with depth. By contrast in the chaotic phase the weight variance σ2 w dominates and similar inputs become dissimilar with depth as they are randomly projected by the weight matrices. In this case, the covariance K l(x, x′) → q∗ for x = x′ but q∗c∗ for x ≠ x′. Here c∗ < 1 is the fixed point correlation. In each of these regimes, there is also a finite depth-scale ξ which describes the characteristic number of layers over which the covariance function decays exponentially towards its fixed point form. Exactly at the boundary ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.5034 0.5558 ', 'modified_lines': '', 'original_lines': 'constant map. We now briefly relate our ability to train NNGPs with the convergence of K l(x, x′) to the fixed-point kernel. We will be particularly interested in contextualizing our results in relation to Poole et al. (2016); Schoenholz et al. (2017) which analyzed the fixed points and the approach to them in detail for bounded nonlinearities. To briefly recapitulate: there are regions of hyperparameter space (called phases) where K∞(x, x′) changes only quantitatively with σ2 w and σ2 b . 
However, there are low dimensional boundaries that separate different phases and between them the nature of K∞(x, x′) changes qualitatively. For the Tanh nonlinearity, there are two distinct phases respectively called the “ordered” phase and the “chaotic” phase that can be understood as a competition between the weights and the biases of the network. A diagram showing these phases and the boundary between them is shown in Figure 3a. In the ordered phase, the features obtained by propagating an input through each layer of the recursion become similar for dissimilar inputs. Fundamentally, this occurs because the different inputs share common bias vectors and so all inputs end up just approaching the random bias. In this ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
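The records above describe the kernel recursion K l(x, x′) = σ2 b + σ2 w Fφ(K l−1(x, x′), K l−1(x, x), K l−1(x′, x′)) and note that for ReLU the Gaussian integral Fφ has the closed arc-cosine form of Cho & Saul (2009), while general nonlinearities are handled via lookup tables. A minimal sketch of the analytic ReLU special case follows; the function name is ours, and the base case K 0 = σ2 b + σ2 w x·x′/d_in is our assumption about the input-layer convention.

```python
import numpy as np

def nngp_relu_kernel(x1, x2, depth, sigma_w2=2.0, sigma_b2=0.2):
    """Compose the ReLU NNGP kernel over `depth` hidden layers.

    Returns the (n1, n2) matrix K^depth(x, x') for rows of x1 and x2.
    """
    d_in = x1.shape[1]
    # Base case K^0: covariance of the affine input transform.
    k12 = sigma_b2 + sigma_w2 * (x1 @ x2.T) / d_in
    k11 = sigma_b2 + sigma_w2 * np.sum(x1 * x1, axis=1) / d_in
    k22 = sigma_b2 + sigma_w2 * np.sum(x2 * x2, axis=1) / d_in
    for _ in range(depth):
        # Arc-cosine formula for E[relu(u) relu(v)] (Cho & Saul, 2009).
        norms = np.sqrt(np.outer(k11, k22))
        theta = np.arccos(np.clip(k12 / norms, -1.0, 1.0))
        k12 = sigma_b2 + (sigma_w2 / (2.0 * np.pi)) * norms * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        # Diagonal recursion: theta = 0 gives F_relu(K, K, K) = K / 2.
        k11 = sigma_b2 + 0.5 * sigma_w2 * k11
        k22 = sigma_b2 + 0.5 * sigma_w2 * k22
    return k12
```

Feeding the resulting train-train, test-train, and test-test matrices into a posterior routine like the `gp_posterior` sketch earlier reproduces the exact Bayesian regression pipeline these records describe, without any lookup-table interpolation.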
|
2018-01-05 19:53:35
|
ICLR.cc/2018/Conference
|
H1uawLp7G
|
rJXAbfZAW
|
[]
|
2018-01-25 15:40:24
|
ICLR.cc/2018/Conference
|
rJXAbfZAW
|
HkK7mY2vf
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'Consider a deep fully-connected neural network with i.i.d. random parameters. Each scalar output This correspondence implies that if we choose the hypothesis space to be the class of infinitely wide neural networks, an i.i.d. prior over weights and biases can be replaced with a corresponding GP prior over functions. As noted by (Williams, 1997), this substitution enables exact Bayesian inference for regression using neural networks. The computation requires building the necessary covariance matrices over the training and test sets and straightforward linear algebra computations. ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'Deep neural networks have emerged in recent years as flexible parametric models which can fit complex patterns in data. As a contrasting approach, Gaussian processes have long served as a ', 'modified_lines': 'traditional nonparametric tool for modeling. An equivalence between these two approaches was derived in Neal (1994a), for the case of one layer networks in the limit of infinite width. Neal (1994a) further suggested that a similar correspondence might hold for deeper networks. of the network, an affine transformation of the final hidden layer, will be a sum of i.i.d. terms. As we will discuss in detail below, in the limit of infinite width the Central Limit Theorem1 implies that the function computed by the neural network (NN) is a function drawn from a Gaussian process (GP). In the case of single hidden-layer networks, the form of the kernel of this GP is well known (Neal (1994a); Williams (1997)). ∗Both authors contributed equally to this work. †Work done as a member of the Google Brain Residency program (g.co/brainresidency). 1Throughout this paper, we assume the conditions on the parameter distributions and nonlinearities are such that the Central Limit Theorem will hold; for instance, that the weight variance is scaled inversely proportional to the layer width. 1 Published as a conference paper at ICLR 2018 ', 'original_lines': 'traditional nonparametric tool for modeling. In fact, a correspondence due to Neal (1994a) equates these two models in the limit of infinite width. of the network, an affine transformation of the final hidden layer, will be a sum of i.i.d. terms. In the limit of infinite width, the Central Limit Theorem1 implies that the function computed by the neural network (NN) is a function drawn from a Gaussian process (GP). In the case of single hidden-layer networks, the form of the kernel of this GP is well known (Neal (1994a); Williams (1997)). ', 'after_paragraph_idx': 4, 'before_paragraph_idx': 3}, {'section': '1.1 RELATED WORK', 'after_section': '1.1 RELATED WORK', 'context_after': '(1997) computes analytic GP kernels for single hidden-layer neural networks with error function or Gaussian nonlinearities and noted the use of the GP prior for exact Bayesian inference in regression. Duvenaud et al. (2014) discusses several routes to building deep GPs and observes the degenerate form of kernels that are composed infinitely many times – a point we will return to Section 3.2 – but they do not derive the form of GP kernels as we do. Hazan & Jaakkola (2015) also discusses not go beyond two hidden layers with nonlinearities. Related work has also appeared outside of the GP context but in compositional kernel constructions. 
', 'paragraph_idx': 8, 'before_section': '1.1 RELATED WORK', 'context_before': 'Our work touches on aspects of GPs, Bayesian learning, and compositional kernels. The corre- spondence between infinite neural networks and GPs was first noted by Neal (1994a;b). Williams ', 'modified_lines': 'constructing kernels equivalent to infinitely wide deep neural networks, but their construction does ', 'original_lines': ' 1Throughout this paper, we assume the conditions on the parameter distributions and nonlinearities are such that the Central Limit Theorem will hold; for instance, that the weight variance is scaled inversely proportional to the layer width. 1 Under review as a conference paper at ICLR 2018 constructing kernels equivalent to infinitely wide deep neural networks but their construction does ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 8}, {'section': '1.1 RELATED WORK', 'after_section': '1.1 RELATED WORK', 'context_after': 'neural networks. Krauth et al. (2016) has recently explored the performance of GP models with deep kernels given in Cho & Saul (2009), implemented with scalable approximations. However, they do not discuss the equivalence between deep neural networks and GPs with compositional ', 'paragraph_idx': 10, 'before_section': '1.1 RELATED WORK', 'context_before': 'considering various approaches to stacking GPs, such as deep GPs (Lawrence & Moore (2007); Damianou & Lawrence (2013); Hensman & Lawrence (2014); Duvenaud et al. (2014); Bui et al. (2016)), which can give rise to a richer class of probabilistic models beyond GPs. This contrasts ', 'modified_lines': 'with our work, where we study GPs that are in direct correspondence with deep, infinitely wide ', 'original_lines': 'with our work, where we study GPs which are in direct correspondence with deep, infinitely wide ', 'after_paragraph_idx': 10, 'before_paragraph_idx': 10}, {'section': 'Abstract', 'after_section': None, 'context_after': 'cross entropy loss (Williams & Barber (1998); Rasmussen & Williams (2006)), which we aim to investigate in future work. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'cipled, least-squares classification performs well (Rifkin et al., 2003) and allows us to compare exact inference via a GP to prediction by a trained neural network on well-studied tasks (MNIST and CIFAR-10 classification). Note that it is possible to extend GPs to softmax classification with ', 'modified_lines': '', 'original_lines': ' 2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 REVIEW OF GAUSSIAN PROCESSES AND SINGLE-LAYER NEURAL NETWORKS', 'after_section': '2.2 REVIEW OF GAUSSIAN PROCESSES AND SINGLE-LAYER NEURAL NETWORKS', 'context_after': 'where we have emphasized the dependence on input x. Because the weight and bias parameters are taken to be i.i.d., the post-activations x1 i (x) is a sum of i.i.d terms, it follows from the Central Limit Theorem that in the limit of infinite width N1 → ∞, z1 i (x) will be Gaussian distributed. Likewise, from the multidimensional Central Limit Theorem, any finite collection of {z1 ', 'paragraph_idx': 18, 'before_section': '2.2 REVIEW OF GAUSSIAN PROCESSES AND SINGLE-LAYER NEURAL NETWORKS', 'context_before': '(1) ', 'modified_lines': ' j(cid:48) are independent for j (cid:54)= j(cid:48). 
Moreover, since z1 j , x1 ', 'original_lines': 'j , x1 ', 'after_paragraph_idx': 18, 'before_paragraph_idx': 18}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where t = (t1, ..., tn)T are the targets on the training set, and P (t|z) corresponds to observation noise. We will assume a noise model consisting of a Gaussian with variance σ2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(7) ', 'modified_lines': '', 'original_lines': '2In fact, in the case when all hidden layer widths are finite and the limit is taken simultaneously, a concurrent ICLR 2018 submission (Anonymous, 2018) rigorously derives the convergence towards a GP, as well as its rate. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': None, 'context_after': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL Given an L-layer deep neural network with fixed hyperparameters, constructing the covariance matrix KL for the equivalent GP involves computing the Gaussian integral in Equation (4) for all pairs of training-training and training-test points, recursively for all layers. For some nonlinearities, such as ReLU, this integration can be done analytically. However, to compute the kernel corresponding agreement between the kernel function computed numerically (as described below) and analytically, for the ReLU nonlinearity. It also illustrates the angular dependence of the kernel and its evolution with increasing depth. ', 'paragraph_idx': 26, 'before_section': '2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS', 'context_before': 'bias variances. We henceforth resume placing a superscript L as in KL to emphasize the choice of depth for the compositional kernel. ', 'modified_lines': 'to arbitrary nonlinearities, the integral must be performed numerically. Figure 6 illustrates close ', 'original_lines': 'x∗,D In)−1K T (8) to arbitrary nonlinearities, the integral must be performed numerically. Figure 5 illustrates close ', 'after_paragraph_idx': None, 'before_paragraph_idx': 25}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': 'to O (n2 correlation grid, as described below. In order to achieve this, we break the process into several steps: train + ntrainntest)), where n2 ', 'paragraph_idx': 26, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'g is the sampling density for the pair of Gaussian random variables in the 2D integral and ntrain, ntest are the training and test set sizes, respectively. However, by careful pipelining, and by preprocessing all inputs to have identical norm, we can improve this cost ', 'modified_lines': 'train + ntrainntest)), where nv and nc are sampling densities for a variance and ', 'original_lines': 'train + ntrainntest)), where nv and nc are sampling densities for a variance and ', 'after_paragraph_idx': 26, 'before_paragraph_idx': 26}, {'section': 'Abstract', 'after_section': None, 'context_after': '3. For every pair of datapoints x and x′ in layer l, compute K l (x, x′) using Equation (5). ( ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '. 
(10) ', 'modified_lines': '', 'original_lines': ' 3For numerical reasons, in practice an independent 1D lookup table is built for the case that cj = 1. 5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': None, 'context_after': '3 EXPERIMENTAL RESULTS ', 'paragraph_idx': 29, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'likelihood w.r.t. the hyperparameters. Although this is not the focus of current work, we hope to return to this topic in follow-up work. ', 'modified_lines': 'We have released an open source implementation of the algorithm.3 ', 'original_lines': 'We plan to release an open source implementation of the algorithm after paper de-anonymization. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 28}, {'section': '3.1 DESCRIPTION', 'after_section': None, 'context_after': 'Uncertainty: One benefit in using a GP is that, due to its Bayesian nature, all predictions have uncertainty estimates (Equation (9)). For conventional neural networks, capturing the uncertainty in a model’s predictions is challenging (Gal, 2016). In the NNGP, every test point has an explicit estimate of prediction variance associated with it (Equation 9). In our experiments, we observe that 3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION Several prior works (Poole et al. (2016); Schoenholz et al. (2017); Daniely et al. (2016); Duvenaud et al. (2014)) have noted the recurrence relations Equation (5) commonly approach a functionally uninteresting fixed point with depth l → ∞, in that K∞(x, x(cid:48)) becomes a constant or piecewise point. This prediction uncertainty is highly correlated with the empirical error on test points. The x−axis shows the predicted MSE for test points, while the y−axis shows the realized MSE. To allow comparison of mean squared error, each plotted point is an average over 100 test points, binned by predicted MSE. The hyperparameters for the NNGP are depth= 3, σ2 b = 0.2. See w = 2.0, and σ2 phases) where K∞(x, x(cid:48)) changes only quantitatively with σ2 b . However, there are low dimensional boundaries that separate different phases and between them the nature of K∞(x, x(cid:48)) ', 'paragraph_idx': 31, 'before_section': '3.1 DESCRIPTION', 'context_before': 'and 0.9 for the correct class). We constructed the covariance kernel numerically for ReLU and Tanh nonlinearities following the method described in Section 2.5. ', 'modified_lines': '2For numerical reasons, in practice an independent 1D lookup table is built for the case that cj = 1. 3https://github.com/brain-research/nngp 4For all presented results, the variant of SGD used is Adam. Although not shown, we found vanilla SGD produced qualitatively similar results, with slightly higher MSE. 6 Published as a conference paper at ICLR 2018 Performance: We find that the NNGP often outperforms trained finite width networks. See Table 1 and Figure 1. (a) Accuracy (b) Mean squared error Figure 1: The NNGP often outperforms finite width networks, and neural network performance more closely resembles NNGP performance with increasing width. Test accuracy and mean squared error on MNIST and CIFAR-10 dataset are shown for the best performing NNGP and best perform- ing SGD trained neural networks for given width. ‘NN-best’ denotes the best performing (on the validation set) neural network across all widths and trials. 
Often this is the neural network with the largest width. Curiously, the performance of the best finite-width NNs, trained with a variant of SGD, approaches that of the NNGP with increasing layer width. We find this to be interesting from at least two, po- tentially related, standpoints. (1) NNs are commonly believed to be powerful because of their ability to do flexible representation learning, while our NNGP uses fixed basis functions; nonetheless, in our experiments we find no salient performance advantage to the former. (2) It hints at a possible relationship between SGD and Bayesian inference in certain regimes. There is recent work sug- gesting that SGD can implement approximate Bayesian inference (Mandt et al., 2017) under certain assumptions. The similarity of the performance of the widest NN in Figure 1 with the NNGP suggests that the limit of infinite network width, which is inherent to the GP, is far from being a disadvantage. Indeed, in practice it is found that the best generalizing NNs are in fact the widest. To support this, in Figure 2 we show results on the generalization gap from an experiment in which we train 180 fully-connected networks with five hidden layers on CIFAR-10 with a range of layer widths. For this experiment, we trained the networks using a standard cross entropy loss rather than MSE, leading to a slight difference in performance. the NNGP uncertainty estimate is highly correlated with prediction error (Figure 3). constant map. We now briefly relate our ability to train NNGPs with the convergence of K l(x, x(cid:48)) to the fixed-point kernel. We will be particularly interested in contextualizing our results in relation to Poole et al. (2016); Schoenholz et al. (2017) which analyzed the fixed points and the approach to them in detail for bounded nonlinearities. To briefly recapitulate: there are regions of hyperparameter space (called 7 102103104Training dataset size0.40.50.60.70.80.91.0AccuracyMNIST, Tanh102103104Training dataset size0.40.50.60.70.80.91.0MNIST, ReLU102103104Training dataset size0.20.30.40.50.6AccuracyCIFAR, Tanh102103104Training dataset size0.20.30.40.50.6CIFAR, ReLUNNGPNN-bestNN-w5NN-w10NN-w50NN-w100NN-w500NN-w1000NN-w5000102103104Training dataset size0.000.010.020.030.040.050.06MSEMNIST, Tanh102103104Training dataset size0.000.010.020.030.040.050.06MNIST, ReLU102103104Training dataset size0.0600.0650.0700.0750.0800.0850.090MSECIFAR, Tanh102103104Training dataset size0.0600.0650.0700.0750.0800.0850.090CIFAR, ReLUNNGPNN-bestNN-w5NN-w10NN-w50NN-w100NN-w500NN-w1000NN-w5000 Published as a conference paper at ICLR 2018 Figure 2: Generalization gap for five hidden layer fully-connected networks with variable widths, using ReLU and Tanh nonlinearities on CIFAR-10. Random optimization and initialization hy- perparameters were used and results were filtered for networks with 100% classification training accuracy, resulting in a total of 125 Tanh and 55 ReLU networks. The best generalizing networks are consistently the widest. Figure 3: The Bayesian nature of NNGP allows it to assign a prediction uncertainty to each test Appendix Figure 8 for dependence on training set size. ', 'original_lines': 'Performance: We find that the NNGP often outperforms trained finite width networks, and that trained neural network performance becomes more similar to that of the NNGP with increasing width. See Table 1 and Figure 1. the NNGP uncertainty estimate is highly correlated with prediction error (Figure 2). 
4For all presented results, the variant of SGD used is Adam. Although not shown, we found vanilla SGD produced qualitatively similar results, with slightly higher MSE. 6 Under review as a conference paper at ICLR 2018 (a) Accuracy (b) Mean squared error Figure 1: The NNGP often outperforms finite width networks, and neural network performance more closely resembles NNGP performance with increasing width. Test accuracy and mean squared error on MNIST and CIFAR-10 dataset are shown for the best performing NNGP and best perform- ing SGD trained neural networks for given width. ‘NN-best’ denotes the best performing (on the validation set) neural network across all width and trials. Often this is the neural network with the largest width. Figure 2: The Bayesian nature of NNGP allows it to assign a prediction uncertainty to each test Appendix Figure 7 for dependence on training set size. constant map. We now briefly relate our ability to train NNGPs with the convergence of K l(x, x(cid:48)) to the fixed-point kernel. We will be particularly interested in contextualizing our results in relation to Poole et al. (2016); Schoenholz et al. (2017) which analyzed the fixed points and the approach to them in detail for bounded nonlinearities. To briefly recapitulate: there are regions of hyperparameter space (called ', 'after_paragraph_idx': None, 'before_paragraph_idx': 30}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_after': 'the recursion become similar for dissimilar inputs. Fundamentally, this occurs because the different inputs share common bias vectors and so all inputs end up just approaching the random bias. In this Table 1: The NNGP often outperforms finite width networks. Test accuracy on MNIST and CIFAR- 10 datasets. The reported NNGP results correspond to the best performing depth, σ2 ', 'paragraph_idx': 41, 'before_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_before': 'For the Tanh nonlinearity, there are two distinct phases respectively called the “ordered” phase and the “chaotic” phase that can be understood as a competition between the weights and the biases of the network. A diagram showing these phases and the boundary between them is shown in Figure ', 'modified_lines': '4a. In the ordered phase, the features obtained by propagating an input through the each layer of case the covariance K l(x, x(cid:48)) → q∗ for every pair of inputs x, x(cid:48), where q∗ is a constant that depends only on σ2 b . All inputs have unit correlation asymptotically with depth. By contrast in the chaotic phase the weight variance σ2 w dominates and similar inputs become dissimilar with depth as they are randomly projected by the weight matrices. In this case, the covariance K l(x, x(cid:48)) → q∗ for x = x(cid:48) but q∗c∗ for x (cid:54)= x(cid:48). Here c∗ < 1 is the fixed point correlation. In each of these regimes, there is also a finite depth-scale ξ which describes the characteristic number of layers over which the covariance function decays exponentially towards its fixed point form. 
Exactly at the boundary w and σ2 [figure residue removed: page number plus axis ticks from the Figure 2 generalization-gap panels (width vs. gap, ReLU and Tanh) and the Figure 3 uncertainty scatter (MSE vs. output variance; MNIST-50k Tanh-corr:0.9330, ReLU-corr:0.9573; CIFAR-45k Tanh-corr:0.7428, ReLU-corr:0.8223)] Published as a conference paper at ICLR 2018 ', 'original_lines': '3a. In the ordered phase, the features obtained by propagating an input through the each layer of [figure residue removed: page number plus axis ticks and legend labels from the Figure 1 learning-curve panels (NNGP, NN-best, NN-w5 through NN-w5000) and the uncertainty scatter (MNIST-50k Tanh-corr:0.9330, ReLU-corr:0.9573; CIFAR-45k Tanh-corr:0.7428, ReLU-corr:0.8223)] Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 41, 'before_paragraph_idx': 41}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': 'between these two regimes is a line in (σ2 point is significantly slower and non-exponential. It was noted in Schoenholz et al. (2017) that this approach to the fixed-point covariance fundamentally bounded whether or not neural networks could successfully be trained. It was shown that initializing networks on this line allowed for significantly ', 'paragraph_idx': 42, 'before_section': None, 'context_before': '0.5034 0.5558 ', 'modified_lines': 'b )-space where the decay K l(x, x(cid:48)) towards its fixed ', 'original_lines': 'w and σ2 case the covariance K l(x, x(cid:48)) → q∗ for every pair of inputs x, x(cid:48), where q∗ is a constant that depends only on σ2 b . All inputs have unit correlation asymptotically with depth. By contrast in the chaotic phase the weight variance σ2 w dominates and similar inputs become dissimilar with depth as they are randomly projected by the weight matrices. In this case, the covariance K l(x, x(cid:48)) → q∗ for x = x(cid:48) but q∗c∗ for x (cid:54)= x(cid:48). Here c∗ < 1 is the fixed point correlation. In each of these regimes, there is also a finite depth-scale ξ which describes the characteristic number of layers over which the covariance function decays exponentially towards its fixed point form. Exactly at the boundary b )-space where the decay K l(x, x(cid:48)) towards its fixed ', 'after_paragraph_idx': 42, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'w and σ2 In a striking analogy with the trainability of neural networks, we observe that the performance of the Indeed, we see that as for hyperparameter settings that are far from criticality, the GP is unable to train and we encounter poor test set performance. By contrast, near criticality we observe that our models display high accuracy. 
Moreover, we find that the accuracy appears to drop more quickly ', 'paragraph_idx': 5, 'before_section': None, 'context_before': '“bounded” phase in which q∗ is finite (and nonzero) and an unbounded phase in which q∗ is either infinite or zero. As in the Tanh case there are depth scales that control the rate of convergence to these fixed points and therefore limit the maximum trainable depth. The phase diagram for the ReLU ', 'modified_lines': 'nonlinearity is also shown in Figure 4b. NNGP appears to closely track the structure from the phase diagram, clearly illustrated in Figure 4. ', 'original_lines': 'nonlinearity is also shown in Figure 3b. NNGP appears to closely track the structure from the phase diagram, clearly illustrated in Figure 3. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': 'Abstract', 'context_after': 'uncertainty estimates from deep neural networks without stochastic gradient-based training. The performance is competitive with the best neural networks (within specified class of fully-connected models) trained on the same regression task under similar hyperparameter settings. While we were ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'neural networks and Gaussian processes whose kernel function is constructed in a compositional, but fully deterministic and differentiable, manner. Use of a GP prior on functions enables exact Bayesian inference for regression from matrix computations, and hence we are able to obtain predictions and ', 'modified_lines': '', 'original_lines': ' 8 Under review as a conference paper at ICLR 2018 (a) Tanh (b) ReLU Figure 3: The best performing NNGP hyperparameters agree with those predicted by deep signal propagation. Test set accuracy heatmaps for NNGPs evaluated for a grid of σ2 b values. The right plot in each subfigure (a), (b) is a theoretical phase diagram for that nonlinearity following the methodology of Schoenholz et al. (2017). We observe that the performance of the NNGP is best along the critical line (dotted lines). Additional depths are shown in the Appendix Figure 8. w and σ2 ', 'after_paragraph_idx': 2, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Andrew G Wilson, Zhiting Hu, Ruslan R Salakhutdinov, and Eric P Xing. Stochastic variational deep kernel learning. In Advances in Neural Information Processing Systems, pp. 2586–2594, 2016a. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998. ', 'modified_lines': '', 'original_lines': '10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:16) D DETAILS OF THE EXPERIMENTS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(19) ', 'modified_lines': 'in the limit of infinite width, zL|x is described by a Gaussian process with kernel So, G ◦ (F ◦ G)L(cid:17) (cid:0)K 0(cid:1). ', 'original_lines': 'So, in the limit of infinite width, zL|x is a multivariate Normal with kernel G ◦ (F ◦ G)L(cid:17) (cid:0)K 0(cid:1). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1.2 SUMMARY OF CONTRIBUTIONS', 'after_section': None, 'context_after': 'Performance: Performance of grid points of σ2 phase changes as described in Section 3.2. 14 b values for varying depth. 
Rows correspond to Tanh and ReLU nonlinearities, and columns correspond to varying depth. ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'Here we include more results from experiments described in Section 3. Uncertainty: Relationship between the target MSE and the GP’s uncertainty estimate for smaller ', 'modified_lines': 'training set size is shown in Figure 8. w-σ2 b for varying depth is shown in Figure 9. The best performing NNGP’s hyperparameters are distributed near the critical line (Figure 10) where the Published as a conference paper at ICLR 2018 Figure 8: The prediction uncertainty for smaller number of training points. The details are the same as Figure 3. Figure 9: Test set accuracy heatmaps for NNGPs evaluated for a grid of σ2 ', 'original_lines': 'training set size is shown in Figure 7. w-σ2 b for varying depth is shown in Figure 8. The best performing NNGP’s hyperparameters are distributed near the critical line (Figure 9) where the Under review as a conference paper at ICLR 2018 Figure 7: The prediction uncertainty for smaller number of training points. The details are the same as Figure 2. Figure 8: Test set accuracy heatmaps for NNGPs evaluated for a grid of σ2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
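[Editor's aside: the revision record above repeatedly references the layer-wise kernel recursion K^l(x, x') = sigma_b^2 + sigma_w^2 * F_phi(K^{l-1}) and its analytic ReLU (arccosine) form. Below is a minimal NumPy sketch of that recursion as a reading aid; it is not the authors' released code, and the function name, argument names, and default hyperparameters are ours.]

```python
import numpy as np

def nngp_relu_kernel(X, Z, sigma_w2=1.6, sigma_b2=0.1, depth=3):
    """Sketch of the deep ReLU (arccosine) kernel recursion described in the
    quoted revision text; names and defaults are assumptions, not the paper's."""
    d = X.shape[1]
    K = sigma_b2 + sigma_w2 * (X @ Z.T) / d                 # K^0(x, x')
    Kxx = sigma_b2 + sigma_w2 * np.sum(X * X, axis=1) / d   # K^0(x, x)
    Kzz = sigma_b2 + sigma_w2 * np.sum(Z * Z, axis=1) / d
    for _ in range(depth):
        norm = np.sqrt(np.outer(Kxx, Kzz))
        theta = np.arccos(np.clip(K / norm, -1.0, 1.0))
        # closed form of E[relu(u) relu(v)] under the current 2x2 covariance
        K = sigma_b2 + sigma_w2 * norm * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2.0 * np.pi)
        Kxx = sigma_b2 + sigma_w2 * Kxx / 2.0               # diagonal: theta = 0
        Kzz = sigma_b2 + sigma_w2 * Kzz / 2.0
    return K
```

For nonlinearities without a closed form (e.g. Tanh), the record instead describes tabulating F_phi on a fixed grid of marginal variances and correlations via Gaussian quadrature and interpolating, which replaces the analytic line above.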
|
2018-02-22 17:51:29
|
ICLR.cc/2018/Conference
|
HkK7mY2vf
|
BJQffSAvz
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the ', 'modified_lines': 'corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as co- variance functions for GPs and allow fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide deep net- works and GPs. We further develop a computationally efficient pipeline to com- pute the covariance function for these GPs. We then use the resulting GPs to per- form Bayesian inference for wide deep neural networks on MNIST and CIFAR- 10. We observe that trained neural network accuracy approaches that of the corre- sponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test perfor- mance increases as finite-width trained networks are made wider and more similar to a GP, and thus that GP predictions typically outperform those of finite-width networks. Finally we connect the performance of these GPs to the recent theory of signal propagation in random neural networks. ', 'original_lines': 'corresponding GP. Recently, kernel functions for multi-layer random neural net- works have been developed but only outside of a Bayesian framework. As such, previous work has not identified a correspondence between using these kernels as the covariance function for a GP and performing fully Bayesian prediction with a deep neural network. In this work, we derive the exact equivalence between infinitely wide, deep, net- works and GPs with a particular covariance function. We further develop a compu- tationally efficient pipeline to compute this covariance function. We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We observe that the trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncer- tainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and that the GP-based predictions typically outper- form those of finite-width networks. Finally we connect the prior distribution over weights and variances in our GP formulation to the recent development of signal propagation in random neural networks. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': '∗Both authors contributed equally to this work. 1Throughout this paper, we assume the conditions on the parameter distributions and nonlinearities are such that the Central Limit Theorem will hold; for instance, that the weight variance is scaled inversely proportional to the layer width. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'This correspondence implies that if we choose the hypothesis space to be the class of infinitely wide neural networks, an i.i.d. 
prior over weights and biases can be replaced with a corresponding GP prior over functions. As noted by (Williams, 1997), this substitution enables exact Bayesian ', 'modified_lines': 'inference for regression using neural networks. The computation requires building the necessary covariance matrices over the training and test sets and straightforward linear algebra computations. †Work done as a member of the Google AI Residency program (g.co/airesidency). ', 'original_lines': '†Work done as a member of the Google Brain Residency program (g.co/brainresidency). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': 'Abstract', 'after_section': None, 'context_after': 'In light of the resurgence in popularity of neural networks, it is timely to revisit this line of work. We delineate the correspondence between deep and wide neural networks and GPs and utilize it for ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2018 ', 'modified_lines': '', 'original_lines': ' inference for regression using neural networks. The computation requires building the necessary covariance matrices over the training and test sets and straightforward linear algebra computations. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1.1 RELATED WORK', 'after_section': '1.1 RELATED WORK', 'context_after': 'Drawing inspiration from the multi-layer nature of deep neural networks, there is a line of work considering various approaches to stacking GPs, such as deep GPs (Lawrence & Moore (2007); ', 'paragraph_idx': 9, 'before_section': '1.1 RELATED WORK', 'context_before': 'constructing kernels equivalent to infinitely wide deep neural networks, but their construction does not go beyond two hidden layers with nonlinearities. ', 'modified_lines': 'Related work has also appeared outside of the GP context but in compositional kernel construc- tions. Cho & Saul (2009) derives compositional kernels for polynomial rectified nonlinearities, which includes the Sign and ReLU nonlinearities, and can be used in GPs; our manner of com- posing kernels matches theirs, though the context is different. Daniely et al. (2016) extends the construction of compositional kernels to neural networks whose underlying directed acyclic graph is of general form. They also prove, utilizing the formalism of dual activations, that compositional kernels originating from fully-connected topologies with the same nonlinearity become degenerate when composed infinitely many times. In a different context than compositional kernels, Poole et al. (2016); Schoenholz et al. (2017) study the same underlying recurrence relation for the specific case of fully-connected networks and bounded nonlinearities. They distinguish regions in hyperparame- ter space with different fixed points and convergence behavior in the recurrence relations. The focus in these works was to better understand the expressivity and trainability of deep networks. ', 'original_lines': 'Related work has also appeared outside of the GP context but in compositional kernel constructions. Cho & Saul (2009) derives compositional kernels for polynomial rectified nonlinearities, which in- cludes the Sign and ReLU nonlinearities, and can be used in GPs; our manner of composing kernels matches theirs, though the context is different. Daniely et al. 
(2016) extends the construction of compositional kernels to neural networks whose underlying directed acyclic graph (which they term a “computation skeleton”) is of general form. They also prove, utilizing the formalism of dual activations, that compositional kernels originating from fully-connected topologies with the same nonlinearity become degenerate when composed infinitely many times. In a different context than compositional kernels, Poole et al. (2016); Schoenholz et al. (2017) study the same underlying re- currence relation for the specific case of fully-connected networks and bounded nonlinearities. They distinguish regions in hyperparameter space with different fixed points and convergence behavior in the recurrence relations. The focus in these works was to better understand the expressivity and trainability of deep networks. ', 'after_paragraph_idx': 10, 'before_paragraph_idx': 9}, {'section': 'Abstract', 'after_section': None, 'context_after': '1.2 SUMMARY OF CONTRIBUTIONS ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Another series of recent works (Wilson et al. (2016b;a); Al-Shedivat et al. (2017)), termed deep kernel learning, utilize GPs with base kernels which take in features produced by a deep multilayer ', 'modified_lines': 'neural network, and train the resulting model end-to-end. Our work differs from these in that our GP corresponds to a multilayer neural network. Additionally, our GP kernels have many fewer pa- rameters, and these parameters correspond to the hyperparameters of the equivalent neural network. ', 'original_lines': 'neural network; the model is trained end-to-end. Our work differs from these in that we do not learn properties of the kernel – in particular, the many free parameters of a neural network which provides an embedding – but rather use fixed basis functions. Furthermore, our GP corresponds to a multilayer neural network with all layers infinitely wide. We do perform a grid search over a few kernel hyperparameters, which could be learned by maximizing the marginal likelihood of the GP. 2 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2.2 REVIEW OF GAUSSIAN PROCESSES AND SINGLE-LAYER NEURAL NETWORKS', 'after_section': None, 'context_after': 'i (xα=1), ..., z1 i (x(cid:48))(cid:3) = σ2 b + σ2 w where we have introduced C(x, x(cid:48)) as in Neal (1994a); it is obtained by integrating against the i , z1 j for i (cid:54)= j are joint Gaussian and have zero covariance, they are guaranteed to be independent despite utilizing the same features produced by the hidden layer. 2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS Suppose that zl−1 and identically distributed). After l − 1 steps, the network computes ', 'paragraph_idx': 18, 'before_section': '2.2 REVIEW OF GAUSSIAN PROCESSES AND SINGLE-LAYER NEURAL NETWORKS', 'context_before': 'i ∼ GP(µ1, K 1), a GP with mean µ1 and covariance K 1, which are themselves independent of i. Because the parameters have zero mean, we have that µ1(x) = E (cid:2)z1 ', 'modified_lines': 'j(cid:48) are independent for j (cid:54)= j(cid:48). Moreover, since z1 i (x)(cid:3) = 0 and, b + σ2 E (cid:2)x1 i (x)x1 i (x(cid:48))(cid:3) ≡ σ2 wC(x, x(cid:48)), (2) K 1(x, x(cid:48)) ≡ E (cid:2)z1 i (x)z1 3 Published as a conference paper at ICLR 2018 distribution of W 0, b0. Note that, as any two z1 The arguments of the previous section can be extended to deeper layers by induction. 
We proceed by taking the hidden layer widths to be infinite in succession (N1 → ∞, N2 → ∞, etc.) as we continue with the induction, to guarantee that the input to the layer under consideration is already governed by a GP. In Appendix C we provide an alternative derivation in terms of Bayesian marginalization over intermediate layers, which does not depend on the order of limits, in the case of a Gaussian prior on the weights. A concurrent work (de G. Matthews et al., 2018) further derives the convergence rate towards a GP if all layers are taken to infinite width simultaneously, but at different rates. ', 'original_lines': ' K 1(x, x(cid:48)) ≡ E (cid:2)z1 i (x)z1 distribution of W 0, b0. Note that, as any two z1 wC(x, x(cid:48)), i (x)x1 i (x(cid:48))(cid:3) ≡ σ2 E (cid:2)x1 (2) i (x)(cid:3) = 0 and, b + σ2 The arguments of the previous section can be extended to deeper layers by induction. We proceed by taking the hidden layer widths to be infinite in succession (N1 → ∞, N2 → ∞, etc.) as we continue with the induction, to guarantee that the input to the layer under consideration is already governed by a GP. In Appendix C we provide an alternative derivation, in terms of marginalization over intermediate layers, which does not depend on the order of limits, in the case of a Gaussian prior on the weights. A concurrent work (de G. Matthews et al., 2018) further derives the convergence rate towards a GP if all layers are taken to infinite width simultaneously. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 18}, {'section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'after_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_after': ', but this is equivalent to integrating against the joint distribution of only zl−1 (x(cid:48)). The latter is described by ', 'paragraph_idx': 22, 'before_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_before': '(4) ', 'modified_lines': 'By induction, the expectation in Equation 4 is over the GP governing zl−1 ', 'original_lines': 'By induction, the expectation in Equation (4) is over the GP governing zl−1 ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'after_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_after': 'be computed analytically (Cho & Saul (2009); Daniely et al. (2016)). In the case of the ReLU non- linearity, it yields the well-known arccosine kernel (Cho & Saul (2009)) whose form we reproduce in Appendix B. When no analytic form exists, it can instead be efficiently computed numerically, as described in Section 2.5. 2.4 BAYESIAN TRAINING OF NEURAL NETWORKS USING GAUSSIAN PROCESS PRIORS Here we provide a short review of how a GP prior over functions can be used to do Bayesian infer- ence; see e.g. (Rasmussen & Williams, 2006) for a comprehensive review of GPs. Given a dataset D = {(x1, t1), ..., (xn, tn)} consisting of input-target pairs (x, t), we wish to make a Bayesian pre- diction at test point x∗ using a distribution over functions z(x). This distribution is constrained to take values z ≡ (z1, ..., zn) on the training inputs x ≡ (x1, ..., xn) and, ', 'paragraph_idx': 22, 'before_section': '2.3 GAUSSIAN PROCESSES AND DEEP NEURAL NETWORKS', 'context_before': 'In fact, these recurrence relations have appeared in other contexts. They are exactly the relations derived in the mean field theory of signal propagation in fully-connected random neural networks ', 'modified_lines': '(Poole et al. 
(2016); Schoenholz et al. (2017)) and also appear in the literature on compositional kernels (Cho & Saul (2009); Daniely et al. (2016)). For certain activation functions, Equation 5 can 4 Published as a conference paper at ICLR 2018 ', 'original_lines': '(Poole et al. (2016); Schoenholz et al. (2017)) and also appear in the literature on compositional ker- nels (Cho & Saul (2009); Daniely et al. (2016)). For certain activation functions, Equation (5) can 4 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 22, 'before_paragraph_idx': 22}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': 'of training-training and training-test points, recursively for all layers. For some nonlinearities, such as ReLU, this integration can be done analytically. However, to compute the kernel corresponding The most direct implementation of a numerical algorithm for KL would be to compute integrals independently for each pair of datapoints and each layer. This is prohibitively expensive and costs O (cid:0)n2 ', 'paragraph_idx': 25, 'before_section': None, 'context_before': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL Given an L-layer deep neural network with fixed hyperparameters, constructing the covariance ma- ', 'modified_lines': 'trix KL for the equivalent GP involves computing the Gaussian integral in Equation 4 for all pairs to arbitrary nonlinearities, the integral must be performed numerically. ', 'original_lines': 'trix KL for the equivalent GP involves computing the Gaussian integral in Equation (4) for all pairs to arbitrary nonlinearities, the integral must be performed numerically. Figure 6 illustrates close agreement between the kernel function computed numerically (as described below) and analytically, for the ReLU nonlinearity. It also illustrates the angular dependence of the kernel and its evolution with increasing depth. ', 'after_paragraph_idx': 25, 'before_paragraph_idx': None}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': 'involves numerically approximating a Gaussian integral, in terms of the marginal variances s and correlations c. We guarantee that the marginal variance is identical for each datapoint, by preprocessing all datapoints to have identical norm at the input layer, so the number of entries in the lookup table need only be nvnc. These entries are computed as2: (cid:21)(cid:33) ', 'paragraph_idx': 25, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'elements. Note that we are using fixed, rather than adaptive, sampling grids to allow oper- ations to be parallelized and reused across datapoints and layers. ', 'modified_lines': '2. Populate a matrix F containing a lookup table for the function Fφ in Equation 5. This 5 Published as a conference paper at ICLR 2018 ', 'original_lines': '5 Published as a conference paper at ICLR 2018 2. Populate a matrix F containing a lookup table for the function Fφ in Equation (5). This ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': '(cid:18) ', 'paragraph_idx': 25, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': '(10) ', 'modified_lines': '3. 
For every pair of datapoints x and x(cid:48) in layer l, compute K l (x, x(cid:48)) using Equation 5. ', 'original_lines': '3. For every pair of datapoints x and x(cid:48) in layer l, compute K l (x, x(cid:48)) using Equation (5). ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_after': 'Finally, note that the full computational pipeline is deterministic and differentiable. The shape and properties of a deep network kernel are purely determined by hyperparameters of the deep ', 'paragraph_idx': 26, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'train + ntrainntest)(cid:1). ', 'modified_lines': 'This computational recipe allows us to compute the covariance matrix for the NNGP correspond- ing to any well-behaved nonlinearity φ. All computational steps above can be implemented using accelerated tensor operations, and computation of KL is typically faster than solving the system of linear equations in Equation 8-9. Figure 6 illustrates the close agreement between the kernel func- tion computed numerically (using this approach) and analytically, for the ReLU nonlinearity. It also illustrates the angular dependence of the kernel and its evolution with increasing depth. ', 'original_lines': 'This computational recipe allows us to compute the covariance matrix for the NNGP corresponding to any well-behaved nonlinearity φ. All computational steps above can be implemented using accel- erated tensor operations, and computation of KL is typically faster than solving the system of linear equations in Equation (8)-(9). ', 'after_paragraph_idx': 27, 'before_paragraph_idx': 25}, {'section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'after_section': None, 'context_after': '3 EXPERIMENTAL RESULTS 3.1 DESCRIPTION CIFAR-10 datasets. The baseline neural network is a fully-connected network with identical width at each hidden layer. Training is on the mean squared error (MSE) loss, chosen so as to allow direct comparison to GP predictions. Formulating classification as regression often leads to good results (Rifkin & Klautau, 2004). Future work may involve evaluating the NNGP on a cross entropy loss using the approach in (Williams & Barber, 1998; Rasmussen & Williams, 2006). Training used the Adam optimizer (Kingma & Ba (2014)) with learning rate and initial weight/bias variances op- were encoded as a one-hot, zero-mean, regression target (i.e., entries of -0.1 for the incorrect class and 0.9 for the correct class). We constructed the covariance kernel numerically for ReLU and Tanh nonlinearities following the method described in Section 2.5. Performance: We find that the NNGP often outperforms trained finite width networks. See Table 1 and Figure 1. (a) Accuracy ', 'paragraph_idx': 28, 'before_section': '2.5 EFFICIENT IMPLEMENTATION OF THE GP KERNEL', 'context_before': 'likelihood w.r.t. the hyperparameters. Although this is not the focus of current work, we hope to return to this topic in follow-up work. ', 'modified_lines': 'An open source implementation of research/nngp. the algorithm is available at https://github.com/brain- We compare NNGPs with SGD3 trained neural networks on the permutation invariant MNIST and timized over validation error using the Google Vizier hyperparameter tuner (Golovin et al., 2017). Dropout was not used. 
In future work, it would be interesting to incorporate dropout into the NNGP covariance matrix using an approach like that in (Schoenholz et al., 2017). For the study, nonlineari- ties were chosen to be either rectified linear units (ReLU) or hyperbolic tangent (Tanh). Class labels 2For numerical reasons, in practice an independent 1D lookup table is built for the case that cj = 1. 3For all presented results, the variant of SGD used is Adam. Although not shown, we found vanilla SGD produced qualitatively similar results, with slightly higher MSE. 6 Published as a conference paper at ICLR 2018 ', 'original_lines': 'We have released an open source implementation of the algorithm.3 We compare NNGPs with SGD4 trained neural networks on the permutation invariant MNIST and timized over validation error using the Vizier hyperparameter tuner (Golovin et al., 2017). Dropout was not used. In future work, it would be interesting to incorporate dropout into the NNGP covari- ance matrix using an approach like that in (Schoenholz et al., 2017). For the study, nonlinearities were chosen to be either rectified linear units (ReLU) or hyperbolic tangent (Tanh). Class labels 2For numerical reasons, in practice an independent 1D lookup table is built for the case that cj = 1. 3https://github.com/brain-research/nngp 4For all presented results, the variant of SGD used is Adam. Although not shown, we found vanilla SGD produced qualitatively similar results, with slightly higher MSE. 6 Published as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 27}, {'section': '3.1 DESCRIPTION', 'after_section': '3.1 DESCRIPTION', 'context_after': 'tentially related, standpoints. (1) NNs are commonly believed to be powerful because of their ability to do flexible representation learning, while our NNGP uses fixed basis functions; nonetheless, in our experiments we find no salient performance advantage to the former. (2) It hints at a possible networks with five hidden layers on CIFAR-10 with a range of layer widths. For this experiment, we trained the networks using a standard cross entropy loss rather than MSE, leading to a slight difference in performance. Uncertainty: One benefit in using a GP is that, due to its Bayesian nature, all predictions have estimate of prediction variance associated with it (Equation 9). In our experiments, we observe that the NNGP uncertainty estimate is highly correlated with prediction error (Figure 3). 3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION Several prior works (Poole et al. (2016); Schoenholz et al. (2017); Daniely et al. (2016); Duvenaud uninteresting fixed point with depth l → ∞, in that K∞(x, x(cid:48)) becomes a constant or piecewise constant map. We now briefly relate our ability to train NNGPs with the convergence of K l(x, x(cid:48)) to the fixed-point kernel. ', 'paragraph_idx': 32, 'before_section': None, 'context_before': 'validation set) neural network across all widths and trials. Often this is the neural network with the largest width. ', 'modified_lines': 'We additionally find the performance of the best finite-width NNs, trained with a variant of SGD, approaches that of the NNGP with increasing layer width. This is interesting from at least two, po- relationship between SGD and Bayesian inference in certain regimes – were the neural networks trained in a fully Bayesian fashion, rather than by SGD, the approach to NNGP in the large width limit would be guaranteed. 
There is recent work suggesting that SGD can implement approximate Bayesian inference (Mandt et al., 2017) under certain assumptions. The similarity of the performance of the widest NN in Figure 1 with the NNGP suggests that the limit of infinite network width, which is inherent to the GP, is far from being a disadvantage. Indeed, in practice it is found that the best generalizing NNs are in fact the widest. To support this, in Fig- ure 2 we show generalization gap results from an experiment in which we train 180 fully-connected uncertainty estimates (Equation 9). For conventional neural networks, capturing the uncertainty in a model’s predictions is challenging (Gal, 2016). In the NNGP, every test point has an explicit et al. (2014)) have noted the recurrence relations Equation 5 commonly approach a functionally ', 'original_lines': 'Curiously, the performance of the best finite-width NNs, trained with a variant of SGD, approaches that of the NNGP with increasing layer width. We find this to be interesting from at least two, po- relationship between SGD and Bayesian inference in certain regimes. There is recent work sug- gesting that SGD can implement approximate Bayesian inference (Mandt et al., 2017) under certain assumptions. The similarity of the performance of the widest NN in Figure 1 with the NNGP suggests that the limit of infinite network width, which is inherent to the GP, is far from being a disadvantage. Indeed, in practice it is found that the best generalizing NNs are in fact the widest. To support this, in Figure 2 we show results on the generalization gap from an experiment in which we train 180 fully-connected uncertainty estimates (Equation (9)). For conventional neural networks, capturing the uncertainty in a model’s predictions is challenging (Gal, 2016). In the NNGP, every test point has an explicit et al. (2014)) have noted the recurrence relations Equation (5) commonly approach a functionally ', 'after_paragraph_idx': 32, 'before_paragraph_idx': None}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': '7 ', 'paragraph_idx': 37, 'before_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_before': 'We will be particularly interested in contextualizing our results in relation to Poole et al. (2016); Schoenholz et al. (2017) which analyzed the fixed points and the approach to them in detail for bounded nonlinearities. To briefly recapitulate: there are regions of hyperparameter space (called ', 'modified_lines': 'phases) where K∞(x, x(cid:48)) changes only quantitatively with σ2 b . However, there are low w and σ2 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': 'dimensional boundaries that separate different phases and between them the nature of K∞(x, x(cid:48)) changes qualitatively. For the Tanh nonlinearity, there are two distinct phases respectively called the “ordered” phase and inputs share common bias vectors and so all inputs end up just approaching the random bias. In this case the covariance K l(x, x(cid:48)) → q∗ for every pair of inputs x, x(cid:48), where q∗ is a constant that depends only on σ2 ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'w = 2.0, and σ2 ', 'modified_lines': 'the “chaotic” phase that can be understood as a competition between the weights and the biases of the network. 
A diagram showing these phases and the boundary between them is shown in Figure 4a. In the ordered phase, the features obtained by propagating an input through the each layer of the recursion become similar for dissimilar inputs. Fundamentally, this occurs because the different ', 'original_lines': 'phases) where K∞(x, x(cid:48)) changes only quantitatively with σ2 b . However, there are low w and σ2 the “chaotic” phase that can be understood as a competition between the weights and the biases of the network. A diagram showing these phases and the boundary between them is shown in Figure 4a. In the ordered phase, the features obtained by propagating an input through the each layer of the recursion become similar for dissimilar inputs. Fundamentally, this occurs because the different ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'after_section': None, 'context_after': 'w and σ2 8 ', 'paragraph_idx': 38, 'before_section': '3.2 RELATIONSHIP TO DEEP SIGNAL PROPAGATION', 'context_before': 'x = x(cid:48) but q∗c∗ for x (cid:54)= x(cid:48). Here c∗ < 1 is the fixed point correlation. In each of these regimes, there is also a finite depth-scale ξ which describes the characteristic number of layers over which the covariance function decays exponentially towards its fixed point form. Exactly at the boundary ', 'modified_lines': 'b )-space where the decay K l(x, x(cid:48)) towards its fixed between these two regimes is a line in (σ2 w, σ2 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 38}, {'section': 'Abstract', 'after_section': None, 'context_after': 'point is significantly slower and non-exponential. It was noted in Schoenholz et al. (2017) that this approach to the fixed-point covariance fundamentally bounded whether or not neural networks could successfully be trained. It was shown that initializing networks on this line allowed for significantly deeper neural networks to be trained. For ReLU networks a similar picture emerges, however there are some subtleties due to the un- bounded nature of the nonlinearity. In this case for all σ2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '0.5034 0.5558 ', 'modified_lines': '', 'original_lines': 'b )-space where the decay K l(x, x(cid:48)) towards its fixed between these two regimes is a line in (σ2 w, σ2 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
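[Editor's aside: both revisions in this record lean on exact Bayesian prediction, i.e. "solving the system of linear equations in Equation 8-9" and the per-test-point variance behind the uncertainty plots. The snippet below is a standard Cholesky-based sketch of that step, under assumed names; the noise argument is an assumed jitter/observation variance, not a value taken from the paper.]

```python
import numpy as np

def gp_predict(K_train, K_cross, k_test_diag, targets, noise=1e-4):
    """Sketch of the exact GP regression step the record cites as
    'Equation 8-9'; all names are ours, `noise` is an assumed jitter."""
    n = K_train.shape[0]
    L = np.linalg.cholesky(K_train + noise * np.eye(n))
    # alpha = (K_DD + noise * I)^{-1} t, via two triangular solves
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, targets))
    mean = K_cross @ alpha                        # posterior mean at test points
    v = np.linalg.solve(L, K_cross.T)
    var = k_test_diag - np.sum(v * v, axis=0)     # per-point posterior variance
    return mean, var
```

Given a joint kernel K over n training and m test points (for instance from the recursion sketched earlier), a usage would be: mean, var = gp_predict(K[:n, :n], K[n:, :n], np.diag(K)[n:], t).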
|
2018-02-24 01:38:18
|
ICLR.cc/2018/Conference
|
ByAr9y-0b
|
SyD3VWW0-
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 INTRODUCTION 1 Under review as a conference paper at ICLR 2018 2 RELATED WORK ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT ', 'modified_lines': 'We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimiza- tion problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. In experiments on standard datasets, autoencoders trained using the proposed method were found to be very promising when compared to those trained using traditional backpropaga- tion techniques, both in terms of training speed, as well as feature extraction and reconstruction performance. For much of the recent march of deep learning, gradient-based backpropagation methods, e.g. Stochastic Gradient Descent (SGD) and its variants, have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort. However, this surge of success of backpropagation-based methods in recent years has somewhat overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder- decoder networks and generative adversarial models, the reliance on backpropagation methods remains. While reinforcement learning methods are becoming increasingly popular, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD- based techniques that are inherently serial Taylor et al. (2016), vanishing gradients, especially for certain activation functions Hochreiter & Schmidhuber (1997), convergence of stochastic techniques to local optima Anandkumar & Ge (2016), and many more. For a well-referenced critique of gradient-based methods, we point the reader to Taylor et al. (2016). From another perspective, there has been marked progress in recent years in the area of non-convex optimization (beyond deep learning), which has resulted in scalable methods such as iterated hard thresholding Blumensath & Davies (2009) and alternating minimization Jain et al. (2013) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate solutions. 
In this work, we investigate a non-backpropagation strategy to train neural networks, leveraging recent advances in quasi-convex optimization. Our method is called DANTE (Deep AlterNations for Training autoEncoders), and it offers an alternating minimization-based technique for training neural networks - in particular, autoencoders. DANTE is based on a simple but useful observation that the problem of training a single hidden-layer autoencoder can be cast as a bi-quasiconvex optimization problem (described in Section 3.1). This observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. DANTE then uses efficient solvers for quasi-convex problems including normalized gradient descent Nesterov (1984) and stochastic normalized gradient descent Hazan et al. (2015) to train autoencoder networks. The key contributions of this work are summarized below: • We show that viewing each layer of an a neural network as applying an ensemble of generalized linear transformations, allows the problem of training the network to be cast as a bi-quasi-convex optimization problem (exact statement later). • We exploit this intuition by employing an alternating minimization strategy DANTE that reduces the problem of training the layers to quasi-convex optimization problems. • We utilize state-of-the-art Stochastic Normalized Gradient Descent (SNGD) technique Hazan et al. (2015) for quasi-convex optimization to provide an efficient implementation of DANTE for networks with sigmoidal activation functions. However, a limitation of SNGD is its inability to handle link non-differentiable functions such as the ReLU. • To overcome this limitation, we introduce the generalized ReLU, a variant of the popular ReLU activation function and show how the SNGD technique can be applied with the generalized ReLU function. This presents an augmentation in the state of the art in quasi- convex optimization and may be of independent interest. This allows DANTE to train AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that the SNGD method offers provably more rapid convergence with the general- ized ReLU function than it does even for the sigmoidal activation. This is corroborated in experiments as well. A key advantage of our approach is the ability to exploit these theo- retical results to set learning rates and batch sizes without any fine tuning/cross-validation required. • We also show DANTE can be easily extended to train deep AEs with multiple hidden layers. • We empirically validate DANTE with both the generalized ReLU and sigmoid activations and establish that DANTE does provide comparable or better test errors, reconstructions and classification performance (with the learned representations), when compared to an identical network train using standard used mini-batch SGD-based backpropagation. ', 'original_lines': 'In this work, we present a novel method, DANTE, to train neural networks - in particular, autoencoders - using alternating minimization. This method provides a different perspective in lieu of traditional gradient-based backpropagation com- monly used to train deep networks such as autoencoders. DANTE utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder con- figurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. 
ReLU) activation functions, we can perform alternations effectively. In experi- ments on standard datasets, autoencoders trained using DANTE were found to be very promising when compared to those trained using traditional backpropagation techniques, both in terms of solutions as well as feature extraction and reconstruc- tion performance. We also extended DANTE to multi-layer settings and showed that the proposed method performs promisingly in this setting too. For much of the recent march of deep learning, gradient-based backpropagation methods (e.g. Stochastic Gradient Descent, SGD) have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort. However, this surge of success of backpropagation-based methods in recent years has overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder-decoder networks and generative adversarial models, the reliance on backpropagation methods remains. While reinforcement learning methods are becoming increasingly used [REF] today, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD-based techniques that are inherently serial Taylor et al. (2016), vanishing gradients, especially for certain activation functions Hochreiter & Schmidhuber (1997), convergence of stochastic techniques to local optima Anandkumar & Ge (2016), and many more. For a well-referenced critique of gradient-based methods, we request the reader to refer Taylor et al. (2016). From another perspective, there has been marked progress in the area of non-convex optimization (beyond deep learning) in recent years, which has resulted in scalable methods such as iterated hard thresholding Blumensath & Davies (2009) and alternating minimization Jain et al. (2013) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate solutions. In this work, we seek to formulate a non-backpropagation strategy to train neural networks, leveraging a recent result in quasi-convex optimization. Our method is called DANTE (Deep AlterNations for Training autoEncoders), an alternating minimization-based technique for training neural networks - in particular, autoencoders. DANTE is based on our observation that the problem of training a single hidden-layer autoencoder can be cast as a bi-quasiconvex optimization problem (described in Section 3.1). This observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. 
DANTE then uses efficient solvers for quasi-convex problems including normalized gradient descent Nesterov (1984) and the more recent stochastic normalized gradient descent Hazan et al. (2015) to train an autoencoder. The key contributions of this work are summarized below: • We introduce DANTE, a new method to efficiently train neural networks using alternating minimization. DANTE trains Autoencoder (AE) networks through alternating optimization of quasi-convex - Strictly-Locally-Quasi-Convex (SLQC), to be specific - problems using an efficient stochastic solver. • DANTE views each layer of a neural network as a Generalized Linear Model (GLM). While it has been recently shown that a GLM with sigmoid activation function is SLQC Hazan et al. (2015), we introduce a generalized version of the popularly used ReLU activation function, and show that a GLM with such non-differentiable activation functions can also be shown to be SLQC. This allows us to apply our methodology to AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that the Stochastic Normalized Gradient Descent (SNGD) method offers provably more rapid convergence under this newly introduced activation function for an idealized GLM problem than it does for the sigmoidal activation function. A key advantage of our approach is that the theoretical result provides a direction to set the learning rate and batch size to provide an acceptable level of convergence while training the AE. • We empirically validate DANTE with both generalized ReLU and sigmoid activations and establish that the proposed method provides comparable or better test error, reconstructions and classification performance (with the learned representations), when compared with a regularly used mini-batch SGD setup. • We also show that the proposed methodology can be extended to train deep AEs, and our results on deep AEs show promise in this setting too. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 Under review as a conference paper at ICLR 2018 3 DANTE: DEEP ALTERNATIONS FOR TRAINING AUTOENCODERS 3.1 PROBLEM FORMULATION Consider a neural network with L layers. Each layer l ∈ {1, 2, . . . , L} has nl nodes and is character- ized by a linear operator Wl ∈ Rnl−1×nl and a non-linear activation function φl : Rnl → Rnl . The activations generated by the layer l are denoted by al ∈ Rnl . We denote by a0, the input activations f (W; x) = φL(cid:104)WL, φL−1(cid:104)WL−1, · · · , φ1(cid:104)W1, x(cid:105)(cid:105)(cid:105) ', 'paragraph_idx': 8, 'before_section': '2 RELATED WORK', 'context_before': 'Direction Method of Multipliers (ADMM) and Bregman iterations Taylor et al. (2016). The focus of this method, however, was on scaling the training of neural networks to a distributed setting on multiple cores across a computing cluster. Jaderberg also proposed the idea of ’synthetic gradients’ in ', 'modified_lines': 'Jaderberg et al. (2016). While this approach is interesting, this work is more focused towards a more efficient way to carry out gradient-based parameter updates in a neural network. However, in our work, we focus on an entirely new approach to training neural networks – in particular, autoencoders – using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the a range of datasets. 
Although alternating minimization has found much appeal in areas such as matrix factorization Jain et al. (2013), to the best of our knowledge, this is the first such effort in using alternating principles to train neural networks with related performance guarantees. In this section, we will first set notation and establish the problem setting, then present details of the DANTE method, including the SNGD algorithm. For sake of simplicity, we consider networks with just a single hidden layer. We then offer some theoretical insight intro DANTE’s inner workings, which also allow us to arrive at the generalized ReLU activation function, and finally describe how DANTE can be extended to deep networks with multiple hidden layers. and n0 to be the number of input activations i.e.a0 ∈ Rn0 . Each layer uses activations being fed into it to compute its own activations as al = φl(cid:104)Wl, al−1(cid:105) ∈ Rnl , where φ(cid:104)., .(cid:105) denotes φ((cid:104)., .(cid:105)) for simplicity of notation. A multi-layer neural network is formed by nesting such layers to form a composite function f given as follows: ', 'original_lines': 'Jaderberg et al. (2016). While this approach is interesting, this work is more towards a more efficient way to carry out gradient-based parameter updates in a neural network. In this work, we focus on a new approach to train neural networks - in particular, autoencoders - using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the considered datasets. To the best of our knowledge, this is the first such effort in using alternating SNGD to train neural networks with related performance guarantees. and n0 to be the number of input activations i.e.a0 ∈ Rn0 . Each layer uses activations being fed into it to compute its own activations as al = φl(cid:104)Wl, al−1(cid:105), where φ(cid:104)., .(cid:105) denotes φ((cid:104)., .(cid:105)) for simplicity of notation. A multi-layer neural network is formed by nesting such layers to form a composite function f given as follows: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 8}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': '3.1 PROBLEM FORMULATION', 'context_after': 'represented as f (W; x) = φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) to describe our methodology. We describe in a later section on how this idea can be extended to deep multi-layer autoencoders. (Note that our definition of a single-layer autoencoder is equivalent to a two-layer neural network in a classification setting, by ', 'paragraph_idx': 11, 'before_section': '3.1 PROBLEM FORMULATION', 'context_before': '(3) ', 'modified_lines': 'For purpose of simplicity and convenience, we first consider the case of a single-layer autoencoder, ', 'original_lines': 'For purposes of simplicity and convenience, we consider the case of a single-layer autoencoder, ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'min W1 3 ', 'paragraph_idx': 14, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': 'Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 2 ', 'modified_lines': 'where z = φ1(cid:104)W1, x(cid:105). 
Similarly, fixing W2 turns the problem into yet another Generalized Linear Model problem, this time with W1 as the parameter (note that φW2(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105)). Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 2. ', 'original_lines': 'where z = φ1(cid:104)W1, x(cid:105). Similarly, inverting the network and fixing W2 turns the problem into yet another Generalized Linear Model problem, this time with W1 as the parameter. Ex∼D(cid:107)φ(cid:104)W1, x(cid:105) − x(cid:107)2 2 where φ(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result in Hazan et al. (2015) that a GLM with a differentiable activation, such as sigmoid, is Strictly Locally Quasi-Convex (SLQC), which allows us to use SNGD to solve each sub-problem of the alternating setup efficiently (with performance guarantees). We further show that a GLM with a non-differentiable activation – in particular, a generalized Rectified Linear Unit (ReLU) – is also SLQC, thus allowing us to extend the proposed alternating strategy, DANTE, to ReLU-based autoencoders too. We note that while we have developed this idea to train autoencoders in this work (since our approach relates closely to the greedy layerwise training in autoencoders), DANTE can be used to train standard multi-layer neural networks too (discussed in Section 5). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'Input 1 for t = 1 to T do 2 ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 Algorithm 1: Stochastic Normalized Gradient Descent (SNGD) ', 'modified_lines': ':Number of iterations T , training data S = {(xi, yi)}m minibatch size b, Initialization parameters w0 i=1 ∈ Rd × R, learning rate η, ', 'original_lines': ':Number of iterations T , {(xi, yi)}m parameters w0 i=1 ∈ Rd, learning rate η, minibatch size b, Initial ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_after': 'i=1(yi − φ(cid:104)w, xi(cid:105))2 3 ', 'paragraph_idx': 16, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': '(cid:107)gt(cid:107) wt+1 = wt − η · ˆgt ', 'modified_lines': 'i=1 ∼ Unif(S) ', 'original_lines': 'i=1 ∼ D ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 16}, {'section': '4 56 end', 'after_section': '4 56 end', 'context_after': 'Output :Model given by wT Algorithm 2: DANTE: Deep AlterNations for Training autoEncoders Input :Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , Number of 2, learning rate η, minibatch size b 1, w0 ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '5 6 end ', 'modified_lines': '//Select a random mini-batch of training points Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result by Hazan et al. 
(2015) that shows that GLMs with nice, differentiable link functions such as sigmoid (or even a combination of sigmoids such as φW2(·)), satisfy a property the authors name Strict Locally Quasi-Convexity (SLQC), which allows techniques such as SNGD to solve the GLM problems effectively. This is quite advantageous for us since it allows us to solve each sub-problem of the alternating setup efficiently. In a subsequent section, we will show that GLMs with non-differentiable activation – in particular, a generalized Rectified Linear Unit (ReLU) – can also satisfy the SLQC property, thus allowing us to extend the proposed alternating strategy, DANTE, to ReLU-based autoencoders too. We note that while we have developed this idea to train autoencoders in this work (since our approach relates closely to the greedy layer-wise training in autoencoders), DANTE can be used to train standard multi-layer neural networks too (discussed in Section 5). 3.2 METHODOLOGY We begin our presentation of the proposed method by briefly reviewing the Stochastic Normalized Gradient Descent (SNGD) method, which is used to execute the inner steps of DANTE. We explain in the next subsection, the rationale behind the choice of SNGD as the optimizer. We stress that although DANTE does use stochastic gradient-style methods internally (such as the SNGD algorithm), the overall strategy adopted by DANTE is not a descent-based strategy, rather an alternating-minimization strategy. Stochastic Normalized Gradient Descent (SNGD): Normalized Gradient Descent (NGD) is an adaptation of traditional Gradient Descent where the updates in each iteration are purely based on the direction of the gradients, while ignoring their magnitudes. This is achieved by normalizing the gradients. SNGD is the stochastic version of NGD, where weight updates are performed using individual (randomly chosen) training samples, instead of the complete set of samples. Mini-batch SNGD generalizes this by applying updates to the parameters at the end of every mini-batch of samples, as does mini-batch Stochastic Gradient Descent (SGD). In the remainder of this paper, we refer to mini-batch SNGD as SNGD itself, as is common for SGD. Algorithm 1 describes the SNGD methodology for a generic GLM problem. DANTE: Given this background, Algorithm 2 outlines the proposed method, DANTE. Consider the autoencoder problem below for a single hidden layer network: min W f (W1, W2) = Ex∼D(cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)2 2 Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a GLM problem: min W Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 2, where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, 4 Under review as a conference paper at ICLR 2018 iterations for SNGD TSN GD, initial values w0 ', 'original_lines': 'iterations for each SNGD TSN GD, initial values w0 ', 'after_paragraph_idx': 17, 'before_paragraph_idx': None}, {'section': '4 56 end', 'after_section': None, 'context_after': 'min W 2, 3.3 RATIONALE To describe the motivation for our alternating strategy in DANTE, we first define key terms and introduced in Hazan et al. (2015)) and show that under certain realizability conditions, empirical objective functions induced by Generalized Linear Models (GLMs) are locally quasi-convex. 
To this end, we introduce a new activation function, the generalized ReLU, and show that the GLM with the ', 'paragraph_idx': 20, 'before_section': None, 'context_before': 'Output :w1, w2 ', 'modified_lines': 'fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 where φW2 (cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). This is once again solved by mini-batch SNGD, as before. results that are essential to our work. We present the notion of a locally quasi-convex function (as ', 'original_lines': '3.2 METHODOLOGY We begin our presentation of the proposed method by briefly reviewing the Stochastic Normalized Gradient Descent (SNGD) method, which is used in each step of our alternating strategy. (We explain in the next section, the rationale for the choice of SNGD). Stochastic Normalized Gradient Descent (SNGD): Normalized Gradient Descent (NGD) is an adaptation of traditional Gradient Descent where the updates in each iteration are purely based on the direction of the gradients, and not the gradients themselves (achieved by normalizing the gradients). SNGD is the stochastic version of NGD, where weight updates are performed at the end of every training sample, instead of the complete set of samples. Mini-batch SNGD updates the parameters at the end of every mini-batch of samples, as in mini-batch Stochastic Gradient Descent (SGD). In the remainder of this paper, we refer to mini-batch SNGD as SNGD itself, as is common for SGD. Algorithm 1 describes the SNGD methodology for a generic GLM problem. DANTE: Given this background, Algorithm 2 outlines the proposed method, DANTE. Consider the autoencoder problem below: f (W1, W2) = Ex∼D(cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)2 2 Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a GLM problem: min W Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, upon fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: where v = φ2(cid:104)W2, x(cid:105). This is once again solved by mini-batch SNGD, as before. min W Ex∼D(cid:107)φ1(cid:104)W1, v(cid:105) − x(cid:107)2 2, 4 Under review as a conference paper at ICLR 2018 results that are essential for our work. We present the notion of a locally quasi-convex function (as ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Proof. Consider ||w|| ≤ W such that ˆerrm(w) = 1 i=1(yi − φ(cid:104)w, xi(cid:105))2 ≥ (cid:15), where m is the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:15), 2b2W a ', 'modified_lines': '', 'original_lines': ' 5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'very high probability, as shown above in Theorem 3.6. Furthermore, while the GLM error function with sigmoid activation has κ = eW Hazan et al. 
(2015), we obtain κ = 2b2W (linear in W ) for ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ReLU activation function (since tanh is a rescaled sigmoid and the generalized ReLU includes many variants of ReLUs, this covers a large family of activation functions), DANTE uses SNGD to solve a SLQC problem in each alternating step, which converges to the optima solution in each step with ', 'modified_lines': '', 'original_lines': ' 6 Under review as a conference paper at ICLR 2018 (a) Phase - 1 (b) Phase - 2 Figure 1: An illustration of the proposed multi-layer DANTE (best viewed in color). In each training phase, the outer pairs of weights (shaded in gold) are treated as a single-layer autoencoder to be trained using single-layer DANTE, followed by the inner single-layer auroencoder (shaded in black). These two phases are followed by a finetuning process that may be empirically determined, similar to standard deep autoencoder training. Algorithm 3: DANTE for a multi-layer autoencoder Input :Encoder e with weights U, Decoder d with weights V, Number of hidden layers 2n − 1, Learning rates η, Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , initial values U0, V0, minibatch size b 1 t := 1 2 for l = 1 to n do while |f (Ut, Vt) − f (Ut−1, Vt−1)| ≥ (cid:15) or t < TAM do //Use SNGD for minimizations ut l ← arg min u (cid:16) Ex∼D d(e(x, Ut [1:l−1] · u · Ut−1 [l+1:n−1]), Vt [1:l−1] · Vt−1 [l:n−1]) − x (cid:17)2 vt n−l ← arg min v (cid:16) Ex∼D t := t + 1 d(e(x, Ut [1:l] · Ut−1 [l+1:n−1]), Vt [1:n−l−1] · v · Vt−1 [n−l+1:n−1]) − x (cid:17)2 3 4 5 end 6 7 end Output :U, V ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'Figure 2: Plots of training (left) and test (right) errors (y-axis) vs training iterations for single hidden- layer autoencoder with Sigmoid (top) and Leaky ReLU (bottom) activations for both DANTE and SGD. 4 EXPERIMENTS AND RESULTS Our first set of experiments were carried out with the standard benchmarking setup of the MNIST dataset1, with 60, 000 data samples used for training and 10, 000 samples for testing. Experiments ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': 'Algorithm 3: DANTE for a multi-layer autoencoder Input :Encoder e with weights U, Decoder d with weights V, Number of hidden layers 2n − 1, Learning rates η, Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , initial values U0, V0, minibatch size b 1 t := 1 2 for l = 1 to n do while |f (Ut, Vt) − f (Ut−1, Vt−1)| ≥ (cid:15) or t < TAM do //Use SNGD for minimizations ut l ← arg min u (cid:16) Ex∼D d(e(x, Ut [1:l−1] · u · Ut−1 [l+1:n−1]), Vt [1:l−1] · Vt−1 [l:n−1]) − x (cid:17)2 vt n−l ← arg min v (cid:16) Ex∼D t := t + 1 d(e(x, Ut [1:l] · Ut−1 [l+1:n−1]), Vt [1:n−l−1] · v · Vt−1 [n−l+1:n−1]) − x (cid:17)2 3 4 5 end 6 7 end Output :U, V (a) Single-layer autoencoder with Sigmoid activation (b) Single-layer autoencoder with Leaky ReLU acti- vation We validated DANTE by training autoencoders on an expanded 32 × 32 variant of the standard MNIST dataset LeCun et al. (1998) as well as other datasets from the UCI repository. We also conducted experiments with multi-layer autoencoders, as well as studies with varying number of hidden neurons on single-layer autoencoders. 
', 'original_lines': '(a) Single-layer autoencoder with Sigmoid activa- tion (b) Single-layer autoencoder with Leaky ReLU activation (a) ionosphere dataset (b) satimage dataset (c) svmguide4 dataset (d) USPS dataset Figure 3: Comparison of proposed DANTE vs SGD on various other datasets. The x-axis on all sub-figures is number of mini-batch iterations and y-axis denotes test error. (Best viewed in color) We validated DANTE by training autoencoders on the standard MNIST dataset LeCun et al. (1998) as well as other datasets from the UCI repository.We also conducted experiments with multi-layer autoencoders as well as with single-layer autoencoders with varying number of hidden neurons. Each of these experiments and the corresponding results are described below. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'classification accuracy results using the hidden representations on various datasets are given in Table 1. ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'experiments to study the effectiveness of the feature representations learned using the models trained using DANTE and SGD in the same setting. After training, we passed the dataset through the autoencoder, extracted the hidden layer representations, and then trained a linear SVM. The ', 'modified_lines': '', 'original_lines': ' 1http://yann.lecun.com/exdb/mnist/ 8 Under review as a conference paper at ICLR 2018 MNIST ionosphere satimage svmguide4 USPS SGD DANTE 88.65% 89.71% 92.45% 96.22% 16.50 % 15.70 % 70.37% 87.65% 89.49% 90.43 % Figure 4: Reconstructions using the autoencoder models with ReLU activation. Top: Model trained using SGD; Bottom: Model trained us- ing DANTE. Table 1: Classification accuracies using ReLU autoencoder features on different datasets ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '4 EXPERIMENTS AND RESULTS', 'after_section': None, 'context_after': '(a) Architecture: 1024->500->500->1024 (b) Architecture: 1024->750->500->750->1024 ', 'paragraph_idx': 45, 'before_section': '4 EXPERIMENTS AND RESULTS', 'context_before': 'datasets. It can be clearly seen that the proposed DANTE method demonstrates superior generalization performance on most datasets. ', 'modified_lines': 'Multi-Layer Autoencoder: We also studied the performance of the proposed multi-layer DANTE method (Algorithm 3) for the MNIST dataset. Figure 5 shows the results obtained by stacking two 2https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ 9 Under review as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 44}, {'section': '4 EXPERIMENTS AND RESULTS', 'after_section': '4 EXPERIMENTS AND RESULTS', 'context_after': 'single-layer autoencoders, each with the generalized (leaky) ReLU activation (note that a two single- layer autoencoder corresponds to 4 layers in the overall network, as mentioned in the architecture on the figure). Evidently, the figure shows promising performance for DANTE in this experiment too. In this work, we presented a novel methodology, Deep AlterNations for Training autoEncoders Under review as a conference paper at ICLR 2018 2a , w∗(cid:17) (cid:15), b2W (cid:16) REFERENCES ', 'paragraph_idx': 40, 'before_section': None, 'context_before': 'Figure 5: Plots of (a) training error and (b) test error vs training iterations for multi-layer autoencoders with generalized (leaky) ReLU activations for both DANTE and SGD. 
', 'modified_lines': '(a) 50 hidden neurons (b) 200 hidden neurons (c) 300 hidden neurons (d) 400 hidden neurons (e) 500 hidden neurons (f) 600 hidden neurons Figure 6: Plots of training (left) and test (right) error vs training iterations on a single-layer au- toencoder with generalized (leaky) ReLU activation, with a varying number of nodes in the hidden layer. 5 CONCLUSIONS AND FUTURE WORK (DANTE), to efficiently train autoencoders using alternating minimization, thus providing an effective alternative to backpropagation. We formulate training each layer of an autoencoder as a Generalized Linear Model (GLM) problem with an activation function, and leverage recent results to use Stochastic Normalized Gradient Descent (SNGD) to train each step of the autoencoder. While recent work showed that a GLM with a sigmoid activation function is SLQC (which SNGD solves with a provable convergence bound), we introduced a new generalized ReLU activation function, and showed that a GLM with this activation function is also SLQC, thus allowing us to expand the applicability of the proposed method to autoencoders with both sigmoid and ReLU family of activation functions. 10 In particular, we extended the definitions of local quasi-convexity to use subgradients in order to − SLQC, which improves prove that the GLM with generalized ReLU activation is the convergence bound for the corresponding GLM with the generalized ReLU (as compared to a GLM with sigmoid). We also showed how DANTE can be extended to train multi-layer autoencoders. We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it performs comparably or better than the standard SGD approach to train autoencoders. DANTE can not only be used to train autoencoders, but can be extended to train standard multi-layer neural networks too. One could use DANTE to train a neural network layer-wise in a round robin fashion, and then finetune end-to-end using SGD. In case of autoencoders with tied weights, one would ideally use DANTE to learn the weights of the required layers, and then finetune end-to-end using a method such as SGD. Our future work will involve a more careful study of the proposed method for deeper autoencoders, as well as in theoretically analyzing the end-to-end convergence of the method for deep multi-layer autoencoders. ', 'original_lines': 'Multi-Layer Autoencoder: We also studied the performance of the proposed multi-layer DANTE method (Algorithm 3) for the MNIST dataset. Figure 5 shows the results obtained by stacking two 5 DISCUSSIONS AND CONCLUSION (DANTE), to efficiently train autoencoders using a non-backpropagation method, in particular, using alternating minimization. The proposed method formulates training an autoencoder as a set of 2https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ 9 (a) 50 hidden neurons (b) 200 hidden neurons (c) 300 hidden neurons (d) 400 hidden neurons (e) 500 hidden neurons (f) 600 hidden neurons Figure 6: Plots of training (left) and test (right) error vs training iterations on a single-layer au- toencoder with generalized (leaky) ReLU activation, with a varying number of nodes in the hidden layer. Generalized Linear Model (GLM) problems, which allows us to use an efficient solver proposed in Hazan et al. (2015) in an alternating fashion to train deep autoencoders. 
While the earlier work was proposed only for GLMs with sigmoidal activation functions, we propose an extension of local- quasi-convexity to a wider class of activation functions - in particular, the ReLU. We introduce a generalized ReLU activation which allows the Stochastic Normalized Gradient Descent method to have provably faster convergence, when compared to a sigmoidal activation. We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it performs comparably or better than the standard SGD approach to train autoencoders. Our future work will involve a more careful study of the proposed method for deeper autoencoders, as well as in variations of the autoencoder such as the Convolutional Autoencoder Masci et al. (2011). 6 CONCLUSION In this work, we have extended the definitions of quasi-convexity to use subgradients in order to − SLQC, enable us to prove that the GLM with generalized ReLU activation is mathematically validating that ReLU is far superior to Sigmoid as an activation function. Additionally, we have proposed a new algorithm for training autoencoders involving alternating minimization techniques and Stochastic Normal Gradient Descent. We showed experimentally, on using hyperparameters suggested theoretically, our proposed algorithm drastically outperforms traditional methods, with the case of ReLUs showing immense improvement. In this process of experimental validation, we have demonstrated the sensitivity of our algorithm to learning rate and batch size. 10 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 40, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Yurii E. Nesterov. Minimization methods for nonsmooth convex and quasiconvex functions. Matekon, 29:519–531, 1984. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. ', 'modified_lines': '', 'original_lines': 'Jonathan Masci, Ueli Meier, Dan Cire¸san, and Jürgen Schmidhuber. Stacked convolutional auto- encoders for hierarchical feature extraction. Artificial Neural Networks and Machine Learning– ICANN 2011, pp. 52–59, 2011. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 19:10:38
|
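The revision text in the row above centers on Algorithm 1, the Stochastic Normalized Gradient Descent (SNGD) inner solver for a generic GLM loss err(w) = mean_i (y_i − φ⟨w, x_i⟩)². Below is a minimal NumPy sketch of that update rule, assuming caller-supplied phi and grad_phi callables (hypothetical names) and the mini-batch variant the text describes; it illustrates the stated procedure under those assumptions and is not code from the submission itself.

import numpy as np

def sngd_glm(X, y, phi, grad_phi, w0, lr=0.1, T=1000, batch=64, seed=0):
    # Minimal sketch of mini-batch SNGD (Algorithm 1) for the GLM loss
    # err(w) = mean_i (y_i - phi(<w, x_i>))^2; phi and grad_phi are the
    # activation and its (sub)gradient, both supplied by the caller.
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(T):
        idx = rng.choice(len(X), size=min(batch, len(X)), replace=False)
        Xb, yb = X[idx], y[idx]          # random mini-batch
        z = Xb @ w
        resid = phi(z) - yb
        # gradient of the squared GLM loss on the mini-batch
        g = 2.0 * (Xb * (resid * grad_phi(z))[:, None]).mean(axis=0)
        norm = np.linalg.norm(g)
        if norm > 0:
            w -= lr * g / norm           # normalized step: direction only
    return w

For the generalized (leaky) ReLU used in the same revision's experiments, one could pass, e.g., phi = lambda z: np.where(z > 0, z, 0.01 * z) and grad_phi = lambda z: np.where(z > 0, 1.0, 0.01).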
ICLR.cc/2018/Conference
|
SyD3VWW0-
|
BkCSDWbAb
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 INTRODUCTION 1 Under review as a conference paper at ICLR 2018 2 RELATED WORK ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT ', 'modified_lines': 'In this work, we present a novel method, DANTE, to train neural networks - in particular, autoencoders - using alternating minimization. This method provides a different perspective in lieu of traditional gradient-based backpropagation com- monly used to train deep networks such as autoencoders. DANTE utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder con- figurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform alternations effectively. In experi- ments on standard datasets, autoencoders trained using DANTE were found to be very promising when compared to those trained using traditional backpropagation techniques, both in terms of solutions as well as feature extraction and reconstruc- tion performance. We also extended DANTE to multi-layer settings and showed that the proposed method performs promisingly in this setting too. For much of the recent march of deep learning, gradient-based backpropagation methods (e.g. Stochastic Gradient Descent, SGD) have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort. However, this surge of success of backpropagation-based methods in recent years has overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder-decoder networks and generative adversarial models, the reliance on backpropagation methods remains. While reinforcement learning methods are becoming increasingly used [REF] today, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD-based techniques that are inherently serial Taylor et al. (2016), vanishing gradients, especially for certain activation functions Hochreiter & Schmidhuber (1997), convergence of stochastic techniques to local optima Anandkumar & Ge (2016), and many more. For a well-referenced critique of gradient-based methods, we request the reader to refer Taylor et al. (2016). From another perspective, there has been marked progress in the area of non-convex optimization (beyond deep learning) in recent years, which has resulted in scalable methods such as iterated hard thresholding Blumensath & Davies (2009) and alternating minimization Jain et al. (2013) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate solutions. 
In this work, we seek to formulate a non-backpropagation strategy to train neural networks, leveraging a recent result in quasi-convex optimization. Our method is called DANTE (Deep AlterNations for Training autoEncoders), an alternating minimization-based technique for training neural networks - in particular, autoencoders. DANTE is based on our observation that the problem of training a single hidden-layer autoencoder can be cast as a bi-quasiconvex optimization problem (described in Section 3.1). This observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. DANTE then uses efficient solvers for quasi-convex problems including normalized gradient descent Nesterov (1984) and the more recent stochastic normalized gradient descent Hazan et al. (2015) to train an autoencoder. The key contributions of this work are summarized below: • We introduce DANTE, a new method to efficiently train neural networks using alternating minimization. DANTE trains Autoencoder (AE) networks through alternating optimization of quasi-convex - Strictly-Locally-Quasi-Convex (SLQC), to be specific - problems using an efficient stochastic solver. • DANTE views each layer of a neural network as a Generalized Linear Model (GLM). While it has been recently shown that a GLM with sigmoid activation function is SLQC Hazan et al. (2015), we introduce a generalized version of the popularly used ReLU activation function, and show that a GLM with such non-differentiable activation functions can also be shown to be SLQC. This allows us to apply our methodology to AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that the Stochastic Normalized Gradient Descent (SNGD) method offers provably more rapid convergence under this newly introduced activation function for an idealized GLM problem than it does for the sigmoidal activation function. A key advantage of our approach is that the theoretical result provides a direction to set the learning rate and batch size to provide an acceptable level of convergence while training the AE. • We empirically validate DANTE with both generalized ReLU and sigmoid activations and establish that the proposed method provides comparable or better test error, reconstructions and classification performance (with the learned representations), when compared with a regularly used mini-batch SGD setup. • We also show that the proposed methodology can be extended to train deep AEs, and our results on deep AEs show promise in this setting too. ', 'original_lines': 'We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimiza- tion problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. 
In experiments on standard datasets, autoencoders trained using the proposed method were found to be very promising when compared to those trained using traditional backpropaga- tion techniques, both in terms of training speed, as well as feature extraction and reconstruction performance. For much of the recent march of deep learning, gradient-based backpropagation methods, e.g. Stochastic Gradient Descent (SGD) and its variants, have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort. However, this surge of success of backpropagation-based methods in recent years has somewhat overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder- decoder networks and generative adversarial models, the reliance on backpropagation methods remains. While reinforcement learning methods are becoming increasingly popular, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD- based techniques that are inherently serial Taylor et al. (2016), vanishing gradients, especially for certain activation functions Hochreiter & Schmidhuber (1997), convergence of stochastic techniques to local optima Anandkumar & Ge (2016), and many more. For a well-referenced critique of gradient-based methods, we point the reader to Taylor et al. (2016). From another perspective, there has been marked progress in recent years in the area of non-convex optimization (beyond deep learning), which has resulted in scalable methods such as iterated hard thresholding Blumensath & Davies (2009) and alternating minimization Jain et al. (2013) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate solutions. In this work, we investigate a non-backpropagation strategy to train neural networks, leveraging recent advances in quasi-convex optimization. Our method is called DANTE (Deep AlterNations for Training autoEncoders), and it offers an alternating minimization-based technique for training neural networks - in particular, autoencoders. DANTE is based on a simple but useful observation that the problem of training a single hidden-layer autoencoder can be cast as a bi-quasiconvex optimization problem (described in Section 3.1). This observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. DANTE then uses efficient solvers for quasi-convex problems including normalized gradient descent Nesterov (1984) and stochastic normalized gradient descent Hazan et al. (2015) to train autoencoder networks. 
The key contributions of this work are summarized below: • We show that viewing each layer of an a neural network as applying an ensemble of generalized linear transformations, allows the problem of training the network to be cast as a bi-quasi-convex optimization problem (exact statement later). • We exploit this intuition by employing an alternating minimization strategy DANTE that reduces the problem of training the layers to quasi-convex optimization problems. • We utilize state-of-the-art Stochastic Normalized Gradient Descent (SNGD) technique Hazan et al. (2015) for quasi-convex optimization to provide an efficient implementation of DANTE for networks with sigmoidal activation functions. However, a limitation of SNGD is its inability to handle link non-differentiable functions such as the ReLU. • To overcome this limitation, we introduce the generalized ReLU, a variant of the popular ReLU activation function and show how the SNGD technique can be applied with the generalized ReLU function. This presents an augmentation in the state of the art in quasi- convex optimization and may be of independent interest. This allows DANTE to train AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that the SNGD method offers provably more rapid convergence with the general- ized ReLU function than it does even for the sigmoidal activation. This is corroborated in experiments as well. A key advantage of our approach is the ability to exploit these theo- retical results to set learning rates and batch sizes without any fine tuning/cross-validation required. • We also show DANTE can be easily extended to train deep AEs with multiple hidden layers. • We empirically validate DANTE with both the generalized ReLU and sigmoid activations and establish that DANTE does provide comparable or better test errors, reconstructions and classification performance (with the learned representations), when compared to an identical network train using standard used mini-batch SGD-based backpropagation. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 Under review as a conference paper at ICLR 2018 3 DANTE: DEEP ALTERNATIONS FOR TRAINING AUTOENCODERS 3.1 PROBLEM FORMULATION Consider a neural network with L layers. Each layer l ∈ {1, 2, . . . , L} has nl nodes and is character- ized by a linear operator Wl ∈ Rnl−1×nl and a non-linear activation function φl : Rnl → Rnl . The activations generated by the layer l are denoted by al ∈ Rnl . We denote by a0, the input activations f (W; x) = φL(cid:104)WL, φL−1(cid:104)WL−1, · · · , φ1(cid:104)W1, x(cid:105)(cid:105)(cid:105) ', 'paragraph_idx': 8, 'before_section': '2 RELATED WORK', 'context_before': 'Direction Method of Multipliers (ADMM) and Bregman iterations Taylor et al. (2016). The focus of this method, however, was on scaling the training of neural networks to a distributed setting on multiple cores across a computing cluster. Jaderberg also proposed the idea of ’synthetic gradients’ in ', 'modified_lines': 'Jaderberg et al. (2016). While this approach is interesting, this work is more towards a more efficient way to carry out gradient-based parameter updates in a neural network. In this work, we focus on a new approach to train neural networks - in particular, autoencoders - using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the considered datasets. 
To the best of our knowledge, this is the first such effort in using alternating SNGD to train neural networks with related performance guarantees. and n0 to be the number of input activations i.e.a0 ∈ Rn0 . Each layer uses activations being fed into it to compute its own activations as al = φl(cid:104)Wl, al−1(cid:105), where φ(cid:104)., .(cid:105) denotes φ((cid:104)., .(cid:105)) for simplicity of notation. A multi-layer neural network is formed by nesting such layers to form a composite function f given as follows: ', 'original_lines': 'Jaderberg et al. (2016). While this approach is interesting, this work is more focused towards a more efficient way to carry out gradient-based parameter updates in a neural network. However, in our work, we focus on an entirely new approach to training neural networks – in particular, autoencoders – using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the a range of datasets. Although alternating minimization has found much appeal in areas such as matrix factorization Jain et al. (2013), to the best of our knowledge, this is the first such effort in using alternating principles to train neural networks with related performance guarantees. In this section, we will first set notation and establish the problem setting, then present details of the DANTE method, including the SNGD algorithm. For sake of simplicity, we consider networks with just a single hidden layer. We then offer some theoretical insight intro DANTE’s inner workings, which also allow us to arrive at the generalized ReLU activation function, and finally describe how DANTE can be extended to deep networks with multiple hidden layers. and n0 to be the number of input activations i.e.a0 ∈ Rn0 . Each layer uses activations being fed into it to compute its own activations as al = φl(cid:104)Wl, al−1(cid:105) ∈ Rnl , where φ(cid:104)., .(cid:105) denotes φ((cid:104)., .(cid:105)) for simplicity of notation. A multi-layer neural network is formed by nesting such layers to form a composite function f given as follows: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 8}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': '3.1 PROBLEM FORMULATION', 'context_after': 'represented as f (W; x) = φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) to describe our methodology. We describe in a later section on how this idea can be extended to deep multi-layer autoencoders. (Note that our definition of a single-layer autoencoder is equivalent to a two-layer neural network in a classification setting, by ', 'paragraph_idx': 9, 'before_section': '3.1 PROBLEM FORMULATION', 'context_before': '(3) ', 'modified_lines': 'For purposes of simplicity and convenience, we consider the case of a single-layer autoencoder, ', 'original_lines': 'For purpose of simplicity and convenience, we first consider the case of a single-layer autoencoder, ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'min W1 3 ', 'paragraph_idx': 12, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': 'Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 2 ', 'modified_lines': 'where z = φ1(cid:104)W1, x(cid:105). Similarly, inverting the network and fixing W2 turns the problem into yet another Generalized Linear Model problem, this time with W1 as the parameter. 
Ex∼D(cid:107)φ(cid:104)W1, x(cid:105) − x(cid:107)2 2 where φ(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result in Hazan et al. (2015) that a GLM with a differentiable activation, such as sigmoid, is Strictly Locally Quasi-Convex (SLQC), which allows us to use SNGD to solve each sub-problem of the alternating setup efficiently (with performance guarantees). We further show that a GLM with a non-differentiable activation – in particular, a generalized Rectified Linear Unit (ReLU) – is also SLQC, thus allowing us to extend the proposed alternating strategy, DANTE, to ReLU-based autoencoders too. We note that while we have developed this idea to train autoencoders in this work (since our approach relates closely to the greedy layerwise training in autoencoders), DANTE can be used to train standard multi-layer neural networks too (discussed in Section 5). ', 'original_lines': 'where z = φ1(cid:104)W1, x(cid:105). Similarly, fixing W2 turns the problem into yet another Generalized Linear Model problem, this time with W1 as the parameter (note that φW2(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105)). Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 2. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 12}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'Input 1 for t = 1 to T do 2 ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 Algorithm 1: Stochastic Normalized Gradient Descent (SNGD) ', 'modified_lines': ':Number of iterations T , {(xi, yi)}m parameters w0 i=1 ∈ Rd, learning rate η, minibatch size b, Initial ', 'original_lines': ':Number of iterations T , training data S = {(xi, yi)}m minibatch size b, Initialization parameters w0 i=1 ∈ Rd × R, learning rate η, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': 'Output :Model given by wT Algorithm 2: DANTE: Deep AlterNations for Training autoEncoders Input :Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , Number of 2, learning rate η, minibatch size b 1, w0 ', 'paragraph_idx': 8, 'before_section': None, 'context_before': '5 6 end ', 'modified_lines': 'iterations for each SNGD TSN GD, initial values w0 ', 'original_lines': '//Select a random mini-batch of training points Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result by Hazan et al. (2015) that shows that GLMs with nice, differentiable link functions such as sigmoid (or even a combination of sigmoids such as φW2(·)), satisfy a property the authors name Strict Locally Quasi-Convexity (SLQC), which allows techniques such as SNGD to solve the GLM problems effectively. This is quite advantageous for us since it allows us to solve each sub-problem of the alternating setup efficiently. 
In a subsequent section, we will show that GLMs with non-differentiable activation – in particular, a generalized Rectified Linear Unit (ReLU) – can also satisfy the SLQC property, thus allowing us to extend the proposed alternating strategy, DANTE, to ReLU-based autoencoders too. We note that while we have developed this idea to train autoencoders in this work (since our approach relates closely to the greedy layer-wise training in autoencoders), DANTE can be used to train standard multi-layer neural networks too (discussed in Section 5). 3.2 METHODOLOGY We begin our presentation of the proposed method by briefly reviewing the Stochastic Normalized Gradient Descent (SNGD) method, which is used to execute the inner steps of DANTE. We explain in the next subsection, the rationale behind the choice of SNGD as the optimizer. We stress that although DANTE does use stochastic gradient-style methods internally (such as the SNGD algorithm), the overall strategy adopted by DANTE is not a descent-based strategy, rather an alternating-minimization strategy. Stochastic Normalized Gradient Descent (SNGD): Normalized Gradient Descent (NGD) is an adaptation of traditional Gradient Descent where the updates in each iteration are purely based on the direction of the gradients, while ignoring their magnitudes. This is achieved by normalizing the gradients. SNGD is the stochastic version of NGD, where weight updates are performed using individual (randomly chosen) training samples, instead of the complete set of samples. Mini-batch SNGD generalizes this by applying updates to the parameters at the end of every mini-batch of samples, as does mini-batch Stochastic Gradient Descent (SGD). In the remainder of this paper, we refer to mini-batch SNGD as SNGD itself, as is common for SGD. Algorithm 1 describes the SNGD methodology for a generic GLM problem. DANTE: Given this background, Algorithm 2 outlines the proposed method, DANTE. Consider the autoencoder problem below for a single hidden layer network: min W f (W1, W2) = Ex∼D(cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)2 2 Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a GLM problem: min W Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 2, where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, 4 Under review as a conference paper at ICLR 2018 iterations for SNGD TSN GD, initial values w0 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 ← arg min', 'after_section': None, 'context_after': 'min W 2, 3.3 RATIONALE To describe the motivation for our alternating strategy in DANTE, we first define key terms and introduced in Hazan et al. (2015)) and show that under certain realizability conditions, empirical objective functions induced by Generalized Linear Models (GLMs) are locally quasi-convex. To this end, we introduce a new activation function, the generalized ReLU, and show that the GLM with the ', 'paragraph_idx': 18, 'before_section': '1 ← arg min', 'context_before': 'Output :w1, w2 ', 'modified_lines': '3.2 METHODOLOGY We begin our presentation of the proposed method by briefly reviewing the Stochastic Normalized Gradient Descent (SNGD) method, which is used in each step of our alternating strategy. (We explain in the next section, the rationale for the choice of SNGD). 
Stochastic Normalized Gradient Descent (SNGD): Normalized Gradient Descent (NGD) is an adaptation of traditional Gradient Descent where the updates in each iteration are purely based on the direction of the gradients, and not the gradients themselves (achieved by normalizing the gradients). SNGD is the stochastic version of NGD, where weight updates are performed at the end of every training sample, instead of the complete set of samples. Mini-batch SNGD updates the parameters at the end of every mini-batch of samples, as in mini-batch Stochastic Gradient Descent (SGD). In the remainder of this paper, we refer to mini-batch SNGD as SNGD itself, as is common for SGD. Algorithm 1 describes the SNGD methodology for a generic GLM problem. DANTE: Given this background, Algorithm 2 outlines the proposed method, DANTE. Consider the autoencoder problem below: f (W1, W2) = Ex∼D(cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)2 2 Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a GLM problem: min W Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, upon fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: where v = φ2(cid:104)W2, x(cid:105). This is once again solved by mini-batch SNGD, as before. min W Ex∼D(cid:107)φ1(cid:104)W1, v(cid:105) − x(cid:107)2 2, 4 Under review as a conference paper at ICLR 2018 results that are essential for our work. We present the notion of a locally quasi-convex function (as ', 'original_lines': 'fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 where φW2 (cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). This is once again solved by mini-batch SNGD, as before. results that are essential to our work. We present the notion of a locally quasi-convex function (as ', 'after_paragraph_idx': None, 'before_paragraph_idx': 18}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Theorem 3.5. In the noisy GLM with generalized ReLU activation, assuming ||w∗|| ≤ W , given ˆerr(w) is w ∈ B(0, W ), then with probability ≥ 1 − δ after m ≥ 4W b2 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'in Step 5, we simply use the given bounds on the variables xi, w, w∗ due to the setup of the problem (w ∈ Bd(0, W ), and xi ∈ Bd, the unit d-dimensional ball, as in Defn 2.6). We also prove a similar result for the Noisy GLM below. ', 'modified_lines': '', 'original_lines': ' 6 Under review as a conference paper at ICLR 2018 (a) Phase - 1 (b) Phase - 2 Figure 1: An illustration of the proposed multi-layer DANTE (best viewed in color). In each training phase, the outer pairs of weights (shaded in gold) are treated as a single-layer autoencoder to be trained using single-layer DANTE, followed by the inner single-layer auroencoder (shaded in black). These two phases are followed by a finetuning process that may be empirically determined, similar to standard deep autoencoder training. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 end', 'after_section': None, 'context_after': 'Figure 2: Plots of training (left) and test (right) errors (y-axis) vs training iterations for single hidden- layer autoencoder with Sigmoid (top) and Leaky ReLU (bottom) activations for both DANTE and SGD. 4 EXPERIMENTS AND RESULTS Our first set of experiments were carried out with the standard benchmarking setup of the MNIST dataset1, with 60, 000 data samples used for training and 10, 000 samples for testing. Experiments ', 'paragraph_idx': 29, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': '(a) Single-layer autoencoder with Sigmoid activa- tion (b) Single-layer autoencoder with Leaky ReLU activation (a) ionosphere dataset (b) satimage dataset (c) svmguide4 dataset (d) USPS dataset Figure 3: Comparison of proposed DANTE vs SGD on various other datasets. The x-axis on all sub-figures is number of mini-batch iterations and y-axis denotes test error. (Best viewed in color) We validated DANTE by training autoencoders on the standard MNIST dataset LeCun et al. (1998) as well as other datasets from the UCI repository.We also conducted experiments with multi-layer autoencoders as well as with single-layer autoencoders with varying number of hidden neurons. Each of these experiments and the corresponding results are described below. ', 'original_lines': 'Algorithm 3: DANTE for a multi-layer autoencoder Input :Encoder e with weights U, Decoder d with weights V, Number of hidden layers 2n − 1, Learning rates η, Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , initial values U0, V0, minibatch size b 1 t := 1 2 for l = 1 to n do while |f (Ut, Vt) − f (Ut−1, Vt−1)| ≥ (cid:15) or t < TAM do //Use SNGD for minimizations ut l ← arg min u (cid:16) Ex∼D d(e(x, Ut [1:l−1] · u · Ut−1 [l+1:n−1]), Vt [1:l−1] · Vt−1 [l:n−1]) − x (cid:17)2 vt n−l ← arg min v (cid:16) Ex∼D t := t + 1 d(e(x, Ut [1:l] · Ut−1 [l+1:n−1]), Vt [1:n−l−1] · v · Vt−1 [n−l+1:n−1]) − x (cid:17)2 3 4 5 end 6 7 end Output :U, V (a) Single-layer autoencoder with Sigmoid activation (b) Single-layer autoencoder with Leaky ReLU acti- vation We validated DANTE by training autoencoders on an expanded 32 × 32 variant of the standard MNIST dataset LeCun et al. (1998) as well as other datasets from the UCI repository. We also conducted experiments with multi-layer autoencoders, as well as studies with varying number of hidden neurons on single-layer autoencoders. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 2b. The results for ReLU showed an improvement, and DANTE was consistently superior to SGD across the iterations. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Autoencoder with ReLU Activation: A single-layer autoencoder was trained using DANTE and SGD with 600 hidden units, a Leaky ReLU activation (with leakiness parameter 0.01), a learning rate of 0.001, Mean-Squared Error loss function and a minibatch size of 500. The results are shown in ', 'modified_lines': '', 'original_lines': ' 1http://yann.lecun.com/exdb/mnist/ 8 Under review as a conference paper at ICLR 2018 (a) ionosphere dataset (b) satimage dataset (c) svmguide4 dataset (d) USPS dataset Figure 3: Comparison of proposed DANTE vs SGD on various other datasets. The x-axis on all sub-figures is number of mini-batch iterations and y-axis denotes test error. 
(Best viewed in color) MNIST ionosphere satimage svmguide4 USPS SGD DANTE 88.65% 89.71% 92.45% 96.22% 16.50 % 15.70 % 70.37% 87.65% 89.49% 90.43 % Figure 4: Reconstructions using the autoencoder models with ReLU activation. Top: Model trained using SGD; Bottom: Model trained us- ing DANTE. Table 1: Classification accuracies using ReLU autoencoder features on different datasets ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS AND RESULTS', 'after_section': '4 EXPERIMENTS AND RESULTS', 'context_after': 'Multi-Layer Autoencoder: We also studied the performance of the proposed multi-layer DANTE method (Algorithm 3) for the MNIST dataset. Figure 5 shows the results obtained by stacking two 2https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ ', 'paragraph_idx': 38, 'before_section': None, 'context_before': 'datasets. It can be clearly seen that the proposed DANTE method demonstrates superior generalization performance on most datasets. ', 'modified_lines': '(a) Architecture: 1024->500->500->1024 (b) Architecture: 1024->750->500->750->1024 Figure 5: Plots of (a) training error and (b) test error vs training iterations for multi-layer autoencoders with generalized (leaky) ReLU activations for both DANTE and SGD. single-layer autoencoders, each with the generalized (leaky) ReLU activation (note that a two single- layer autoencoder corresponds to 4 layers in the overall network, as mentioned in the architecture on the figure). Evidently, the figure shows promising performance for DANTE in this experiment too. 5 DISCUSSIONS AND CONCLUSION In this work, we presented a novel methodology, Deep AlterNations for Training autoEncoders (DANTE), to efficiently train autoencoders using a non-backpropagation method, in particular, using alternating minimization. The proposed method formulates training an autoencoder as a set of ', 'original_lines': '', 'after_paragraph_idx': 38, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(a) 50 hidden neurons ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': '', 'original_lines': ' (a) Architecture: 1024->500->500->1024 (b) Architecture: 1024->750->500->750->1024 Figure 5: Plots of (a) training error and (b) test error vs training iterations for multi-layer autoencoders with generalized (leaky) ReLU activations for both DANTE and SGD. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '10 Under review as a conference paper at ICLR 2018 REFERENCES ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'toencoder with generalized (leaky) ReLU activation, with a varying number of nodes in the hidden layer. ', 'modified_lines': 'Generalized Linear Model (GLM) problems, which allows us to use an efficient solver proposed in Hazan et al. (2015) in an alternating fashion to train deep autoencoders. While the earlier work was proposed only for GLMs with sigmoidal activation functions, we propose an extension of local- quasi-convexity to a wider class of activation functions - in particular, the ReLU. We introduce a generalized ReLU activation which allows the Stochastic Normalized Gradient Descent method to have provably faster convergence, when compared to a sigmoidal activation. 
We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it performs comparably or better than the standard SGD approach to train autoencoders. Our future work will involve a more careful study of the proposed method for deeper autoencoders, as well as in variations of the autoencoder such as the Convolutional Autoencoder Masci et al. (2011). 6 CONCLUSION In this work, we have extended the definitions of quasi-convexity to use subgradients in order to 2a , w∗(cid:17) − SLQC, enable us to prove that the GLM with generalized ReLU activation is mathematically validating that ReLU is far superior to Sigmoid as an activation function. (cid:15), b2W (cid:16) Additionally, we have proposed a new algorithm for training autoencoders involving alternating minimization techniques and Stochastic Normal Gradient Descent. We showed experimentally, on using hyperparameters suggested theoretically, our proposed algorithm drastically outperforms traditional methods, with the case of ReLUs showing immense improvement. In this process of experimental validation, we have demonstrated the sensitivity of our algorithm to learning rate and batch size. ', 'original_lines': 'single-layer autoencoders, each with the generalized (leaky) ReLU activation (note that a two single- layer autoencoder corresponds to 4 layers in the overall network, as mentioned in the architecture on the figure). Evidently, the figure shows promising performance for DANTE in this experiment too. 5 CONCLUSIONS AND FUTURE WORK In this work, we presented a novel methodology, Deep AlterNations for Training autoEncoders (DANTE), to efficiently train autoencoders using alternating minimization, thus providing an effective alternative to backpropagation. We formulate training each layer of an autoencoder as a Generalized Linear Model (GLM) problem with an activation function, and leverage recent results to use Stochastic Normalized Gradient Descent (SNGD) to train each step of the autoencoder. While recent work showed that a GLM with a sigmoid activation function is SLQC (which SNGD solves with a provable convergence bound), we introduced a new generalized ReLU activation function, and showed that a GLM with this activation function is also SLQC, thus allowing us to expand the applicability of the proposed method to autoencoders with both sigmoid and ReLU family of activation functions. In particular, we extended the definitions of local quasi-convexity to use subgradients in order to − SLQC, which improves prove that the GLM with generalized ReLU activation is the convergence bound for the corresponding GLM with the generalized ReLU (as compared to a GLM with sigmoid). We also showed how DANTE can be extended to train multi-layer autoencoders. We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it performs comparably or better than the standard SGD approach to train autoencoders. 2a , w∗(cid:17) (cid:15), b2W (cid:16) DANTE can not only be used to train autoencoders, but can be extended to train standard multi-layer neural networks too. One could use DANTE to train a neural network layer-wise in a round robin fashion, and then finetune end-to-end using SGD. In case of autoencoders with tied weights, one would ideally use DANTE to learn the weights of the required layers, and then finetune end-to-end using a method such as SGD. 
Our future work will involve a more careful study of the proposed method for deeper autoencoders, as well as in theoretically analyzing the end-to-end convergence of the method for deep multi-layer autoencoders. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 6}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, and Tom Goldstein. Training In 33rd International ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning Internal Representation by Back-propagating Errors. Nature, 323(9):533–536, 1986. ', 'modified_lines': '', 'original_lines': ' 11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
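A minimal Python sketch of the generalized (leaky) ReLU referenced in this record; the piecewise parameterization (slope a on the negative side, slope b on the positive side, with 0 < a <= b) is an illustrative assumption, chosen to satisfy the b-Lipschitz and minimum-quasigradient-a properties the SLQC argument relies on:

    import numpy as np

    def generalized_relu(x, a=0.01, b=1.0):
        # Piecewise-linear activation: slope b for x >= 0, slope a for x < 0.
        # With 0 < a <= b it is b-Lipschitz and every subgradient lies in [a, b].
        return np.where(x >= 0.0, b * x, a * x)

    def generalized_relu_subgrad(x, a=0.01, b=1.0):
        # A valid subgradient everywhere, including the kink at 0.
        return np.where(x >= 0.0, b, a)

    if __name__ == "__main__":
        z = np.linspace(-2.0, 2.0, 5)
        print(generalized_relu(z))          # approx [-0.02 -0.01  0.  1.  2.]
        print(generalized_relu_subgrad(z))  # [0.01 0.01 1. 1. 1.]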
|
2017-10-27 19:21:42
|
ICLR.cc/2018/Conference
|
BkCSDWbAb
|
B1jrEMbCZ
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 INTRODUCTION 1 Under review as a conference paper at ICLR 2018 2 RELATED WORK ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ABSTRACT ', 'modified_lines': 'We present DANTE, a novel method for training neural networks, in particular autoencoders, using the alternating minimization principle. DANTE provides a distinct perspective in lieu of traditional gradient-based backpropagation techniques commonly used to train deep networks. It utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimiza- tion problem. We show that for autoencoder configurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. ReLU) activation functions, we can perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. In experiments on standard datasets, autoencoders trained using the proposed method were found to be very promising when compared to those trained using traditional backpropaga- tion techniques, both in terms of training speed, as well as feature extraction and reconstruction performance. For much of the recent march of deep learning, gradient-based backpropagation methods, e.g. Stochastic Gradient Descent (SGD) and its variants, have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort. However, this surge of success of backpropagation-based methods in recent years has somewhat overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder- decoder networks and generative adversarial models, the reliance on backpropagation methods remains. While reinforcement learning methods are becoming increasingly popular, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD- based techniques that are inherently serial Taylor et al. (2016), vanishing gradients, especially for certain activation functions Hochreiter & Schmidhuber (1997), convergence of stochastic techniques to local optima Anandkumar & Ge (2016), and many more. For a well-referenced critique of gradient-based methods, we point the reader to Taylor et al. (2016). From another perspective, there has been marked progress in recent years in the area of non-convex optimization (beyond deep learning), which has resulted in scalable methods such as iterated hard thresholding Blumensath & Davies (2009) and alternating minimization Jain et al. (2013) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate solutions. 
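The alternating-minimization principle cited here (Jain et al. (2013)) can be illustrated with a small sketch, assuming a toy rank-k matrix factorization objective of our own choosing; each subproblem is a convex least-squares solve even though the joint problem is not, which is the same pattern DANTE exploits:

    import numpy as np

    def als_factorize(M, k=2, iters=50, lam=1e-3):
        # min over U, V of ||M - U V^T||_F^2: fix V and solve a ridge
        # regression for U in closed form, then fix U and solve for V.
        m, n = M.shape
        rng = np.random.default_rng(0)
        U = rng.standard_normal((m, k))
        V = rng.standard_normal((n, k))
        I = lam * np.eye(k)
        for _ in range(iters):
            U = M @ V @ np.linalg.inv(V.T @ V + I)
            V = M.T @ U @ np.linalg.inv(U.T @ U + I)
        return U, V

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))
        U, V = als_factorize(M)
        print("residual:", np.linalg.norm(M - U @ V.T))  # near 0 for rank 2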
In this work, we investigate a non-backpropagation strategy to train neural networks, leveraging recent advances in quasi-convex optimization. Our method is called DANTE (Deep AlterNations for Training autoEncoders), and it offers an alternating minimization-based technique for training neural networks - in particular, autoencoders. DANTE is based on a simple but useful observation that the problem of training a single hidden-layer autoencoder can be cast as a bi-quasiconvex optimization problem (described in Section 3.1). This observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. DANTE then uses efficient solvers for quasi-convex problems including normalized gradient descent Nesterov (1984) and stochastic normalized gradient descent Hazan et al. (2015) to train autoencoder networks. The key contributions of this work are summarized below: • We show that viewing each layer of an a neural network as applying an ensemble of generalized linear transformations, allows the problem of training the network to be cast as a bi-quasi-convex optimization problem (exact statement later). • We exploit this intuition by employing an alternating minimization strategy DANTE that reduces the problem of training the layers to quasi-convex optimization problems. • We utilize state-of-the-art Stochastic Normalized Gradient Descent (SNGD) technique Hazan et al. (2015) for quasi-convex optimization to provide an efficient implementation of DANTE for networks with sigmoidal activation functions. However, a limitation of SNGD is its inability to handle link non-differentiable functions such as the ReLU. • To overcome this limitation, we introduce the generalized ReLU, a variant of the popular ReLU activation function and show how the SNGD technique can be applied with the generalized ReLU function. This presents an augmentation in the state of the art in quasi- convex optimization and may be of independent interest. This allows DANTE to train AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that the SNGD method offers provably more rapid convergence with the general- ized ReLU function than it does even for the sigmoidal activation. This is corroborated in experiments as well. A key advantage of our approach is the ability to exploit these theo- retical results to set learning rates and batch sizes without any fine tuning/cross-validation required. • We also show DANTE can be easily extended to train deep AEs with multiple hidden layers. • We empirically validate DANTE with both the generalized ReLU and sigmoid activations and establish that DANTE does provide comparable or better test errors, reconstructions and classification performance (with the learned representations), when compared to an identical network train using standard used mini-batch SGD-based backpropagation. ', 'original_lines': 'In this work, we present a novel method, DANTE, to train neural networks - in particular, autoencoders - using alternating minimization. This method provides a different perspective in lieu of traditional gradient-based backpropagation com- monly used to train deep networks such as autoencoders. DANTE utilizes an adaptation of quasi-convex optimization techniques to cast autoencoder training as a bi-quasi-convex optimization problem. We show that for autoencoder con- figurations with both differentiable (e.g. sigmoid) and non-differentiable (e.g. 
ReLU) activation functions, we can perform alternations effectively. In experi- ments on standard datasets, autoencoders trained using DANTE were found to be very promising when compared to those trained using traditional backpropagation techniques, both in terms of solutions as well as feature extraction and reconstruc- tion performance. We also extended DANTE to multi-layer settings and showed that the proposed method performs promisingly in this setting too. For much of the recent march of deep learning, gradient-based backpropagation methods (e.g. Stochastic Gradient Descent, SGD) have been the mainstay of practitioners. The use of these methods, especially on vast amounts of data, has led to unprecedented progress in several areas of artificial intelligence. On one hand, the intense focus on these techniques has led to an intimate understanding of hardware requirements and code optimizations needed to execute these routines on large datasets in a scalable manner. Today, myriad off-the-shelf and highly optimized packages exist that can churn reasonably large datasets on GPU architectures with relatively mild human involvement and little bootstrap effort. However, this surge of success of backpropagation-based methods in recent years has overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder-decoder networks and generative adversarial models, the reliance on backpropagation methods remains. While reinforcement learning methods are becoming increasingly used [REF] today, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD-based techniques that are inherently serial Taylor et al. (2016), vanishing gradients, especially for certain activation functions Hochreiter & Schmidhuber (1997), convergence of stochastic techniques to local optima Anandkumar & Ge (2016), and many more. For a well-referenced critique of gradient-based methods, we request the reader to refer Taylor et al. (2016). From another perspective, there has been marked progress in the area of non-convex optimization (beyond deep learning) in recent years, which has resulted in scalable methods such as iterated hard thresholding Blumensath & Davies (2009) and alternating minimization Jain et al. (2013) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate solutions. In this work, we seek to formulate a non-backpropagation strategy to train neural networks, leveraging a recent result in quasi-convex optimization. Our method is called DANTE (Deep AlterNations for Training autoEncoders), an alternating minimization-based technique for training neural networks - in particular, autoencoders. DANTE is based on our observation that the problem of training a single hidden-layer autoencoder can be cast as a bi-quasiconvex optimization problem (described in Section 3.1). This observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. 
DANTE then uses efficient solvers for quasi-convex problems including normalized gradient descent Nesterov (1984) and the more recent stochastic normalized gradient descent Hazan et al. (2015) to train an autoencoder. The key contributions of this work are summarized below: • We introduce DANTE, a new method to efficiently train neural networks using alternating minimization. DANTE trains Autoencoder (AE) networks through alternating optimization of quasi-convex - Strictly-Locally-Quasi-Convex (SLQC), to be specific - problems using an efficient stochastic solver. • DANTE views each layer of a neural network as a Generalized Linear Model (GLM). While it has been recently shown that a GLM with sigmoid activation function is SLQC Hazan et al. (2015), we introduce a generalized version of the popularly used ReLU activation function, and show that a GLM with such non-differentiable activation functions can also be shown to be SLQC. This allows us to apply our methodology to AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that the Stochastic Normalized Gradient Descent (SNGD) method offers provably more rapid convergence under this newly introduced activation function for an idealized GLM problem than it does for the sigmoidal activation function. A key advantage of our approach is that the theoretical result provides a direction to set the learning rate and batch size to provide an acceptable level of convergence while training the AE. • We empirically validate DANTE with both generalized ReLU and sigmoid activations and establish that the proposed method provides comparable or better test error, reconstructions and classification performance (with the learned representations), when compared with a regularly used mini-batch SGD setup. • We also show that the proposed methodology can be extended to train deep AEs, and our results on deep AEs show promise in this setting too. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '2 Under review as a conference paper at ICLR 2018 3 DANTE: DEEP ALTERNATIONS FOR TRAINING AUTOENCODERS 3.1 PROBLEM FORMULATION Consider a neural network with L layers. Each layer l ∈ {1, 2, . . . , L} has nl nodes and is character- ized by a linear operator Wl ∈ Rnl−1×nl and a non-linear activation function φl : Rnl → Rnl . The activations generated by the layer l are denoted by al ∈ Rnl . We denote by a0, the input activations f (W; x) = φL(cid:104)WL, φL−1(cid:104)WL−1, · · · , φ1(cid:104)W1, x(cid:105)(cid:105)(cid:105) ', 'paragraph_idx': 8, 'before_section': '2 RELATED WORK', 'context_before': 'Direction Method of Multipliers (ADMM) and Bregman iterations Taylor et al. (2016). The focus of this method, however, was on scaling the training of neural networks to a distributed setting on multiple cores across a computing cluster. Jaderberg also proposed the idea of ’synthetic gradients’ in ', 'modified_lines': 'Jaderberg et al. (2016). While this approach is interesting, this work is more focused towards a more efficient way to carry out gradient-based parameter updates in a neural network. However, in our work, we focus on an entirely new approach to training neural networks – in particular, autoencoders – using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the a range of datasets. 
Although alternating minimization has found much appeal in areas such as matrix factorization Jain et al. (2013), to the best of our knowledge, this is the first such effort in using alternating principles to train neural networks with related performance guarantees. In this section, we will first set notation and establish the problem setting, then present details of the DANTE method, including the SNGD algorithm. For sake of simplicity, we consider networks with just a single hidden layer. We then offer some theoretical insight intro DANTE’s inner workings, which also allow us to arrive at the generalized ReLU activation function, and finally describe how DANTE can be extended to deep networks with multiple hidden layers. and n0 to be the number of input activations i.e.a0 ∈ Rn0 . Each layer uses activations being fed into it to compute its own activations as al = φl(cid:104)Wl, al−1(cid:105) ∈ Rnl , where φ(cid:104)., .(cid:105) denotes φ((cid:104)., .(cid:105)) for simplicity of notation. A multi-layer neural network is formed by nesting such layers to form a composite function f given as follows: ', 'original_lines': 'Jaderberg et al. (2016). While this approach is interesting, this work is more towards a more efficient way to carry out gradient-based parameter updates in a neural network. In this work, we focus on a new approach to train neural networks - in particular, autoencoders - using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the considered datasets. To the best of our knowledge, this is the first such effort in using alternating SNGD to train neural networks with related performance guarantees. and n0 to be the number of input activations i.e.a0 ∈ Rn0 . Each layer uses activations being fed into it to compute its own activations as al = φl(cid:104)Wl, al−1(cid:105), where φ(cid:104)., .(cid:105) denotes φ((cid:104)., .(cid:105)) for simplicity of notation. A multi-layer neural network is formed by nesting such layers to form a composite function f given as follows: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 8}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': '3.1 PROBLEM FORMULATION', 'context_after': 'represented as f (W; x) = φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) to describe our methodology. We describe in a later section on how this idea can be extended to deep multi-layer autoencoders. (Note that our definition of a single-layer autoencoder is equivalent to a two-layer neural network in a classification setting, by ', 'paragraph_idx': 11, 'before_section': '3.1 PROBLEM FORMULATION', 'context_before': '(3) ', 'modified_lines': 'For purpose of simplicity and convenience, we first consider the case of a single-layer autoencoder, ', 'original_lines': 'For purposes of simplicity and convenience, we consider the case of a single-layer autoencoder, ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'min W1 3 ', 'paragraph_idx': 14, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': 'Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 2 ', 'modified_lines': 'where z = φ1(cid:104)W1, x(cid:105). 
Similarly, fixing W2 turns the problem into yet another Generalized Linear Model problem, this time with W1 as the parameter (note that φW2(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105)). Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 2. ', 'original_lines': 'where z = φ1(cid:104)W1, x(cid:105). Similarly, inverting the network and fixing W2 turns the problem into yet another Generalized Linear Model problem, this time with W1 as the parameter. Ex∼D(cid:107)φ(cid:104)W1, x(cid:105) − x(cid:107)2 2 where φ(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result in Hazan et al. (2015) that a GLM with a differentiable activation, such as sigmoid, is Strictly Locally Quasi-Convex (SLQC), which allows us to use SNGD to solve each sub-problem of the alternating setup efficiently (with performance guarantees). We further show that a GLM with a non-differentiable activation – in particular, a generalized Rectified Linear Unit (ReLU) – is also SLQC, thus allowing us to extend the proposed alternating strategy, DANTE, to ReLU-based autoencoders too. We note that while we have developed this idea to train autoencoders in this work (since our approach relates closely to the greedy layerwise training in autoencoders), DANTE can be used to train standard multi-layer neural networks too (discussed in Section 5). ', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'Input 1 for t = 1 to T do 2 ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 Algorithm 1: Stochastic Normalized Gradient Descent (SNGD) ', 'modified_lines': ':Number of iterations T , training data S = {(xi, yi)}m minibatch size b, Initialization parameters w0 i=1 ∈ Rd × R, learning rate η, ', 'original_lines': ':Number of iterations T , {(xi, yi)}m parameters w0 i=1 ∈ Rd, learning rate η, minibatch size b, Initial ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_after': 'i=1(yi − φ(cid:104)w, xi(cid:105))2 3 ', 'paragraph_idx': 16, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': '(cid:107)gt(cid:107) wt+1 = wt − η · ˆgt ', 'modified_lines': 'i=1 ∼ Unif(S) ', 'original_lines': 'i=1 ∼ D ', 'after_paragraph_idx': 16, 'before_paragraph_idx': 16}, {'section': '4 56 end', 'after_section': '4 56 end', 'context_after': 'Output :Model given by wT Algorithm 2: DANTE: Deep AlterNations for Training autoEncoders Input :Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , Number of 2, learning rate η, minibatch size b 1, w0 ', 'paragraph_idx': 17, 'before_section': None, 'context_before': '5 6 end ', 'modified_lines': '//Select a random mini-batch of training points Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result by Hazan et al. 
(2015) that shows that GLMs with nice, differentiable link functions such as sigmoid (or even a combination of sigmoids such as φW2(·)), satisfy a property the authors name Strict Locally Quasi-Convexity (SLQC), which allows techniques such as SNGD to solve the GLM problems effectively. This is quite advantageous for us since it allows us to solve each sub-problem of the alternating setup efficiently. In a subsequent section, we will show that GLMs with non-differentiable activation – in particular, a generalized Rectified Linear Unit (ReLU) – can also satisfy the SLQC property, thus allowing us to extend the proposed alternating strategy, DANTE, to ReLU-based autoencoders too. We note that while we have developed this idea to train autoencoders in this work (since our approach relates closely to the greedy layer-wise training in autoencoders), DANTE can be used to train standard multi-layer neural networks too (discussed in Section 5). 3.2 METHODOLOGY We begin our presentation of the proposed method by briefly reviewing the Stochastic Normalized Gradient Descent (SNGD) method, which is used to execute the inner steps of DANTE. We explain in the next subsection, the rationale behind the choice of SNGD as the optimizer. We stress that although DANTE does use stochastic gradient-style methods internally (such as the SNGD algorithm), the overall strategy adopted by DANTE is not a descent-based strategy, rather an alternating-minimization strategy. Stochastic Normalized Gradient Descent (SNGD): Normalized Gradient Descent (NGD) is an adaptation of traditional Gradient Descent where the updates in each iteration are purely based on the direction of the gradients, while ignoring their magnitudes. This is achieved by normalizing the gradients. SNGD is the stochastic version of NGD, where weight updates are performed using individual (randomly chosen) training samples, instead of the complete set of samples. Mini-batch SNGD generalizes this by applying updates to the parameters at the end of every mini-batch of samples, as does mini-batch Stochastic Gradient Descent (SGD). In the remainder of this paper, we refer to mini-batch SNGD as SNGD itself, as is common for SGD. Algorithm 1 describes the SNGD methodology for a generic GLM problem. DANTE: Given this background, Algorithm 2 outlines the proposed method, DANTE. Consider the autoencoder problem below for a single hidden layer network: min W f (W1, W2) = Ex∼D(cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)2 2 Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a GLM problem: min W Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 2, where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, 4 Under review as a conference paper at ICLR 2018 iterations for SNGD TSN GD, initial values w0 ', 'original_lines': 'iterations for each SNGD TSN GD, initial values w0 ', 'after_paragraph_idx': 17, 'before_paragraph_idx': None}, {'section': '4 56 end', 'after_section': None, 'context_after': 'min W 2, 3.3 RATIONALE To describe the motivation for our alternating strategy in DANTE, we first define key terms and introduced in Hazan et al. (2015)) and show that under certain realizability conditions, empirical objective functions induced by Generalized Linear Models (GLMs) are locally quasi-convex. 
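A minimal sketch of the SNGD loop of Algorithm 1 for a squared-error GLM; hyperparameter values and helper names are placeholders, not the paper's settings:

    import numpy as np

    def sngd(X, y, phi, phi_grad, T=1000, eta=0.1, b=64, seed=0):
        # Minimize (1/b) * sum_i (y_i - phi(<w, x_i>))^2 over mini-batches,
        # updating with the *normalized* gradient: w <- w - eta * g / ||g||,
        # so only the direction of the gradient is used, not its magnitude.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(T):
            idx = rng.integers(0, n, size=b)        # mini-batch ~ Unif(S)
            Xb, yb = X[idx], y[idx]
            z = Xb @ w
            r = phi(z) - yb                          # residuals
            g = 2.0 * Xb.T @ (r * phi_grad(z)) / b   # mini-batch gradient
            norm = np.linalg.norm(g)
            if norm > 0:
                w = w - eta * g / norm               # direction-only update
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((2000, 5))
        w_star = rng.standard_normal(5)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        y = sigmoid(X @ w_star)                      # idealized (noise-free) GLM
        w = sngd(X, y, sigmoid, lambda z: sigmoid(z) * (1.0 - sigmoid(z)))
        # The fixed step size leaves an O(eta)-neighborhood of w_star.
        print("||w - w*||:", np.linalg.norm(w - w_star))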
To this end, we introduce a new activation function, the generalized ReLU, and show that the GLM with the ', 'paragraph_idx': 20, 'before_section': None, 'context_before': 'Output :w1, w2 ', 'modified_lines': 'fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 where φW2 (cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). This is once again solved by mini-batch SNGD, as before. results that are essential to our work. We present the notion of a locally quasi-convex function (as ', 'original_lines': '3.2 METHODOLOGY We begin our presentation of the proposed method by briefly reviewing the Stochastic Normalized Gradient Descent (SNGD) method, which is used in each step of our alternating strategy. (We explain in the next section, the rationale for the choice of SNGD). Stochastic Normalized Gradient Descent (SNGD): Normalized Gradient Descent (NGD) is an adaptation of traditional Gradient Descent where the updates in each iteration are purely based on the direction of the gradients, and not the gradients themselves (achieved by normalizing the gradients). SNGD is the stochastic version of NGD, where weight updates are performed at the end of every training sample, instead of the complete set of samples. Mini-batch SNGD updates the parameters at the end of every mini-batch of samples, as in mini-batch Stochastic Gradient Descent (SGD). In the remainder of this paper, we refer to mini-batch SNGD as SNGD itself, as is common for SGD. Algorithm 1 describes the SNGD methodology for a generic GLM problem. DANTE: Given this background, Algorithm 2 outlines the proposed method, DANTE. Consider the autoencoder problem below: f (W1, W2) = Ex∼D(cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)2 2 Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a GLM problem: min W Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, upon fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: where v = φ2(cid:104)W2, x(cid:105). This is once again solved by mini-batch SNGD, as before. min W Ex∼D(cid:107)φ1(cid:104)W1, v(cid:105) − x(cid:107)2 2, 4 Under review as a conference paper at ICLR 2018 results that are essential for our work. 
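For concreteness, a compact toy version of the single-layer alternation in Algorithm 2, with plain normalized-gradient steps standing in for the full SNGD inner solver; the slope parameters and step sizes are illustrative assumptions:

    import numpy as np

    def relu(z, a=0.01, b=1.0):
        return np.where(z >= 0.0, b * z, a * z)

    def relu_grad(z, a=0.01, b=1.0):
        return np.where(z >= 0.0, b, a)

    def normalized_step(W, G, eta=0.05):
        n = np.linalg.norm(G)
        return W if n == 0 else W - eta * G / n

    def dante_epoch(X, W1, W2, inner_steps=20):
        # One alternation: fix W1 and take SNGD-style steps on W2 (a GLM in
        # W2), then fix W2 and take SNGD-style steps on W1 (a GLM in W1).
        for _ in range(inner_steps):
            Z = relu(X @ W1)                       # hidden codes (W1 fixed)
            R = relu(Z @ W2) - X                   # reconstruction residual
            G2 = Z.T @ (R * relu_grad(Z @ W2))     # grad direction w.r.t. W2
            W2 = normalized_step(W2, G2)
        for _ in range(inner_steps):
            H = X @ W1
            Z = relu(H)
            R = relu(Z @ W2) - X                   # W2 fixed in this loop
            D = (R * relu_grad(Z @ W2)) @ W2.T * relu_grad(H)
            W1 = normalized_step(W1, X.T @ D)      # grad direction w.r.t. W1
        return W1, W2

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((512, 16))
        W1 = 0.1 * rng.standard_normal((16, 8))
        W2 = 0.1 * rng.standard_normal((8, 16))
        for _ in range(50):
            W1, W2 = dante_epoch(X, W1, W2)
        print("loss:", np.mean((relu(relu(X @ W1) @ W2) - X) ** 2))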
We present the notion of a locally quasi-convex function (as ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': '{(xi, yi)}m samples (cid:16) exp(2(cid:107)w∗(cid:107)) (cid:15)2 ', 'paragraph_idx': 23, 'before_section': '1 ← arg min', 'context_before': 'ˆerr(w) = E(x,y)∼D (yi − φ((cid:104)w∗, xi(cid:105)))2 ', 'modified_lines': '(Hazan et al., 2015, Lemma 3.2) show that if we draw m ≥ Ω 1 − δ, the empirical error function i=1 from a GLM with the sigmoid activation function, then with probability at least ', 'original_lines': '(Hazan et al., 2015, Lemma 3.2) showed that if we draw m ≥ Ω the empirical error function i=1 from a GLM with the sigmoid activation function, then with probability at least 1 − δ, ', 'after_paragraph_idx': 23, 'before_paragraph_idx': 23}, {'section': 'Abstract', 'after_section': None, 'context_after': 'm (cid:88) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '1 m ', 'modified_lines': '', 'original_lines': 'is ((cid:15), e(cid:107)w∗(cid:107)2 , w∗)-SLQC in w. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Proof. Consider ||w|| ≤ W such that ˆerrm(w) = 1 i=1(yi − φ(cid:104)w, xi(cid:105))2 ≥ (cid:15), where m is the m ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(cid:15), 2b2W a ', 'modified_lines': '', 'original_lines': '5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': '3.1 PROBLEM FORMULATION', 'context_after': 'a ', 'paragraph_idx': 11, 'before_section': None, 'context_before': '(cid:15)2 ', 'modified_lines': 'Note that by performing each DANTE alternation with a freshly sampled batch of data points, we can easily establish SLQC properties of the problem, and as a consequence, harness the convergence guarantees provided by Theorem 3.6. Thus, given a single-layer autoencoder with either a sigmoid activation function or a generalized ReLU activation function, DANTE uses SNGD to solve a SLQC problem in each alternating step, which converges to the optima solution in each step with very high probability, as shown above in Theorem 3.6. Note that this allows us to cover a large family of activation functions, since the tanh activation is simply a rescaled sigmoid and the generalized ReLU includes many variants of ReLUs More importantly, note that the convergence rate of SNGD depends crucially on the κ parameter. Whereas the GLM error function with sigmoid activation has κ = eW Hazan et al. (2015), we obtain κ = 2b2W (i.e. linear in W ) for the generalized ReLU setting, an exponential improvement! This is significant as in Theorem 3.6, the number of iterations T depends on κ2. This shows that SNGD offers accelerated convergence with generalized ReLU GLMs than sigmoid GLMs. ', 'original_lines': 'Thus, given a single-layer autoencoder with either a sigmoid activation function or a generalized ReLU activation function (since tanh is a rescaled sigmoid and the generalized ReLU includes many variants of ReLUs, this covers a large family of activation functions), DANTE uses SNGD to solve a SLQC problem in each alternating step, which converges to the optima solution in each step with 6 Under review as a conference paper at ICLR 2018 (a) Phase - 1 (b) Phase - 2 Figure 1: An illustration of the proposed multi-layer DANTE (best viewed in color). 
In each training phase, the outer pairs of weights (shaded in gold) are treated as a single-layer autoencoder to be trained using single-layer DANTE, followed by the inner single-layer auroencoder (shaded in black). These two phases are followed by a finetuning process that may be empirically determined, similar to standard deep autoencoder training. Algorithm 3: DANTE for a multi-layer autoencoder Input :Encoder e with weights U, Decoder d with weights V, Number of hidden layers 2n − 1, Learning rates η, Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , initial values U0, V0, minibatch size b 1 t := 1 2 for l = 1 to n do while |f (Ut, Vt) − f (Ut−1, Vt−1)| ≥ (cid:15) or t < TAM do //Use SNGD for minimizations ut l ← arg min u (cid:16) Ex∼D d(e(x, Ut [1:l−1] · u · Ut−1 [l+1:n−1]), Vt [1:l−1] · Vt−1 [l:n−1]) − x (cid:17)2 vt n−l ← arg min v (cid:16) Ex∼D t := t + 1 d(e(x, Ut [1:l] · Ut−1 [l+1:n−1]), Vt [1:n−l−1] · v · Vt−1 [n−l+1:n−1]) − x (cid:17)2 3 4 5 end 6 7 end Output :U, V very high probability, as shown above in Theorem 3.6. Furthermore, while the GLM error function with sigmoid activation has κ = eW Hazan et al. (2015), we obtain κ = 2b2W (linear in W ) for the generalized ReLU setting. This is significant as in Theorem 3.6, the number of iterations T is lower-bounded by an expression with κ in the numerator. ', 'after_paragraph_idx': 11, 'before_paragraph_idx': None}, {'section': '5 end', 'after_section': '5 end', 'context_after': '1http://yann.lecun.com/exdb/mnist/ ', 'paragraph_idx': 37, 'before_section': '5 end', 'context_before': 'Figure 2b. The results for ReLU showed an improvement, and DANTE was consistently superior to SGD across the iterations. ', 'modified_lines': 'In Figure 3, we also show the reconstructions obtained by both trained models (DANTE and SGD) for the autoencoder with the Generalized ReLU activation. The model trained using DANTE shows ', 'original_lines': 'In Figure 4, we also show the best reconstructions - both qualitatively and quantitatively - obtained by both trained models (DANTE and SGD) for the autoencoder with the leaky ReLU activation. The model trained using DANTE shows qualitatively better reconstructions, when compared to reconstructions obtained using a model trained by SGD under the same settings. We also conducted experiments to study the effectiveness of the feature representations learned using the models trained using DANTE and SGD in the same setting. After training, we passed the dataset through the autoencoder, extracted the hidden layer representations, and then trained a linear SVM. The ', 'after_paragraph_idx': 38, 'before_paragraph_idx': 36}, {'section': '5 end', 'after_section': None, 'context_after': 'MNIST ionosphere svmguide4 USPS SGD DANTE 92.45% 96.22% Table 1: Classification accuracies using ReLU autoencoder features on different datasets Varying Number of Hidden Neurons: Given the decomposable nature of the proposed solution to learning autoencoders, we also studied the effect of varying hyperparameters across the layers, in particular, the number of hidden neurons in a single-layer autoencoder. The results of these (a) Architecture: 1024->500->500->1024 (b) Architecture: 1024->750->500->750->1024 with generalized (leaky) ReLU activations for both DANTE and SGD. 
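A schematic sketch of the outer-to-inner schedule of Algorithm 3 and Figure 1, reduced to a greedy layer-wise variant for readability (the greedy simplification is our assumption, not a verbatim implementation); train_single_layer stands for any single-layer routine, e.g. the dante_epoch sketch above wrapped in a loop:

    import numpy as np

    def train_multilayer_dante(X, hidden_sizes, train_single_layer):
        # Train the outermost encoder/decoder pair as a single-layer
        # autoencoder on X, then train the next pair on the codes produced
        # by the (now fixed) outer encoder, and so on inwards.
        # train_single_layer(X, d_hidden) -> (W_enc, W_dec).
        encoders, decoders, H = [], [], X
        for d_hidden in hidden_sizes:
            W_enc, W_dec = train_single_layer(H, d_hidden)
            encoders.append(W_enc)
            decoders.insert(0, W_dec)          # decoders mirror the encoders
            Hp = H @ W_enc
            H = np.where(Hp >= 0.0, Hp, 0.01 * Hp)  # codes for the inner pair
        return encoders, decoders   # optionally finetune end-to-end afterwards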
Multi-Layer Autoencoder: We also studied the performance of the proposed multi-layer DANTE single-layer autoencoders, each with the generalized (leaky) ReLU activation (note that a two single- layer autoencoder corresponds to 4 layers in the overall network, as mentioned in the architecture on In this work, we presented a novel methodology, Deep AlterNations for Training autoEncoders Under review as a conference paper at ICLR 2018 2a , w∗(cid:17) (cid:15), b2W (cid:16) Under review as a conference paper at ICLR 2018 ', 'paragraph_idx': 37, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2018 ', 'modified_lines': ' (a) Single-layer autoencoder with Sigmoid activation (b) Single-layer autoencoder with Generalized ReLU activation Figure 2: Plots of training and test errors vs training iterations for single hidden-layer autoencoder with Sigmoid (left) and Generalized ReLU (right) activations for both DANTE and SGD. 93.6% 92.44% 87.65% 70.37% 90.43% 89.49% Figure 3: Reconstructions using the autoencoder models with ReLU activation. Top Row: Orig- inal Images; Middle Row:Model trained using DANTE; Bottom: Model trained using SGD. comparable performance as a model trained by SGD under the same settings, in this case. We also conducted experiments to study the effectiveness of the feature representations learned using the models trained using DANTE and SGD in the same setting. After training, we passed the dataset through the autoencoder, extracted the hidden layer representations, and then trained a linear SVM. The classification accuracy results using the hidden representations are given in Table 1. The table clearly shows the superior performance of DANTE on this task. Experiments on other datasets: We also studied the performance of proposed method on other standard datasets 2, viz. Ionosphere (34 dimensions, 351 datapoints), SVMGuide4 (10 dimensions, 300 datapoints), and USPS (256 dimensions, 7291 datapoints). Figure 4 and Table 1 show the performance of the proposed method vs SGD on the abovementioned datasets. It can be clearly seen that the proposed DANTE method demonstrates superior generalization performance on most datasets. 2https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ (a) ionosphere dataset (b) svmguide4 dataset (c) USPS dataset Figure 4: Comparison of proposed DANTE vs SGD on other datasets from the UCI repository. The x-axis on all sub-figures is number of mini-batch iterations and y-axis denotes test error, which shows the generalization performance. (Best viewed in color; DANTE = purple, SGD = green) 9 Under review as a conference paper at ICLR 2018 (a) 200 hidden neurons (b) 300 hidden neurons (c) 400 hidden neurons (d) 600 hidden neurons Figure 5: Plots of training and test error vs training iterations on a single-layer autoencoder with generalized ReLU activation, with a varying number of nodes in the hidden layer. experiments are shown in Figure 5. The plots show that when the number of hidden neurons is low, DANTE reaches its minumum value much sooner (considering this is a subgradient method, one can always choose the best iterate over training) than SGD, although SGD finds a slightly better solution. However, when the number of hidden neurons increases, DANTE starts getting consistently better. This can be attributed to the fact that the subproblem is relatively more challenging for an alternating optimization setting when the number of hidden neurons is lesser. 
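A sketch of the linear-SVM evaluation protocol behind Table 1, assuming scikit-learn is available; H denotes the hidden-layer representations extracted from a trained autoencoder and y the class labels:

    from sklearn.svm import LinearSVC
    from sklearn.model_selection import train_test_split

    def eval_features(H, y, seed=0):
        # A linear SVM trained on the learned representations measures how
        # linearly separable the features are, mirroring the Table 1 setup.
        H_tr, H_te, y_tr, y_te = train_test_split(
            H, y, test_size=0.2, random_state=seed)
        clf = LinearSVC().fit(H_tr, y_tr)
        return clf.score(H_te, y_te)   # classification accuracy on held-out data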
Figure 6: Plots of (a) training error and (b) test error vs training iterations for multi-layer autoencoders method (Algorithm 3) for the MNIST dataset. Figure 6 shows the results obtained by stacking two the figure). Evidently, the figure shows promising performance for DANTE in this experiment. Note that Figure 6b shows two spikes: one when the training for the next pair of layers in the autoencoder begins, and another when the end-to-end finetuning process is done. This is not present in Figure 6a, since the 500 → 500 layer in between is only randomly initialized, and is not trained using DANTE or SGD. 5 CONCLUSIONS AND FUTURE WORK (DANTE), to efficiently train autoencoders using alternating minimization, thus providing an effective alternative to backpropagation. We formulate training each layer of an autoencoder as a Generalized Linear Model (GLM) problem with an activation function, and leverage recent results to use Stochastic 10 Normalized Gradient Descent (SNGD) to train each step of the autoencoder. While recent work showed that a GLM with a sigmoid activation function is SLQC (which SNGD solves with a provable convergence bound), we introduced a new generalized ReLU activation function, and showed that a GLM with this activation function is also SLQC, thus allowing us to expand the applicability of the proposed method to autoencoders with both sigmoid and ReLU family of activation functions. In particular, we extended the definitions of local quasi-convexity to use subgradients in order to − SLQC, which improves prove that the GLM with generalized ReLU activation is the convergence bound for the corresponding GLM with the generalized ReLU (as compared to a GLM with sigmoid). We also showed how DANTE can be extended to train multi-layer autoencoders. We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it performs comparably or better than the standard SGD approach to train autoencoders. DANTE can not only be used to train autoencoders, but can be extended to train standard multi-layer neural networks too. One could use DANTE to train a neural network layer-wise in a round robin fashion, and then finetune end-to-end using SGD. In case of autoencoders with tied weights, one would ideally use DANTE to learn the weights of the required layers, and then finetune end-to-end using a method such as SGD. Our future work will involve a more careful study of the proposed method for deeper autoencoders, as well as in theoretically analyzing the end-to-end convergence of the method for deep multi-layer autoencoders. 11 ', 'original_lines': 'satimage 88.65% 89.71% 16.50 % 15.70 % 70.37% 87.65% 89.49% 90.43 % Figure 4: Reconstructions using the autoencoder models with ReLU activation. Top: Model trained using SGD; Bottom: Model trained us- ing DANTE. classification accuracy results using the hidden representations on various datasets are given in Table 1. experiments are shown in Figure 6. The plots firstly show that DANTE performs competitively when compared to SGD. When the number of hidden neurons is relatively lower, both DANTE and SGD show comparable performance (SGD takes longer to reach the same solution, but finds a slightly better solution at the end); however, when the number of hidden neurons increases, DANTE is consistently better. 
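The round-robin idea mentioned in this conclusion admits a short schematic sketch; update_layer is a hypothetical stand-in for one block of SNGD steps on a single layer with the others held fixed:

    def round_robin_train(layers, data_stream, update_layer, rounds=10):
        # Visit the layers cyclically, refreshing one layer at a time while
        # the rest stay fixed; follow with standard end-to-end SGD finetuning.
        for _ in range(rounds):
            for i in range(len(layers)):
                batch = next(data_stream)
                layers[i] = update_layer(layers, i, batch)
        return layers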
This can be attributed to the fact that the subproblem is relatively more challenging for an alternating optimization setting when the number of hidden neurons is lesser. Experiments on other datasets: We also studied the performance of proposed method on other standard datasets 2, viz. Ionosphere (34 dimensions, 351 datapoints), Satimage (36 dimensions, 4435 datapoints), SVMGuide4 (10 dimensions, 300 datapoints), and USPS (256 dimensions, 7291 datapoints). Figure 3 shows the performance of the proposed method vs SGD on the above mentioned datasets. It can be clearly seen that the proposed DANTE method demonstrates superior generalization performance on most datasets. Figure 5: Plots of (a) training error and (b) test error vs training iterations for multi-layer autoencoders method (Algorithm 3) for the MNIST dataset. Figure 5 shows the results obtained by stacking two the figure). Evidently, the figure shows promising performance for DANTE in this experiment too. 5 DISCUSSIONS AND CONCLUSION (DANTE), to efficiently train autoencoders using a non-backpropagation method, in particular, using alternating minimization. The proposed method formulates training an autoencoder as a set of 2https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ 9 (a) 50 hidden neurons (b) 200 hidden neurons (c) 300 hidden neurons (d) 400 hidden neurons (e) 500 hidden neurons (f) 600 hidden neurons Figure 6: Plots of training (left) and test (right) error vs training iterations on a single-layer au- toencoder with generalized (leaky) ReLU activation, with a varying number of nodes in the hidden layer. Generalized Linear Model (GLM) problems, which allows us to use an efficient solver proposed in Hazan et al. (2015) in an alternating fashion to train deep autoencoders. While the earlier work was proposed only for GLMs with sigmoidal activation functions, we propose an extension of local- quasi-convexity to a wider class of activation functions - in particular, the ReLU. We introduce a generalized ReLU activation which allows the Stochastic Normalized Gradient Descent method to have provably faster convergence, when compared to a sigmoidal activation. We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it performs comparably or better than the standard SGD approach to train autoencoders. Our future work will involve a more careful study of the proposed method for deeper autoencoders, as well as in variations of the autoencoder such as the Convolutional Autoencoder Masci et al. (2011). 6 CONCLUSION In this work, we have extended the definitions of quasi-convexity to use subgradients in order to − SLQC, enable us to prove that the GLM with generalized ReLU activation is mathematically validating that ReLU is far superior to Sigmoid as an activation function. Additionally, we have proposed a new algorithm for training autoencoders involving alternating minimization techniques and Stochastic Normal Gradient Descent. We showed experimentally, on using hyperparameters suggested theoretically, our proposed algorithm drastically outperforms traditional methods, with the case of ReLUs showing immense improvement. In this process of experimental validation, we have demonstrated the sensitivity of our algorithm to learning rate and batch size. 10 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Yurii E. Nesterov. 
Minimization methods for nonsmooth convex and quasiconvex functions. Matekon, 29:519–531, 1984. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. ', 'modified_lines': '', 'original_lines': 'Jonathan Masci, Ueli Meier, Dan Cireşan, and Jürgen Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. Artificial Neural Networks and Machine Learning – ICANN 2011, pp. 52–59, 2011. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
|
2017-10-27 20:17:07
|
ICLR.cc/2018/Conference
|
B1jrEMbCZ
|
B1Zl9fZRZ
|
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'From another perspective, there has been marked progress in recent years in the area of non-convex optimization (beyond deep learning), which has resulted in scalable methods such as iterated hard solutions. In this work, we investigate a non-backpropagation strategy to train neural networks, leveraging recent advances in quasi-convex optimization. Our method is called DANTE (Deep AlterNations for Training autoEncoders), and it offers an alternating minimization-based technique ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder- decoder networks and generative adversarial models, the reliance on backpropagation methods ', 'modified_lines': 'remains. While reinforcement learning methods are becoming increasingly popular, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD-based techniques that are inherently serial (Taylor et al. (2016)), vanishing gradients, especially for certain activation functions (Hochreiter & Schmidhuber (1997)), convergence of stochastic techniques to local optima (Anandkumar & Ge (2016)), and many more. For a well-referenced critique of gradient-based methods, we point the reader to Taylor et al. (2016). thresholding (Blumensath & Davies (2009)) and alternating minimization (Jain et al. (2013)) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate ', 'original_lines': 'remains. While reinforcement learning methods are becoming increasingly popular, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD- based techniques that are inherently serial Taylor et al. (2016), vanishing gradients, especially for certain activation functions Hochreiter & Schmidhuber (1997), convergence of stochastic techniques to local optima Anandkumar & Ge (2016), and many more. For a well-referenced critique of gradient-based methods, we point the reader to Taylor et al. (2016). thresholding Blumensath & Davies (2009) and alternating minimization Jain et al. (2013) as methods of choice for solving large-scale sparse recovery, matrix completion, and tensor factorization tasks. Several of these methods not only scale well to large problems, but also offer provably accurate ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'reduces the problem of training the layers to quasi-convex optimization problems. DANTE for networks with sigmoidal activation functions. However, a limitation of SNGD • To overcome this limitation, we introduce the generalized ReLU, a variant of the popular • We also show DANTE can be easily extended to train deep AEs with multiple hidden layers. 
• We empirically validate DANTE with both the generalized ReLU and sigmoid activations classification performance (with the learned representations), when compared to an identical 2 RELATED WORK for training a variety of neural networks including multi-layer perceptrons, convolutional neural networks, autoencoders, recurrent networks and the like. Recent years have seen the development of other methods, predominantly based on least-squares approaches, used to train neural networks. method to train a neural network. In particular, they introduced the Method of Auxiliary Constraints proposed an Expectation-Maximization (EM) approach derived from a hierarchical generative model called the Deep Rendering Model (DRM), and also used least-squared parameter updates in each of the EM steps. They showed that forward propagation in a convolutional neural network was ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'DANTE is based on a simple but useful observation that the problem of training a single hidden-layer autoencoder can be cast as a bi-quasiconvex optimization problem (described in Section 3.1). This ', 'modified_lines': 'observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. DANTE then uses efficient solvers for quasi- convex problems including normalized gradient descent (Nesterov (1984)) and stochastic normalized gradient descent (Hazan et al. (2015)) to train autoencoder networks. The key contributions of this work are summarized below: • We show that viewing each layer of a neural network as applying an ensemble of generalized linear transformations, allows the problem of training the network to be cast as a bi-quasi- convex optimization problem (exact statement later). • We exploit this intuition by employing an alternating minimization strategy, DANTE, that • We utilize the state-of-the-art Stochastic Normalized Gradient Descent (SNGD) technique (Hazan et al. (2015)) for quasi-convex optimization to provide an efficient implementation of is its inability to handle non-differentiable link functions such as the ReLU. ReLU activation function and show how SNGD may be applied with the generalized ReLU function. This presents an augmentation in the state-of-the-art in quasi-convex optimization and may be of independent interest. This allows DANTE to train AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that SNGD offers provably more rapid convergence with the generalized ReLU function than it does even for the sigmoidal activation. This is corroborated in experiments as well. A key advantage of our approach is that these theoretical results can be used to set learning rates and batch sizes without finetuning/cross-validation. and establish that DANTE provides comparable or better test errors, reconstructions and network trained using standard mini-batch SGD-based backpropagation. Backpropagation-based techniques date back to the early days of neural network research (Rumelhart et al. (1986); Chauvin & Rumelhart (1995)) but remain to this day, the most commonly used methods Carreira-Perpinan and Wang (Carreira-Perpinan & Wang (2014)) proposed a least-squares based (MAC), and used quadratic penalties to enforce equality constraints. Patel et al. (Patel et al. 
(2015)) ', 'original_lines': 'observation allows us to use an alternating optimization strategy to train the autoencoder, where each step involves relatively simple quasi-convex problems. DANTE then uses efficient solvers for quasi-convex problems including normalized gradient descent Nesterov (1984) and stochastic normalized gradient descent Hazan et al. (2015) to train autoencoder networks. The key contributions of this work are summarized below: • We show that viewing each layer of an a neural network as applying an ensemble of generalized linear transformations, allows the problem of training the network to be cast as a bi-quasi-convex optimization problem (exact statement later). • We exploit this intuition by employing an alternating minimization strategy DANTE that • We utilize state-of-the-art Stochastic Normalized Gradient Descent (SNGD) technique Hazan et al. (2015) for quasi-convex optimization to provide an efficient implementation of is its inability to handle link non-differentiable functions such as the ReLU. ReLU activation function and show how the SNGD technique can be applied with the generalized ReLU function. This presents an augmentation in the state of the art in quasi- convex optimization and may be of independent interest. This allows DANTE to train AEs with both differentiable and non-differentiable activation functions, including ReLUs and sigmoid. • We show that the SNGD method offers provably more rapid convergence with the general- ized ReLU function than it does even for the sigmoidal activation. This is corroborated in experiments as well. A key advantage of our approach is the ability to exploit these theo- retical results to set learning rates and batch sizes without any fine tuning/cross-validation required. and establish that DANTE does provide comparable or better test errors, reconstructions and network train using standard used mini-batch SGD-based backpropagation. Backpropagation-based techniques date back to the early days of neural network research Rumelhart et al. (1986); Chauvin & Rumelhart (1995) but remain to this day, the most commonly used methods Carreira-Perpinan and Wang Carreira-Perpinan & Wang (2014) proposed a least-squares based (MAC), and used quadratic penalties to enforce equality constraints. Patel et al. Patel et al. (2015) ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'of this method, however, was on scaling the training of neural networks to a distributed setting on multiple cores across a computing cluster. Jaderberg also proposed the idea of ’synthetic gradients’ in Jaderberg et al. (2016). While this approach is interesting, this work is more focused towards a more ', 'paragraph_idx': 8, 'before_section': '2 RELATED WORK', 'context_before': 'available implementations or even published training results to compare against. More recently, Taylor et al. proposed a method to train neural networks using the Alternating ', 'modified_lines': 'Direction Method of Multipliers (ADMM) and Bregman iterations (Taylor et al. (2016)). The focus ', 'original_lines': 'Direction Method of Multipliers (ADMM) and Bregman iterations Taylor et al. (2016). 
The focus ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 7}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': '3 ', 'paragraph_idx': 16, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': 'Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 2. ', 'modified_lines': 'Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result by Hazan ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 15}, {'section': 'Abstract', 'after_section': None, 'context_after': 'et al. (2015) that shows that GLMs with nice, differentiable link functions such as sigmoid (or even a combination of sigmoids such as φW2(·)), satisfy a property the authors name Strict Locally Quasi-Convexity (SLQC), which allows techniques such as SNGD to solve the GLM problems ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Output :Model given by wT ', 'modified_lines': '', 'original_lines': 'Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result by Hazan ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 56 end', 'after_section': None, 'context_after': '4 ', 'paragraph_idx': 20, 'before_section': '4 56 end', 'context_before': 'where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, ', 'modified_lines': 'fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: min W Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 2, where φW2 (cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). This is once again solved by mini-batch SNGD, as before. ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': 'Abstract', 'after_section': None, 'context_after': '3.3 RATIONALE ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '6 end Output :w1, w2 ', 'modified_lines': '', 'original_lines': ' fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: min W Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 2, where φW2 (cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105). This is once again solved by mini-batch SNGD, as before. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': 'ˆerr(w) = E(x,y)∼D (yi − φ((cid:104)w∗, xi(cid:105)))2 ', 'paragraph_idx': 24, 'before_section': '1 ← arg min', 'context_before': '(yi − φ((cid:104)w∗, xi(cid:105)))2 Similarly, a noisy GLM is defined by the existence of a w∗ ∈ Rd, which is the global minimizer of ', 'modified_lines': 'the error function: ', 'original_lines': 'the error function: E(x,y)∼D(y|x) = φ((cid:104)w∗, x(cid:105)). the expected error is then: ', 'after_paragraph_idx': 24, 'before_paragraph_idx': 24}, {'section': 'Abstract', 'after_section': None, 'context_after': 'is ((cid:15), e(cid:107)w∗(cid:107)2 , w∗)-SLQC in w. 
However, this is a bit restrictive since proof of this result critically uses properties of the sigmoid that are not satisfied by other popular activation functions such as the ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'i=1 (yi − φ(cid:104)w, xi(cid:105))2 ', 'modified_lines': '', 'original_lines': ' 5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'in Step 5, we simply use the given bounds on the variables xi, w, w∗ due to the setup of the problem (w ∈ Bd(0, W ), and xi ∈ Bd, the unit d-dimensional ball, as in Defn 2.6). We also prove a similar result for the Noisy GLM below. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'In the above proof, we first use the fact (in Step 1) that in the GLM, there is some w∗ such that φ(cid:104)w∗, xi(cid:105) = yi. Then, we use the fact (in Steps 2 and 4) that the generalized ReLU function is b-Lipschitz, and the fact that the minimum value of the quasigradient of g is a (Step 3). Subsequently, ', 'modified_lines': '', 'original_lines': ' 6 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '1http://yann.lecun.com/exdb/mnist/ 8 Under review as a conference paper at ICLR 2018 MNIST ionosphere ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Figure 2b. The results for ReLU showed an improvement, and DANTE was consistently superior to SGD across the iterations. ', 'modified_lines': '', 'original_lines': 'In Figure 3, we also show the reconstructions obtained by both trained models (DANTE and SGD) for the autoencoder with the Generalized ReLU activation. The model trained using DANTE shows (a) Single-layer autoencoder with Sigmoid activation (b) Single-layer autoencoder with Generalized ReLU activation Figure 2: Plots of training and test errors vs training iterations for single hidden-layer autoencoder with Sigmoid (left) and Generalized ReLU (right) activations for both DANTE and SGD. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS AND RESULTS', 'after_section': '4 EXPERIMENTS AND RESULTS', 'context_after': 'comparable performance as a model trained by SGD under the same settings, in this case. We also conducted experiments to study the effectiveness of the feature representations learned using the models trained using DANTE and SGD in the same setting. After training, we passed the dataset ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'ReLU autoencoder features on different datasets ', 'modified_lines': '(a) ionosphere dataset (b) svmguide4 dataset (c) USPS dataset Figure 4: Comparison of proposed DANTE vs SGD on other datasets from the UCI repository. The x-axis on all sub-figures is number of mini-batch iterations and y-axis denotes test error, which shows the generalization performance. (Best viewed in color; DANTE = purple, SGD = green) In Figure 3, we also show the reconstructions obtained by both trained models (DANTE and SGD) for the autoencoder with the Generalized ReLU activation. 
The model trained using DANTE shows ', 'original_lines': '', 'after_paragraph_idx': 41, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Varying Number of Hidden Neurons: Given the decomposable nature of the proposed solution to learning autoencoders, we also studied the effect of varying hyperparameters across the layers, in particular, the number of hidden neurons in a single-layer autoencoder. The results of these experiments are shown in Figure 5. The plots show that when the number of hidden neurons is low, DANTE reaches its minumum value much sooner (considering this is a subgradient method, one can ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Experiments on other datasets: We also studied the performance of proposed method on other standard datasets 2, viz. Ionosphere (34 dimensions, 351 datapoints), SVMGuide4 (10 dimensions, 300 datapoints), and USPS (256 dimensions, 7291 datapoints). Figure 4 and Table 1 show the ', 'modified_lines': 'performance of the proposed method vs SGD on the abovementioned datasets. It can be seen that DANTE demonstrates superior generalization performance on most datasets. ', 'original_lines': 'performance of the proposed method vs SGD on the abovementioned datasets. It can be clearly seen that the proposed DANTE method demonstrates superior generalization performance on most datasets. 2https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ (a) ionosphere dataset (b) svmguide4 dataset (c) USPS dataset Figure 4: Comparison of proposed DANTE vs SGD on other datasets from the UCI repository. The x-axis on all sub-figures is number of mini-batch iterations and y-axis denotes test error, which shows the generalization performance. (Best viewed in color; DANTE = purple, SGD = green) 9 Under review as a conference paper at ICLR 2018 (a) 200 hidden neurons (b) 300 hidden neurons (c) 400 hidden neurons (d) 600 hidden neurons Figure 5: Plots of training and test error vs training iterations on a single-layer autoencoder with generalized ReLU activation, with a varying number of nodes in the hidden layer. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'REFERENCES Animashree Anandkumar and Rong Ge. Efficient Approaches for Escaping Higher Order Saddle ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'method for deeper autoencoders, as well as in theoretically analyzing the end-to-end convergence of the method for deep multi-layer autoencoders. ', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}]
|
2017-10-27 20:41:13
|
ICLR.cc/2018/Conference
|
B1Zl9fZRZ
|
BylzizbCZ
|
[{'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': 'In order to go beyond the sigmoid activation function, we introduce a new generalized ReLU activation function. ', 'paragraph_idx': 24, 'before_section': '1 ← arg min', 'context_before': '(yi − φ(cid:104)w, xi(cid:105))2 ', 'modified_lines': 'is ((cid:15), e(cid:107)w∗(cid:107)2, w∗)-SLQC in w. However, this is a bit restrictive, since the proof of this result critically uses properties of the sigmoid function, which are not satisfied by other popular activation functions such as the ReLU. ', 'original_lines': 'is ((cid:15), e(cid:107)w∗(cid:107)2 , w∗)-SLQC in w. However, this is a bit restrictive since proof of this result critically uses properties of the sigmoid that are not satisfied by other popular activation functions such as the ReLU. ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 24}]
|
2017-10-27 20:46:00
|
ICLR.cc/2018/Conference
|
BylzizbCZ
|
Byq8jfZA-
|
[]
|
2017-10-27 20:47:14
|
ICLR.cc/2018/Conference
|
Byq8jfZA-
|
rkw6w3iMG
|
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'perform the alternations very effectively. DANTE effortlessly extends to networks with multiple hidden layers and varying network configurations. In experiments on standard datasets, autoencoders trained using the proposed method were found to ', 'modified_lines': 'be very promising and competitive to traditional backpropagation techniques, both in terms of quality of solution, as well as training speed. ', 'original_lines': 'be very promising when compared to those trained using traditional backpropaga- tion techniques, both in terms of training speed, as well as feature extraction and reconstruction performance. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'From another perspective, there has been marked progress in recent years in the area of non-convex optimization (beyond deep learning), which has resulted in scalable methods such as iterated hard ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'overshadowed the need to continue to look for options beyond backprogagation to train deep networks. Despite several advancements in deep learning with respect to novel architectures such as encoder- decoder networks and generative adversarial models, the reliance on backpropagation methods ', 'modified_lines': 'remains. While reinforcement learning methods are becoming increasingly popular, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD- based techniques that are inherently serial (Taylor et al. (2016)); vanishing gradients, especially for certain activation functions (Hochreiter & Schmidhuber (1997)); convergence of stochastic techniques to local optima (Anandkumar & Ge (2016)); and many more. For a well-referenced recent critique of gradient-based methods, we point the reader to Taylor et al. (2016). ', 'original_lines': 'remains. While reinforcement learning methods are becoming increasingly popular, their scope is limited to a particular family of settings such as agent-based systems or reward-based learning. Recent efforts have studied the limitations of SGD-based backpropagation, including parallelization of SGD-based techniques that are inherently serial (Taylor et al. (2016)), vanishing gradients, especially for certain activation functions (Hochreiter & Schmidhuber (1997)), convergence of stochastic techniques to local optima (Anandkumar & Ge (2016)), and many more. For a well-referenced critique of gradient-based methods, we point the reader to Taylor et al. (2016). ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '2 RELATED WORK ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': '• We also show DANTE can be easily extended to train deep AEs with multiple hidden layers. • We empirically validate DANTE with both the generalized ReLU and sigmoid activations ', 'modified_lines': 'and establish that DANTE provides competitive test errors, reconstructions and classification performance (with the learned representations), when compared to an identical network trained using standard mini-batch SGD-based backpropagation. 
', 'original_lines': 'and establish that DANTE provides comparable or better test errors, reconstructions and classification performance (with the learned representations), when compared to an identical network trained using standard mini-batch SGD-based backpropagation. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 6}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'More recently, Taylor et al. proposed a method to train neural networks using the Alternating Direction Method of Multipliers (ADMM) and Bregman iterations (Taylor et al. (2016)). The focus ', 'paragraph_idx': 7, 'before_section': '2 RELATED WORK', 'context_before': 'called the Deep Rendering Model (DRM), and also used least-squared parameter updates in each of the EM steps. They showed that forward propagation in a convolutional neural network was equivalent to the inference on their DRM. Unfortunately, neither of these methods has publicly ', 'modified_lines': 'available implementations or published training results to compare against. ', 'original_lines': 'available implementations or even published training results to compare against. ', 'after_paragraph_idx': 8, 'before_paragraph_idx': 7}, {'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'knowledge, this is the first such effort in using alternating principles to train neural networks with related performance guarantees. 3 DANTE: DEEP ALTERNATIONS FOR TRAINING AUTOENCODERS ', 'paragraph_idx': 9, 'before_section': '2 RELATED WORK', 'context_before': 'Jaderberg et al. (2016). While this approach is interesting, this work is more focused towards a more efficient way to carry out gradient-based parameter updates in a neural network. ', 'modified_lines': 'In our work, we focus on an entirely new approach to training neural networks – in particular, autoencoders – using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the a range of datasets. Although alternating minimization has found much appeal in areas such as matrix factorization (Jain et al. (2013)), to the best of our 2 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'However, in our work, we focus on an entirely new approach to training neural networks – in particular, autoencoders – using alternating optimization, quasi-convexity and SNGD, and show that this approach shows promising results on the a range of datasets. Although alternating minimization has found much appeal in areas such as matrix factorization Jain et al. (2013), to the best of our 2 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 8}, {'section': '3.1 PROBLEM FORMULATION', 'after_section': '3.1 PROBLEM FORMULATION', 'context_after': 'into it to compute its own activations as al = φl(cid:104)Wl, al−1(cid:105) ∈ Rnl , where φ(cid:104)., .(cid:105) denotes φ((cid:104)., .(cid:105)) for simplicity of notation. A multi-layer neural network is formed by nesting such layers to form a composite function f given as follows: ', 'paragraph_idx': 11, 'before_section': '3.1 PROBLEM FORMULATION', 'context_before': 'Consider a neural network with L layers. Each layer l ∈ {1, 2, . . . , L} has nl nodes and is character- ized by a linear operator Wl ∈ Rnl−1×nl and a non-linear activation function φl : Rnl → Rnl . The activations generated by the layer l are denoted by al ∈ Rnl . 
We denote by a0, the input activations ', 'modified_lines': 'and n0 to be the number of input activations i.e. a0 ∈ Rn0. Each layer uses activations being fed ', 'original_lines': 'and n0 to be the number of input activations i.e.a0 ∈ Rn0 . Each layer uses activations being fed ', 'after_paragraph_idx': 11, 'before_paragraph_idx': 11}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'min W2 ', 'paragraph_idx': 13, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': '(5) ', 'modified_lines': 'An important observation here is that if we fix W1, then Eqn (5) turns into a set of Generalized Linear Model problems with φ2 as the activation function, i.e. ', 'original_lines': 'An important observation here is that if we fix W1, then Eqn (5) turns into a Generalized Linear Model problem with φ2 as the activation function, i.e. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 13}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': None, 'context_after': 'min W1 Ex∼D(cid:107)φW2(cid:104)W1, x(cid:105) − x(cid:107)2 2. 3 ', 'paragraph_idx': 14, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': 'Ex∼D(cid:107)φ2(cid:104)W2, z(cid:105) − x(cid:107)2 2 ', 'modified_lines': 'where z = φ1(cid:104)W1, x(cid:105). We exploit this observation in this work. In particular, we leverage a recent result by Hazan et al. (2015) that shows that GLMs with nice, differentiable link functions such as sigmoid (or even a combination of sigmoids such as φW2 (·)), satisfy a property the authors name Strict Locally Quasi-Convexity (SLQC), which allows techniques such as SNGD to solve the GLM problems effectively. Similarly, fixing W2 turns the problem into yet another SLQC problem, this time with W1 as the parameter (note that φW2(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105)). ', 'original_lines': 'where z = φ1(cid:104)W1, x(cid:105). Similarly, fixing W2 turns the problem into yet another Generalized Linear Model problem, this time with W1 as the parameter (note that φW2(cid:104)·(cid:105) = φ2(cid:104)W2, φ1(cid:104)·(cid:105)(cid:105)). Thus, a single-layer autoencoder is a combination of two Generalized Linear Model (GLM) problems, and we exploit this key observation in this work. In particular, we leverage a recent result by Hazan ', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'after_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_after': 'i=1(yi − φ(cid:104)w, xi(cid:105))2 3 ', 'paragraph_idx': 17, 'before_section': '2 = (cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)22', 'context_before': '(cid:107)gt(cid:107) wt+1 = wt − η · ˆgt ', 'modified_lines': 'i=1 ∼ Uniform(S) ', 'original_lines': 'i=1 ∼ Unif(S) ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': 'Abstract', 'after_section': None, 'context_after': 'This is quite advantageous for us since it allows us to solve each sub-problem of the alternating setup efficiently. 
In a subsequent section, we will show that GLMs with non-differentiable activation ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '//Select a random mini-batch of training points Output :Model given by wT ', 'modified_lines': '', 'original_lines': ' et al. (2015) that shows that GLMs with nice, differentiable link functions such as sigmoid (or even a combination of sigmoids such as φW2(·)), satisfy a property the authors name Strict Locally Quasi-Convexity (SLQC), which allows techniques such as SNGD to solve the GLM problems effectively. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 56 end', 'after_section': None, 'context_after': 'min W ', 'paragraph_idx': 20, 'before_section': '4 56 end', 'context_before': 'f (W1, W2) = Ex∼D(cid:107)φ2(cid:104)W2, φ1(cid:104)W1, x(cid:105)(cid:105) − x(cid:107)2 2 ', 'modified_lines': 'Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a set of GLM problems: ', 'original_lines': 'Upon fixing the parameters of the lower layer i.e. W1, it is easy to see that we are left with a GLM problem: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': '4 56 end', 'after_section': None, 'context_after': 'min W ', 'paragraph_idx': 20, 'before_section': '4 56 end', 'context_before': 'where z = φ1(cid:104)W1, x(cid:105). DANTE solves this intermediate problem using SNGD steps by sampling several mini-batches of data points and performing updates as dictated by Algorithm 1. Similarly, ', 'modified_lines': 'fixing the parameters of the upper layer, i.e. W2, we are left with another set of problems: ', 'original_lines': 'fixing the parameters of the upper layer, i.e. W2, we are left with another GLM problem: ', 'after_paragraph_idx': None, 'before_paragraph_idx': 20}, {'section': '2 RELATED WORK', 'after_section': None, 'context_after': '1 t := 1 2 while |f (W t ', 'paragraph_idx': 8, 'before_section': None, 'context_before': 'Input :Stopping threshold (cid:15), Number of iterations of alternating minimization TAM , Number of ', 'modified_lines': 'iterations for SNGD TSN GD, initial values W 0 2 , learning rate η, minibatch size b 1 , W 0 ', 'original_lines': 'iterations for SNGD TSN GD, initial values w0 2, learning rate η, minibatch size b 1, w0 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': '3.3 RATIONALE To describe the motivation for our alternating strategy in DANTE, we first define key terms and results that are essential to our work. We present the notion of a locally quasi-convex function (as introduced in Hazan et al. (2015)) and show that under certain realizability conditions, empirical Definition 3.1 (Local-Quasi-Convexity). Let x, z ∈ Rd, κ, (cid:15) > 0 and let f : Rd → R be a differentiable function. Then f is said to be ((cid:15), κ, z)-Strictly-Locally-Quasi-Convex (SLQC) in x, if at least one of the following applies: ', 'paragraph_idx': 24, 'before_section': None, 'context_before': '5 6 end ', 'modified_lines': 'Output :W t 1, W t 2 objective functions induced by Generalized Linear Models (GLMs) are locally quasi-convex. We then introduce a new activation function, the generalized ReLU, and show that the GLM with the generalized ReLU also satisfies this property. 
We cite a result that shows that SNGD converges to the optimum solution provably for locally quasi-convex functions, and subsequently extend this result to the newly introduced activation function. We also generalize the definition of locally quasi-convex to functions on matrices, which allows us to relate these ideas to layers in neural networks. ', 'original_lines': 'Output :w1, w2 objective functions induced by Generalized Linear Models (GLMs) are locally quasi-convex. To this end, we introduce a new activation function, the generalized ReLU, and show that the GLM with the generalized ReLU also satisfies this property. We then cite a result that shows that SNGD converges to the optimum solution provably for locally quasi-convex functions, and extend this result to the newly introduced activation function. ', 'after_paragraph_idx': 24, 'before_paragraph_idx': None}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': '{(xi, yi)}m (cid:16) exp(2(cid:107)w∗(cid:107)) (cid:15)2 ', 'paragraph_idx': 25, 'before_section': '1 ← arg min', 'context_before': 'i=1 ', 'modified_lines': '(yi − φ((cid:104)w, xi(cid:105)))2 Similarly, a noisy GLM is defined by the existence of a w∗ ∈ Rd such E(x,y)∼D[y| x] = φ((cid:104)w∗, x(cid:105)), which is the global minimizer of the error function: err(w) = E(x,y)∼D (yi − φ((cid:104)w, xi(cid:105)))2 Without any loss in generality, we use xi ∈ Bd, the unit d-dimensional ball. (Hazan et al., 2015, Lemma 3.2) shows that if we draw m ≥ Ω the empirical error function samples of i=1 from a GLM with the sigmoid activation function, then with probability at least 1 − δ, ', 'original_lines': '(yi − φ((cid:104)w∗, xi(cid:105)))2 Similarly, a noisy GLM is defined by the existence of a w∗ ∈ Rd, which is the global minimizer of the error function: ˆerr(w) = E(x,y)∼D (yi − φ((cid:104)w∗, xi(cid:105)))2 (Hazan et al., 2015, Lemma 3.2) show that if we draw m ≥ Ω 1 − δ, the empirical error function samples i=1 from a GLM with the sigmoid activation function, then with probability at least ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': 'Definition 3.3. (Generalized ReLU) The generalized ReLU function f : R → R, 0 < a < b, a, b ∈ R is defined as: f (x) = (cid:26) ax ', 'paragraph_idx': 25, 'before_section': '1 ← arg min', 'context_before': '(yi − φ(cid:104)w, xi(cid:105))2 ', 'modified_lines': 'is ((cid:15), e(cid:107)w∗(cid:107)2, w∗)-SLQC in w. However, this result is restrictive, since its proof relies on properties of the sigmoid function, which are not satisfied by other popular activation functions such as the ReLU. We hence introduce a new generalized ReLU activation function to study the relevance of this result in a broader setting (which has more use in practice). 5 Under review as a conference paper at ICLR 2018 ', 'original_lines': 'is ((cid:15), e(cid:107)w∗(cid:107)2, w∗)-SLQC in w. However, this is a bit restrictive, since the proof of this result critically uses properties of the sigmoid function, which are not satisfied by other popular activation functions such as the ReLU. In order to go beyond the sigmoid activation function, we introduce a new generalized ReLU activation function. 5 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': 25, 'before_paragraph_idx': 25}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': '(cid:26) a b x < 0 x ≥ 0 Theorem 3.4. 
In the idealized GLM with generalized ReLU activation, assuming ||w∗|| ≤ W , ˆerr(w) is ', 'paragraph_idx': 26, 'before_section': None, 'context_before': 'x ≤ 0 x > 0 ', 'modified_lines': 'This function is differentiable at every point except 0. Note that this definition subsumes variants of ReLU such as the leaky ReLU (Xu et al. (2015)). We define the function g that provides a valid subgradient for the generalized ReLU at all x to be: g(x) = While SLQC is originally defined for differentiable functions, we now show that with the above definition of the subgradient, the GLM with the generalized ReLU is also SLQC. This allows us to use the SNGD as an effective optimizer for DANTE to train autoencoders with different kinds of activation functions. ', 'original_lines': 'This function is differentiable at every point except 0. Note that this definition also subsumes variants of ReLU such as the leaky ReLU Xu et al. (2015). We define the function g that gives a valid subgradient at all x for the generalized ReLU to be g(x) = While SLQC is primarily defined for differentiable functions, we now show that with the above definition of the subgradient, the GLM with the generalized ReLU is also SLQC. ', 'after_paragraph_idx': 26, 'before_paragraph_idx': None}, {'section': '4 56 end', 'after_section': None, 'context_after': 'a Proof. Consider ||w|| ≤ W such that ˆerrm(w) = 1 i=1(yi − φ(cid:104)w, xi(cid:105))2 ≥ (cid:15), where m is the m . Let g be the subgradient of the generalized ReLU activation and G be the subgradient of ˆerrm(w). (Note that as before, g(cid:104)., .(cid:105) denotes g((cid:104)., .(cid:105))). Then: ', 'paragraph_idx': 21, 'before_section': None, 'context_before': '(cid:16) ', 'modified_lines': '(cid:15), 2b3W total number of samples. Also let v be a point (cid:15)/κ-close to minima w∗ with κ = 2b3W ', 'original_lines': '(cid:15), 2b2W total number of samples. Also let v be a point (cid:15)/κ-close to minima w∗ with κ = 2b2W ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 ← arg min', 'after_section': None, 'context_after': '(Step 3) i=1 ≥ 2ab−1(cid:15) − ', 'paragraph_idx': 27, 'before_section': '1 ← arg min', 'context_before': 'g(cid:104)w, xi(cid:105)(b−1 (φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗,xi(cid:105))2 − |φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗,xi(cid:105)|||xi||||w∗ − v|| ', 'modified_lines': 'ab−1 (φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗, xi(cid:105))2 − b|φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗,xi(cid:105)|||xi||||w∗ − v|| ab−1 (φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗, xi(cid:105))2 − b2||(cid:104)w, xi(cid:105) − (cid:104)w∗,xi(cid:105)|| ', 'original_lines': 'ab−1 (φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗, xi(cid:105))2 − |φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗,xi(cid:105)|||xi||||w∗ − v|| ab−1 (φ(cid:104)w, xi(cid:105) − φ(cid:104)w∗, xi(cid:105))2 − b||(cid:104)w, xi(cid:105) − (cid:104)w∗,xi(cid:105)|| ', 'after_paragraph_idx': None, 'before_paragraph_idx': 27}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': 'We also prove a similar result for the Noisy GLM below. Theorem 3.5. In the noisy GLM with generalized ReLU activation, assuming ||w∗|| ≤ W , given (cid:16) a − SLQC in w. , w∗(cid:17) Theorem 3.6 (Hazan et al. (2015)). Let (cid:15), δ, G, M, κ > 0, let f : Rd → R and w∗ = arg minw f (w). Assume that for b ≥ b0((cid:15), δ, T ), with probability ≥ 1 − δ, ft defined in Algorithm 1 is ((cid:15), κ, w∗)-SLQC ∀w, and |ft| ≤ M ∀t ∈ {1, · · · , T } . 
If we run SNGD with T ≥ κ2||w1−w∗||2 ', 'paragraph_idx': 27, 'before_section': '1 ← arg min', 'context_before': 'φ(cid:104)w∗, xi(cid:105) = yi. Then, we use the fact (in Steps 2 and 4) that the generalized ReLU function is b-Lipschitz, and the fact that the minimum value of the quasigradient of g is a (Step 3). Subsequently, in Step 5, we simply use the given bounds on the variables xi, w, w∗ due to the setup of the problem ', 'modified_lines': '(w ∈ Bd(0, W ), and xi ∈ Bd, the unit d-dimensional ball, as defined earlier in this section). 6 Under review as a conference paper at ICLR 2018 w ∈ B(0, W ), then with probability ≥ 1 − δ after m ≥ 288b4W 2 log(1/δ)/(cid:15)2 samples, ˆerr(w) is (cid:15), 2b3W a2(cid:15)2 The proof for Theorem 3.5 is included in Appendix A.1. We connect the above results with a result from Hazan et al. (2015) (stated below) which shows that SNGD provably converges to the optimum for SLQC functions, and hence, with very high probability, for empirical objective functions induced by noisy GLM instances too. ', 'original_lines': '(w ∈ Bd(0, W ), and xi ∈ Bd, the unit d-dimensional ball, as in Defn 2.6). ˆerr(w) is w ∈ B(0, W ), then with probability ≥ 1 − δ after m ≥ 4W b2 (cid:15), 2b2W a(cid:15)2 log(1/δ)/(cid:15)2 samples, 6 Under review as a conference paper at ICLR 2018 (a) Phase - 1 (b) Phase - 2 Figure 1: An illustration of the proposed multi-layer DANTE (best viewed in color). In each training phase, the outer pairs of weights (shaded in gold) are treated as a single-layer autoencoder to be trained using single-layer DANTE, followed by the inner single-layer auroencoder (shaded in black). These two phases are followed by a finetuning process that may be empirically determined, similar to standard deep autoencoder training. The proof for Theorem 3.5 is included in Appendix A (Supplementary Section). Now, we review a result from Hazan et al. (2015) which shows that SNGD provably converges to the optimum for SLQC functions, and hence, with very high probability, for empirical objective functions induced by noisy GLM instances too. ', 'after_paragraph_idx': 27, 'before_paragraph_idx': 27}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': 'a ', 'paragraph_idx': 30, 'before_section': '1 ← arg min', 'context_before': '(cid:15)2 ', 'modified_lines': 'The results so far show that SNGD provides provable convergence for idealized and noisy GLM problems with both sigmoid and ReLU family of activation functions. We note that alternate activation functions such as tanh (which is simply a rescaled sigmoid) and leaky ReLU (Xu et al. (2015)) are variants of the aforementioned functions. In Algorithm 2, it is evident that each node of the output layer presents a GLM problem (and hence, SLQC) w.r.t. the corresponding weights from W2. We show in Appendices A.2 and A.3 how the entire layer is SLQC w.r.t. W2, by generalizing the definition of SLQC to matrices. In case of W1, while the problem may not directly represent a GLM, we show in Appendix A.3 that our generalized definition of SLQC to functions on matrices allows us to prove that Step 4 of Algorithm 2 is also SLQC w.r.t. W1. Thus, given a single-layer autoencoder with either sigmoid or ReLU activation functions, DANTE provides an effective alternating minimization strategy that uses SNGD to solve SLQC problems in each alternating step, each of which converges to its respective (cid:15)-suboptimal solution with high probability, as shown above in Theorem 3.6. 
Importantly, note that the convergence rate of SNGD depends on the κ parameter. Whereas the GLM error function with sigmoid activation has κ = eW Hazan et al. (2015), we obtain κ = 2b3W (i.e. linear in W ) for the generalized ReLU setting, which is an exponential improvement. This is significant as in Theorem 3.6, the number of iterations T depends on κ2. This shows that SNGD offers accelerated convergence with generalized ReLU GLMs (introduced in this work) when compared to sigmoid GLMs. ', 'original_lines': 'Note that by performing each DANTE alternation with a freshly sampled batch of data points, we can easily establish SLQC properties of the problem, and as a consequence, harness the convergence guarantees provided by Theorem 3.6. Thus, given a single-layer autoencoder with either a sigmoid activation function or a generalized ReLU activation function, DANTE uses SNGD to solve a SLQC problem in each alternating step, which converges to the optima solution in each step with very high probability, as shown above in Theorem 3.6. Note that our contributions allow us to cover a large family of activation functions, since the tanh activation is simply a rescaled sigmoid and the generalized ReLU includes many variants of ReLUs. More importantly, note that the convergence rate of SNGD depends crucially on the κ parameter. Whereas the GLM error function with sigmoid activation has κ = eW Hazan et al. (2015), we obtain κ = 2b2W (i.e. linear in W ) for the generalized ReLU setting, an exponential improvement! This is significant as in Theorem 3.6, the number of iterations T depends on κ2. This shows that SNGD offers accelerated convergence with generalized ReLU GLMs than sigmoid GLMs. ', 'after_paragraph_idx': 30, 'before_paragraph_idx': 30}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': 'Note that it may be possible to use other schemes to use DANTE for multi-layer autoencoders such as a round-robin scheme, where each layer is trained separately one after the other in the sequence in which the layers appear in the network. 7 Under review as a conference paper at ICLR 2018 Algorithm 3: DANTE for a multi-layer autoencoder Input :Encoder e with weights U, Decoder d with weights V, Number of hidden layers 2n − 1, TAM , initial values U0, V0, minibatch size b 1 t := 1 ', 'paragraph_idx': 33, 'before_section': '1 ← arg min', 'context_before': 'In the previous sections, we illustrated how a single hidden-layer autoencoder can be cast as a set of SLQC problems and proposed an alternating minimization method, DANTE. This approach can be generalized to deep autoencoders by considering the greedy layer-wise approach to training a neural ', 'modified_lines': 'network (Bengio et al. (2007)). In this approach, each pair of layers of a deep stacked autoencoder is successively trained in order to obtain the final representation. Each pair of layers considered in this paradigm is a single hidden-layer autoencoder, which can be cast as pairs of SLQC problems that can be trained using DANTE. Therefore, training a deep autoencoder using greedy layer-wise approach can be modeled as a series of SLQC problem pairs. Algorithm 3 summarizes the proposed approach to use DANTE for a deep autoencoder, and Figure 1 illustrates the approach. 4 EXPERIMENTS AND RESULTS We validated DANTE by training autoencoders on an expanded 32×32 variant of the standard MNIST dataset (LeCun et al. (1998)) as well as other datasets from the UCI repository. 
We also conducted experiments with multi-layer autoencoders, as well as studied with varying number of hidden neurons (a) Phase - 1 (b) Phase - 2 Figure 1: An illustration of the proposed multi-layer DANTE (best viewed in color). In each training phase, the outer pairs of weights (shaded in gold) are treated as a single-layer autoencoder to be trained using single-layer DANTE, followed by the inner single-layer auroencoder (shaded in black). These two phases are followed by a finetuning process that may be empirically determined, similar to standard deep autoencoder training. Learning rate η, Stopping threshold (cid:15), Number of iterations of alternating minimization ', 'original_lines': 'network (Bengio et al. (2007)). In this approach, each pair of layers of a deep stacked autoencoder is successively trained in order to obtain the final representation. Each pair of layers considered in this paradigm is a single hidden-layer autoencoder, which can be cast as a pair of SLQC problems that can be trained using DANTE. Therefore, training a deep autoencoder using greedy layer-wise approach can be modeled as a series of SLQC problem pairs. Algorithm 3 summarizes the proposed approach to use DANTE for a deep autoencoder, and Figure 1 illustrates the approach. Learning rates η, Stopping threshold (cid:15), Number of iterations of alternating minimization ', 'after_paragraph_idx': 34, 'before_paragraph_idx': 33}, {'section': '5 end', 'after_section': None, 'context_after': '(a) Single-layer autoencoder with Sigmoid activation (b) Single-layer autoencoder with Generalized ReLU ', 'paragraph_idx': 37, 'before_section': '5 end', 'context_before': 'Output :U, V ', 'modified_lines': 'on single-layer autoencoders. Our experiments on MNIST used the standard benchmarking setup of the dataset1, with 60, 000 data samples used for training and 10, 000 samples for testing. Experiments were conducted using Torch 7 ( Collobert et al. (2011)). Autoencoder with Sigmoid Activation: A single-layer autoencoder (equivalent to a neural net- work with one hidden layer) with a sigmoid activation was trained using DANTE as well as standard backprop-SGD (represented as SGD in the results, for convenience) using the standard Mean-Squared Error loss function. The experiments considered 600 hidden units, a learning rate of 0.001, and a minibatch size of 500 (same setup was maintained for SGD and the SNGD used inside DANTE for fair comparison; one could optimize both SGD and SNGD to improve the absolute result values.) We studied the performance by varying the number of hidden neurons, and show those results later in this section. The results are shown in Figure 2a. The figure shows that while DANTE takes slightly (negligibly) longer to reach a local minimum, it obtains a better solution than SGD. (We note that the time taken for the iterations were comparable across both DANTE and backprop-SGD.) Autoencoder with ReLU Activation: Similar to the above experiment, a single-layer autoencoder with a leaky ReLU activation was trained using DANTE and backprop-SGD using the Mean-Squared Error loss function. 
Once again, the experiments considered 600 units in the hidden layer of the 1http://yann.lecun.com/exdb/mnist/ 8 Under review as a conference paper at ICLR 2018 ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}, {'section': '5 end', 'after_section': None, 'context_after': 'MNIST ionosphere svmguide4 USPS SGD ', 'paragraph_idx': 40, 'before_section': None, 'context_before': 'Figure 2: Plots of training and test errors vs training iterations for single hidden-layer autoencoder with Sigmoid (left) and Generalized ReLU (right) activations for both DANTE and SGD. ', 'modified_lines': 'vehicle ', 'original_lines': ' 4 EXPERIMENTS AND RESULTS We validated DANTE by training autoencoders on an expanded 32×32 variant of the standard MNIST dataset LeCun et al. (1998) as well as other datasets from the UCI repository. We also conducted experiments with multi-layer autoencoders, as well as studies with varying number of hidden neurons on single-layer autoencoders. Our experiments on MNIST used the standard benchmarking setup of the dataset1, with 60, 000 data samples used for training and 10, 000 samples for testing. Experiments were conducted using Torch 7 Collobert et al. (2011). Autoencoder with Sigmoid Activation: A single-layer autoencoder (equivalent to a neural net- work with one hidden layer) was trained using DANTE and SGD with 600 hidden units, a sigmoid activation, a learning rate of 0.001, standard Mean-Squared Error loss function and a minibatch size of 500 (same setup was maintained for SGD and the SNGD used inside DANTE for fair comparison; one could optimize both SGD and SNGD to improve the absolute result values). The results are shown in Figure 2a. While DANTE takes slightly (negligibly) longer to reach a local minimum, it obtains a better solution than SGD. Autoencoder with ReLU Activation: A single-layer autoencoder was trained using DANTE and SGD with 600 hidden units, a Leaky ReLU activation (with leakiness parameter 0.01), a learning rate of 0.001, Mean-Squared Error loss function and a minibatch size of 500. The results are shown in Figure 2b. The results for ReLU showed an improvement, and DANTE was consistently superior to SGD across the iterations. 1http://yann.lecun.com/exdb/mnist/ 8 Under review as a conference paper at ICLR 2018 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 end', 'after_section': '5 end', 'context_after': 'Multi-Layer Autoencoder: We also studied the performance of the proposed multi-layer DANTE method (Algorithm 3) for the MNIST dataset. Figure 6 shows the results obtained by stacking two single-layer autoencoders, each with the generalized (leaky) ReLU activation (note that a two single- layer autoencoder corresponds to 4 layers in the overall network, as mentioned in the architecture on Under review as a conference paper at ICLR 2018 5 CONCLUSIONS AND FUTURE WORK In this work, we presented a novel methodology, Deep AlterNations for Training autoEncoders (DANTE), to efficiently train autoencoders using alternating minimization, thus providing an effective 2a , w∗(cid:17) (cid:16) REFERENCES ', 'paragraph_idx': 42, 'before_section': '5 end', 'context_before': 'This can be attributed to the fact that the subproblem is relatively more challenging for an alternating optimization setting when the number of hidden neurons is lesser. 
', 'modified_lines': '(a) Architecture: 1024->500->500->1024 (b) Architecture: 1024->750->500->750->1024 Figure 6: Plots of training error and test error vs training iterations for multi-layer autoencoders with generalized (leaky) ReLU activations for both DANTE and SGD. the figure). The figure shows promising performance for DANTE in this experiment. Note that Figure 6b shows two spikes: one when the training for the next pair of layers in the autoencoder begins, and another when the end-to-end finetuning process is done. This is not present in Figure 6a, since the 500 → 500 layer in between is only randomly initialized, and is not trained using DANTE or SGD. 10 alternative to backpropagation. We formulated the task of training each layer of an autoencoder as a Strictly Locally Quasi-Convex (SLQC) problem, and leveraged recent results to use Stochastic Normalized Gradient Descent (SNGD) as an effective method to train each layer of the autoencoder. While recent work was restricted to using sigmoidal activation functions, we introduced a new generalized ReLU activation function, and showed that a GLM with this activation function also satisfies the SLQC property, thus allowing us to expand the applicability of the proposed method to autoencoders with both sigmoid and ReLU family of activation functions. In particular, we extended the definitions of local quasi-convexity to use subgradients in order to prove that the GLM with − SLQC, which improves the convergence bound for generalized ReLU activation is SLQC in the GLM with the generalized ReLU (as compared to a GLM with sigmoid). We also showed how DANTE can be extended to train multi-layer autoencoders. We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it provides a competitive alternative to standard backprop-SGD, as evidenced in the experimental results. (cid:15), b3W Future Work and Extensions. DANTE can not only be used to train autoencoders, but can be extended to train standard multi-layer neural networks too. One could use DANTE to train a neural network layer-wise in a round robin fashion, and then finetune end-to-end using backprop-SGD. In case of autoencoders with tied weights, one could use DANTE to learn the weights of the required layers, and then finetune end-to-end using a method such as SGD. Our future work will involve a more careful study of the proposed method for deeper autoencoders, including the settings mentioned above, as well as in studying performance bounds for the end-to-end alternating minimization strategy for the proposed method. ', 'original_lines': 'the figure). Evidently, the figure shows promising performance for DANTE in this experiment. Note 2https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ 9 (a) 200 hidden neurons (b) 300 hidden neurons (c) 400 hidden neurons (d) 600 hidden neurons Figure 5: Plots of training and test error vs training iterations on a single-layer autoencoder with generalized ReLU activation, with a varying number of nodes in the hidden layer. (a) Architecture: 1024->500->500->1024 (b) Architecture: 1024->750->500->750->1024 Figure 6: Plots of (a) training error and (b) test error vs training iterations for multi-layer autoencoders with generalized (leaky) ReLU activations for both DANTE and SGD. that Figure 6b shows two spikes: one when the training for the next pair of layers in the autoencoder begins, and another when the end-to-end finetuning process is done. 
This is not present in Figure 6a, since the 500 → 500 layer in between is only randomly initialized, and is not trained using DANTE or SGD. alternative to backpropagation. We formulated the task of training each layer of an autoencoder as a Generalized Linear Model (GLM) problem with an activation function, and leveraged recent results to use Stochastic Normalized Gradient Descent (SNGD) to train each step of the autoencoder. While recent work was restricted to using sigmoidal activation functions, we introduced a new generalized ReLU activation function, and showed that a GLM with this activation function also satisfies the SLQC property, thus allowing us to expand the applicability of the proposed method to autoencoders with both sigmoid and ReLU family of activation functions. In particular, we extended the definitions of local quasi-convexity to use subgradients in order to prove that the GLM with generalized ReLU − SLQC, which significantly improves the convergence bound for the activation is corresponding GLM with the generalized ReLU (as compared to a GLM with sigmoid). We also showed how DANTE can be extended to train multi-layer autoencoders. We empirically validated DANTE with both sigmoidal and ReLU activations on standard datasets as well as in a multi-layer setting, and observed that it performs comparably or better than the standard SGD approach to train autoencoders. (cid:15), b2W 10 Under review as a conference paper at ICLR 2018 DANTE can not only be used to train autoencoders, but can be extended to train standard multi-layer neural networks too. One could use DANTE to train a neural network layer-wise in a round robin fashion, and then finetune end-to-end using SGD. In case of autoencoders with tied weights, one would ideally use DANTE to learn the weights of the required layers, and then finetune end-to-end using a method such as SGD. Our future work will involve a more careful study of the proposed method for deeper autoencoders, as well as in theoretically analyzing the end-to-end convergence of the method for deep multi-layer autoencoders. ', 'after_paragraph_idx': 42, 'before_paragraph_idx': 42}, {'section': '1 ← arg min', 'after_section': '1 ← arg min', 'context_after': 'a ', 'paragraph_idx': 27, 'before_section': None, 'context_before': '2ξi(φ(cid:104)w∗, xi(cid:105) − φ(cid:104)w, xi(cid:105)) Consider ||w|| ≤ W such that ˆerrm(w) − ˆerrm(w∗) ≥ (cid:15). Also, let v be a point (cid:15)/κ-close to minima ', 'modified_lines': 'w∗ with κ = 2b3W . Let g be the subgradient of the generalized ReLU activation and G be the subgradient of ˆerrm(w), as before. Then: ', 'original_lines': 'w∗, κ = 2b2W . Let g be the subgradient of the generalized ReLU activation and G be the subgradient of ˆerrm(w). Then: ', 'after_paragraph_idx': 27, 'before_paragraph_idx': None}]
|
2017-12-23 11:02:23
|
ICLR.cc/2018/Conference
|
rkw6w3iMG
|
Hy5pvhsff
|
[]
|
2017-12-23 11:02:26
|
ICLR.cc/2018/Conference
|
Hy5pvhsff
|
HkP6Y1-Rb
|
[]
|
2018-01-25 15:41:20
|
ICLR.cc/2018/Conference
|
H1o2e7Z0b
|
H132zmZCW
|
[]
|
2017-10-27 21:18:44
|