NEURAL CODE COMPLETION

Chang Liu*, Xin Wang*, Richard Shin, Joseph E. Gonzalez, Dawn Song
University of California, Berkeley
*The first and second authors contributed equally and are listed in alphabetical order.

ABSTRACT

Code completion, an essential part of modern software development, can be challenging for dynamically typed programming languages. In this paper we explore the use of neural network techniques to learn code completion automatically from a large corpus of dynamically typed JavaScript code. We present different neural networks that leverage not only token-level information but also structural information, and evaluate their performance on different prediction tasks. We demonstrate that our models outperform the state-of-the-art approach, which is based on decision tree techniques, on the next non-terminal and next terminal prediction tasks by 3.8 points and 0.5 points respectively. We believe that neural network techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction.

1 INTRODUCTION

As the scale and complexity of modern software libraries and tools continue to grow, code completion has become an essential feature in modern integrated development environments (IDEs). By suggesting the right libraries, APIs, and even variables in real time, intelligent code completion engines can substantially accelerate software development. Furthermore, as many projects move to dynamically typed and interpreted languages, effective code completion can help to reduce costly errors by eliminating typos and identifying the right arguments from context. However, existing approaches to intelligent code completion either rely on strong typing (e.g., Visual Studio for C++), which limits their applicability to widely used dynamically typed languages (e.g., JavaScript and Python), or are based on simple heuristics and term frequency statistics, which are often brittle and relatively error-prone. In particular, Raychev et al. (2016a) propose the state-of-the-art probabilistic model for code, which generalizes both simple n-gram models and probabilistic grammar approaches. This approach, however, examines only a limited number of elements in the source code when completing the code, so its effectiveness may not scale well to large programs.

In this paper we explore the use of deep learning techniques to address the challenges of code completion for the widely used and dynamically typed JavaScript programming language. We formulate the code completion problem as a sequential prediction task over the traversal of a parse-tree structure consisting of both non-terminal structural nodes and terminal nodes encoding program text. We then present simple, yet expressive, LSTM-based (Hochreiter & Schmidhuber, 1997) models that leverage additional side information obtained by parsing the program structure. Compared to widely used heuristic techniques, deep learning for code completion offers the opportunity to learn rich contextual models that can capture language- and even library-specific code patterns without requiring complex rules or expert intervention.

We evaluate our recurrent neural network architecture on an established benchmark dataset for JavaScript code completion. Our evaluations reveal several findings: (1) when evaluated on short programs, our RNN-based models achieve better performance on the next node prediction tasks than the prior art (Bielik et al., 2016; Raychev et al., 2016a), which is based on decision-tree models;
(2) our models' prediction accuracies on longer programs, which are included in the test set but were not evaluated by previous work, are higher than their accuracies on shorter programs; and (3) in the scenario where the code completion engine suggests a list of candidates, our RNN-based models allow users to choose from a list of 5 candidates rather than typing manually in over 96% of the cases where this is possible. These promising results encourage further investigation into neural network approaches to the code completion problem. We believe that our work not only highlights the importance of neural network-based code completion, but is also an important step toward neural network-based program synthesis.

Figure 1: Code Completion Example in IntelliJ IDEA
Figure 2: Correct prediction of the program in Figure 1

2 RELATED WORK

Existing approaches that build probabilistic models of code can typically be categorized as n-gram models (Hindle et al., 2012; Nguyen et al., 2013; Tu et al., 2014), probabilistic grammars (Collins, 2003; Allamanis & Sutton, 2014; Allamanis et al., 2015; Maddison & Tarlow, 2014; Liang et al., 2010), and log-bilinear models (Allamanis et al., 2015). Bielik et al. (2016) generalize the PCFG and n-gram approaches, while Raychev et al. (2016a) further introduce decision tree approaches that generalize Bielik et al. (2016).

Raychev et al. (2014) and White et al. (2015) explore how to use recurrent neural networks (RNNs) to facilitate the code completion task. However, these works only consider running RNNs on top of a token sequence to build a probabilistic model. Although the input sequence considered in Raychev et al. (2014) is produced from an abstract object, the structural information contained in the abstract syntax tree is not directly leveraged by the RNN in either of these two works. In contrast, we consider extending LSTM, an RNN architecture, to leverage the structural information directly for the code prediction task.

Recently there has been increasing interest in developing neural networks for program synthesis (Ling et al., 2016; Beltagy & Quirk, 2016; Dong & Lapata, 2016; Chen et al., 2016). These works all consider synthesizing a program based on inputs in other formats, such as images or natural language descriptions.

3 CODE COMPLETION VIA BIG CODE

In this section, we first introduce the problem of code completion and its challenges. Then we explain abstract syntax trees (ASTs), which we use as the input to our problems. Lastly, we formally define the code completion problem in different settings as several prediction problems based on a partial AST.

3.1 CODE COMPLETION: AN EXAMPLE

Code completion is a feature in some integrated development environments (IDEs) that speeds up the programmer's coding process. Figure 1 demonstrates this feature in IntelliJ IDEA. In this example, part of a JavaScript program has been input to the IDE. When the dot symbol (i.e., ".") is added after _webpack_require_, the IDE prompts with a list of candidates that the programmer is most likely to input next. When a candidate matches the intention, the programmer can choose it from the list rather than typing it manually. In this work, we define the code completion problem as predicting the next symbol while a program is being written.
We consider this problem an important first step toward completing an entire program.

Traditional code completion techniques developed by the programming language community leverage context information for prediction. For example, when a programmer writing a Java program inputs a variable name followed by a dot symbol, the code completion engine analyzes the class of the variable and prompts with the members of that class. In the programming language literature, such information is referred to as type information. Statically typed languages, such as C and Java, enforce type checking at compile time, so the code completion engine can take advantage of full type information to make predictions without executing the code. In recent years, dynamically typed languages, such as Python or JavaScript, have become increasingly popular. In these languages, type checking is usually performed while a program is executing. Thus, type information may be only partially available to the code completion engine while the programmer is writing the code.

Despite their popularity, the dynamic typing of these languages makes code completion for them challenging. For example, in Figure 1, the next symbol to be added is p. This symbol does not appear in the earlier part of the program, and thus the code completion engine in the IntelliJ IDEA IDE cannot prompt with it. However, this challenge may be remedied by leveraging a large corpus of code, a.k.a. big code. In fact, _webpack_require_.p is a frequently used combination appearing in many programs on GitHub.com, one of the largest repositories of source code. Therefore, a code completion engine powered by big code is likely to learn this combination and prompt p. Indeed, the methods discussed in later sections predict this case very well (Figure 2).

3.2 ABSTRACT SYNTAX TREE

Regardless of whether it is dynamically or statically typed, any programming language has an unambiguous context-free grammar (CFG), which can be used to parse source code into an abstract syntax tree (AST). Further, an AST can easily be converted back into source code. We therefore take the input of our code completion problem to be an AST, a typical assumption made by most code completion engines.

An AST is a rooted tree. Each non-leaf node corresponds to a non-terminal in the CFG specifying structural information. In JavaScript, non-terminals include ExpressionStatement, ForStatement, IfStatement, SwitchStatement, etc. Each leaf node corresponds to a terminal in the CFG encoding program text. There are infinitely many possible terminals: variable names, string or numerical literals, operators, etc.

Figure 3 illustrates part of the AST of the code snippet in Figure 1. In this tree, a node without a surrounding box (e.g., ExpressionStatement) denotes a non-terminal node, while a node with an orange surrounding box (e.g., installedModules) denotes a terminal node. At the bottom of the figure, there are a non-terminal node Property and a terminal node p; they have not yet been observed by the editor, which we indicate in green. Note that each non-terminal has at most one terminal as its child.

In a traditional code completion engine, the AST can be further processed by a type checker so that type information is attached to each node. In this work, however, we focus on dynamically typed languages, for which type information is not always available. Therefore, we do not consider the type information provided by a compiler, and leave it for future work.
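As a concrete, purely illustrative picture of this representation, the following Python sketch models an AST node with at most one terminal child; the node kinds loosely mirror Figure 3, but the exact structure is our assumption, not the paper's data format:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                          # non-terminal kind, e.g. "ExpressionStatement"
    terminal: Optional[str] = None     # at most one terminal child, e.g. "installedModules"
    children: List["Node"] = field(default_factory=list)  # non-terminal children

# A fragment loosely resembling Figure 3 (illustrative only):
ast = Node("ExpressionStatement", children=[
    Node("AssignmentExpression", children=[
        Node("Identifier", terminal="installedModules"),
        Node("ObjectExpression"),      # no terminal child: treated as EMPTY later
    ]),
])
```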
3.3 PROBLEM SETUP

In this work, we take the input to be a partial AST, and the code completion problem is to predict the next node given the partial AST. In the following, we first define a partial AST, and then present the code completion problems in different scenarios.

Input: a partial AST. Given a complete AST \( T \), we define a partial AST to be a subtree \( T' \) of \( T \) such that for each node \( n \) in \( T' \), its left set \( L_T(n) \) with respect to \( T \) is a subset of \( T' \), i.e., \( L_T(n) \subseteq T' \). Here, the left set \( L_T(n) \) of a node \( n \) with respect to \( T \) is defined as the set of all nodes that are visited earlier than \( n \) in the in-order depth-first traversal of \( T \). Under this definition, each partial AST \( T' \) has a right-most node \( n_R \) such that all other nodes in \( T' \) form its left set \( L_T(n_R) \). The node visited immediately after \( n_R \) in this traversal is also the first node not appearing in \( T' \); we call it the next node following the partial AST. Figure 4 illustrates these concepts using the example in Figure 3. In the rest of the paper, we also refer to a partial AST as a query.

Next node prediction. Given a partial AST, the next node prediction problem is, as its name suggests, to predict the next node following the partial AST. Based on the node's kind, i.e., whether it is a non-terminal node or a terminal one, we categorize the problem into the next non-terminal prediction problem and the next terminal prediction problem. Although the next terminal prediction problem may sound more interesting, the next non-terminal prediction problem is also important, since it predicts the structure of the program. For example, when the next non-terminal is ForStatement, the next token in the source program is the keyword for, which does not have a corresponding terminal in the dataset. In this case, a model able to predict the next non-terminal can be used by the code completion engine to emit the keyword for. These two tasks are also the same problems considered by previous works employing domain-specific languages to achieve heuristic-based code completion (Raychev et al., 2016b; Bielik et al., 2016).

Predicting the next node versus predicting the next token. A natural alternative formulation of the problem is to predict the next token given the token sequence inputted so far. Such a formulation, however, does not take advantage of the AST information, which is very easy to acquire with a suitable parser. Predicting the next node allows taking advantage of this information to enable more intelligent code completion. In particular, predicting the next non-terminal allows completing the structure of a code block rather than a single (keyword) token. For example, when the next token is the keyword for, the corresponding next non-terminal is ForStatement, which corresponds to the following code block:

    for ( ___ ; ___ ; ___ ) {
        // for-loop body
    }

In this case, successfully predicting the next non-terminal node allows completing not only the next key token for, but also tokens such as (, ;, ), {, and }. Such structure completion, enabled by predicting the next non-terminal, is all the more compelling in modern IDEs.
Predicting the next terminal node allows completing identifiers, properties, literals, etc., similarly to next token prediction. However, predicting the next terminal node can leverage the information of the predicted node's non-terminal parent, which indicates what is being predicted, i.e., an identifier, a property, a literal, etc. For example, when completing the expression

    __webpack_require_.

a code completion engine with AST information will predict a property of __webpack_require_, while an engine without AST information only sees the two tokens __webpack_require_ and a dot "." and tries to predict the next token without any constraint. In our evaluation, we show that leveraging the information from the non-terminal parent can significantly improve performance. In this work, we focus on the next node prediction task, and leave the comparison with next token prediction to future work.

Joint prediction. A more important problem than predicting only the next non-terminal or the next terminal by itself is to predict the two together; we refer to this task as the joint prediction problem. We hope code completion can eventually be used to generate the entire parse tree, and joint prediction is one step further toward this goal than next node prediction. Formally, the joint prediction problem that we consider is: given a partial AST whose following node is a non-terminal one, predict both the next non-terminal and the next terminal. There may be non-terminal nodes that do not have a terminal child (e.g., AssignmentStatement). In this case, we artificially add an EMPTY terminal as its child; this treatment is the same as in Bielik et al. (2016). We count a prediction as correct if both the next non-terminal and the next terminal are predicted correctly.

Denying prediction. There are infinitely many possible terminals, so it is impossible to predict all terminals correctly. We consider an alternative scenario in which the code completion engine should be able to identify when the programmer is about to input a rare terminal and, in that case, deny predicting the next node(s). In our problem, we build a vocabulary of frequent terminals; all terminals outside this vocabulary are mapped to an UNK terminal. When a model predicts UNK for the next terminal, it is considered to be denying prediction. Since the non-terminal vocabulary is very small, denying prediction is only considered for next terminal prediction, not for next non-terminal prediction.

4 MODELS

In this section, we present the basic models considered in this work. Given a partial AST as input, we first convert the AST into its left-child right-sibling representation and serialize it as its in-order depth-first search sequence. Thus, we consider the input for next non-terminal prediction to be a sequence of length k, i.e., \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\). Here, for each \(i\), \(N_i\) is a non-terminal and \(T_i\) is the terminal child of \(N_i\). For each non-terminal node \(N_i\), we encode not only its kind, but also whether it has at least one non-terminal child and/or a right sibling. In doing so, the original AST can be reconstructed from an input sequence. This encoding is also employed by Raychev et al. (2016a). We refer to each element of the sequence (e.g., \((N_i, T_i)\)) as a token.
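To make this serialization concrete, the sketch below walks the illustrative Node structure from Section 3.2 in parent-before-children, left-to-right depth-first order (our reading of the paper's "in-order depth-first search" over the left-child right-sibling form, which matches the order in which code is written) and emits one \((N_i, T_i)\) token per non-terminal, attaching the two extra bits:

```python
def serialize(node, has_right_sibling=False, tokens=None):
    """Depth-first, parent-before-children serialization into (N_i, T_i) tokens."""
    if tokens is None:
        tokens = []
    n_i = (node.kind,
           len(node.children) > 0,   # bit: has at least one non-terminal child
           has_right_sibling)        # bit: has a right sibling
    t_i = node.terminal if node.terminal is not None else "EMPTY"
    tokens.append((n_i, t_i))
    for j, child in enumerate(node.children):
        serialize(child, j < len(node.children) - 1, tokens)
    return tokens

# serialize(ast) yields one (N_i, T_i) pair per non-terminal of the example tree.
```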
As mentioned above, a non-terminal without a terminal child is considered to have an EMPTY child. The input sequence \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\) is the only input for all problems except next terminal prediction. For the next terminal prediction problem, we additionally know the parent of the terminal being predicted, which is a non-terminal, i.e., \(N_{k+1}\). Throughout the rest of the discussion, we assume that both \(N_i\) and \(T_i\) use one-hot encodings, with separate vocabularies for non-terminals and terminals.

4.1 NEXT NON-TERMINAL PREDICTION

Given an input sequence, our first model predicts the next non-terminal. The architecture is illustrated in Figure 5. We refer to this model as NT2N, which stands for using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal. We first explain each layer of NT2N, and then introduce two variants of this model.

Figure 5: Architecture (NT2N) for predicting the next non-terminal.

Embedding non-terminals and terminals. Given an input sequence, the embedding of each token is computed as

\[ E_i = A N_i + B T_i \]

where \( A \) is a \( J \times V_N \) matrix and \( B \) is a \( J \times V_T \) matrix. Here \( J \) is the size of the embedding vector, and \( V_N \) and \( V_T \) are the vocabulary sizes of non-terminals and terminals respectively.

LSTM layer. The embedded sequence is then fed into an LSTM layer to compute hidden states. In particular, an LSTM cell takes an input token \( x_i \) and the hidden state \( (h_{i-1}, c_{i-1}) \) from the previous cell, computes a new hidden state \( (h_i, c_i) \), and outputs \( h_i \), following:

\[
\begin{pmatrix} q \\ f \\ o \\ g \end{pmatrix} =
\begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix}
P_{4J,2J} \begin{pmatrix} x_i \\ h_{i-1} \end{pmatrix}
\]
\[ c_i = f \odot c_{i-1} + q \odot g \]
\[ h_i = o \odot \tanh(c_i) \]

Here, \( P_{4J,2J} \) denotes a \( 4J \times 2J \) parameter matrix producing the four gate pre-activations, where \( J \) is the size of the hidden state, i.e., the dimension of \( h_i \), which is equal to the size of the embedding vectors. \( \sigma \) and \( \odot \) denote the sigmoid function and pointwise multiplication respectively.

Softmax layer. Let \( h_k \) be the output hidden state of the last LSTM cell. \( h_k \) is fed into a softmax classifier to predict the next non-terminal:

\[ \hat{N}_{k+1} = \text{softmax}(W_N h_k + b_N) \]

where \( W_N \) is a \( V_N \times J \) matrix and \( b_N \) is a \( V_N \)-dimensional vector.

Using only non-terminal inputs. One variant of this model omits all terminal information from the input sequence; the embedding is then computed as \( E_i = A N_i \). We refer to this model as N2N, which stands for using the Non-terminal sequence TO predict the next Non-terminal.

Predicting the next terminal and non-terminal together. Based on NT2N, we can predict not only the next non-terminal but also the next terminal, using

\[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + b_T) \]

where \( W_T \) is a \( V_T \times J \) matrix and \( b_T \) is a \( V_T \)-dimensional vector. In this case, the loss function has an extra term supervising the prediction of \( \hat{T} \). We refer to this model as NT2NT, which stands for *using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal and Terminal pair*.
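For concreteness, here is a minimal sketch of NT2N, written in PyTorch for brevity (the paper's own implementation uses TensorFlow, per Section 5.6; all class and variable names are illustrative). Multiplying a one-hot vector by \( A \) or \( B \) is exactly an embedding lookup, so \( E_i = A N_i + B T_i \) becomes a sum of two lookups:

```python
import torch
import torch.nn as nn

class NT2N(nn.Module):
    def __init__(self, v_n, v_t, j):
        super().__init__()
        self.embed_n = nn.Embedding(v_n, j)          # columns of A (J x V_N)
        self.embed_t = nn.Embedding(v_t, j)          # columns of B (J x V_T)
        self.lstm = nn.LSTM(j, j, batch_first=True)  # single-layer LSTM, hidden size J
        self.w_n = nn.Linear(j, v_n)                 # W_N, b_N

    def forward(self, n_ids, t_ids, state=None):
        # n_ids, t_ids: (batch, k) integer ids of N_i and T_i
        e = self.embed_n(n_ids) + self.embed_t(t_ids)   # E_i = A N_i + B T_i
        h, state = self.lstm(e, state)
        logits = self.w_n(h[:, -1])                     # scores for N_{k+1}
        return logits, state                            # softmax is folded into the loss
```

The NT2NT variant adds a second linear head of size \( V_T \) over the same \( h_k \), and N2N simply drops the embed_t term.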
4.2 NEXT TERMINAL PREDICTION

In the next terminal prediction problem, the partial AST contains not only \((N_1, T_1), ..., (N_k, T_k)\) but also \(N_{k+1}\). In this case, we can employ the architecture in Figure 6 to predict \(T_{k+1}\). In particular, we first compute the LSTM output \(h_k\) in the same way as in NT2N. The final prediction is

\[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + W_{NT} N_{k+1} + b_T) \]

where \(W_{NT}\) is a matrix of size \(V_T \times V_N\), and \(W_T\) and \(b_T\) are as in NT2NT. We refer to this model as NTN2T, which stands for *using the Non-terminal and Terminal pair sequence and the next Non-terminal TO predict the next Terminal*. Note that NT2NT can also be used for the next terminal prediction task, although it does not leverage the non-terminal information \(N_{k+1}\); we compare the two approaches later.

4.3 JOINT PREDICTION

We consider two approaches to predicting the next non-terminal and the next terminal together. The first is NT2NT, which is designed to predict the two kinds of nodes jointly. An alternative is to (1) use a next non-terminal model \(X\) to predict the next non-terminal, and (2) feed the predicted non-terminal together with the input sequence into NTN2T to predict the next terminal. We refer to such an approach as X+NTN2T.

4.4 DENYING PREDICTION

We say a model *denies prediction* when it predicts the next terminal to be UNK, the special terminal substituted for rare terminals. However, because rare terminals are plentiful, UNK may occur much more often than any single frequent terminal. A model that can deny prediction may therefore tend to predict UNK, and thus make predictions for fewer queries than it should. To mitigate this problem, we make the loss function adaptive. Training a machine learning model \(f_\theta\) optimizes the objective

\[ \arg\min_\theta \sum_i l(f_\theta(q_i), y_i) \]

where \(\{(q_i, y_i)\}\) is the training dataset consisting of pairs of a query \(q_i\) and its ground truth next token \(y_i\), and \(l\) is the loss function measuring the distance between the prediction \(\hat{y}_i = f_\theta(q_i)\) and the ground truth \(y_i\); we choose \(l\) to be the standard cross-entropy loss. We introduce a weight \(\alpha_i\) for each sample \((q_i, y_i)\) in the training dataset, changing the objective to

\[ \arg\min_\theta \sum_i \alpha_i \, l(f_\theta(q_i), y_i) \]

When training a model that is not allowed to deny prediction, we set \(\alpha_i = 0\) for \(y_i = \text{UNK}\) and \(\alpha_i = 1\) otherwise; this is equivalent to removing all queries whose ground truth next token is UNK. When training a model that is allowed to deny prediction, we set all \(\alpha_i\) to 1. To denote this case, we append "+D" to the model name (e.g., NT2NT+D). A sketch of this weighted objective is given below.
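A minimal sketch of the adaptive objective, again in PyTorch with illustrative names; it assumes logits from any terminal-predicting model and integer class targets, and applies a constant weight alpha to UNK-labeled samples (alpha = 0 recovers the no-deny training regime, alpha = 1 the deny-prediction one):

```python
import torch
import torch.nn.functional as F

def adaptive_loss(logits, targets, unk_id, alpha):
    """Cross-entropy with per-sample weights: alpha on UNK targets, 1 elsewhere."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(targets == unk_id,
                          torch.full_like(per_sample, alpha),
                          torch.ones_like(per_sample))
    # Normalized for stable mini-batch training; the paper's objective is the
    # plain weighted sum over the dataset.
    return (weights * per_sample).sum() / weights.sum().clamp(min=1.0)
```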
5 EVALUATION

5.1 DATASET

We use the JavaScript dataset[2] provided by Raychev et al. (2016b) to evaluate the different approaches. The statistics of the dataset can be found in Table 1.

<table>
<tr><th colspan="2">Training set</th><th colspan="2">Test set</th><th colspan="2">Overall</th></tr>
<tr><th>Programs</th><td>100,000</td><th>Programs</th><td>50,000</td><th>Non-terminals</th><td>44</td></tr>
<tr><th>Queries</th><td>\(1.7 \times 10^8\)</td><th>Queries</th><td>\(8.3 \times 10^7\)</td><th>Terminals</th><td>\(3.1 \times 10^6\)</td></tr>
</table>

Table 1: Statistics of the dataset

Raychev et al. (2016a) provide an approach, called PHOG, for next token prediction. Their reported accuracy results are based on a subset of \(5.3 \times 10^7\) queries from the full test set. Specifically, Raychev et al. (2016a) chose all queries in programs containing fewer than 30,000 tokens[7]. When we compare with their results, we use this same test set; otherwise, unless specified, our results are based on the full test set of \(8.3 \times 10^7\) queries.

[2] http://www.srl.inf.ethz.ch/js150
[7] This detail was not explained in the paper. We contacted the authors to confirm it.

5.2 TRAINING DETAILS

Vocabulary. In our dataset, there are 44 different kinds of non-terminals. Adding two more bits of information to indicate whether a non-terminal has a child and/or a right sibling gives at most 176 different non-terminals. However, not all such combinations are possible: a ForStatement, for instance, must have a child. In total, the vocabulary size for non-terminals is 97. For terminals, we sort all terminals in the training set by frequency and choose the 50,000 most frequent to build the vocabulary. We further add three special terminals: UNK for out-of-vocabulary terminals, EOF indicating the end of a program, and EMPTY for non-terminals that do not have a terminal child. Note that about 45% of the terminals in the dataset are EMPTY.

Training details. We use a single-layer LSTM network with hidden size 1500 as our base model. To train the model, we use Adam (Kingma & Ba, 2014) with base learning rate 0.001. The learning rate is multiplied by 0.9 every 0.2 epochs. We clip the gradients' norm to 5. The batch size is \( b = 80 \). We use truncated backpropagation through time, unrolling the LSTM \( s = 50 \) steps to take an input sequence of length 50 in each batch (so each batch contains \( b \times s = 4000 \) tokens). We divide each program into segments of \( s \) consecutive tokens; the last segment of a program, which may not be full, is padded with EOF tokens.

We coalesce multiple epochs together. We organize the training data into \( b \) buckets. In each epoch, we randomly shuffle all programs in the training data to construct a queue. Whenever a bucket is empty, a program is popped from the queue and all of its segments are inserted into the empty bucket sequentially. When the queue becomes empty, i.e., the current epoch finishes, all programs are re-shuffled to reconstruct the queue. Each mini-batch is formed from \( b \) segments, one popped from each bucket. When the training data has been shuffled \( e = 8 \) times, i.e., \( e \) epochs have been inserted into the buckets, we stop adding whole programs and start adding only the first segment of each program: when a bucket is empty, a program is chosen randomly and its first segment is added to the bucket. We terminate training when all buckets are empty at the same time, i.e., when all programs from the first 8 epochs have been trained on. This process is illustrated in Figure 7, and a simplified sketch follows.

Figure 7: Training epoch illustration
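The following Python sketch approximates this batching scheme under simplifying assumptions (illustrative names; it stops at the epoch boundary rather than topping buckets up with first segments as the final phase above does):

```python
import random
from collections import deque

def segments(program, s=50, eof_id=0):
    """Split one program's token ids into length-s segments, EOF-padding the last."""
    segs = [program[i:i + s] for i in range(0, len(program), s)]
    segs[-1] = segs[-1] + [eof_id] * (s - len(segs[-1]))
    return segs

def minibatches(programs, b=80, s=50, e=8, eof_id=0):
    """Yield mini-batches of b segments, one per bucket; buckets consume whole programs."""
    buckets = [deque() for _ in range(b)]
    queue, epochs = deque(), 0
    while True:
        for bucket in buckets:
            if not bucket:                  # refill an empty bucket
                if not queue:               # epoch boundary: reshuffle all programs
                    if epochs == e:
                        return
                    epochs += 1
                    queue = deque(random.sample(programs, len(programs)))
                bucket.extend(segments(queue.popleft(), s, eof_id))
        yield [bucket.popleft() for bucket in buckets]
```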
The hidden states are initialized with \( h_0, c_0 \), two trainable vectors whose parameters we initialize to 0. The LSTM hidden states from the previous segment are fed into the next one if both segments belong to the same program; otherwise, the hidden states are reset to \( h_0, c_0 \). We observe that resetting the hidden states for every new program improves performance considerably. All other parameters are initialized with values sampled uniformly at random from \([-0.05, 0.05]\). For each model, we train 5 sets of parameters from different random initializations, and evaluate the ensemble of the 5 models by averaging their softmax outputs. In our evaluation, we find that the ensemble improves accuracy by 1 to 3 points in general.

5.3 NEXT NODE PREDICTION

In this section, we present the results of our models on next node prediction, and compare them with their counterparts in Bielik et al. (2016) and Raychev et al. (2016a), the state of the art on these tasks. We therefore use the same test set of \( 5.3 \times 10^7 \) queries as those works. In the following, we first report results for next non-terminal prediction and next terminal prediction, and then evaluate our models' performance on programs of different lengths.

Next non-terminal prediction. The results are presented in Table 2.

<table>
<tr><th>Categories</th><th>Previous work<br>Raychev et al. (2016a)</th><th>N2N</th><th>NT2N</th><th>NT2NT</th></tr>
<tr><td>One model accuracy</td><td>83.9%</td><td>79.4 ± 0.2%</td><td>84.8 ± 0.1%</td><td>84.0 ± 0.1%</td></tr>
<tr><td>Ensemble accuracy</td><td></td><td>82.3%</td><td>87.7%</td><td>86.2%</td></tr>
</table>

Table 2: Next non-terminal prediction results

From the table, we observe that both NT2N and NT2NT outperform Raychev et al. (2016a). In particular, an ensemble of 5 NT2N models improves on Raychev et al. (2016a) by 3.8 percentage points. We also report the average accuracies of the 5 single models and the variance among them. The variance is very small, i.e., 0.1%-0.2%, indicating that the trained models' accuracies are robust to random initialization. Among the neural network approaches, NT2NT's performance is lower than NT2N's, even though the former is provided with more supervision. This suggests that, given the limited capacity of the model, it may trade off non-terminal prediction performance in favor of the terminal prediction task it additionally has to perform.
<table>
<tr><th rowspan="2">Categories</th><th>Previous work</th><th colspan="2">Our considered models</th></tr>
<tr><th>Raychev et al. (2016a)</th><th>NT2NT</th><th>NTN2T</th></tr>
<tr><td>One model accuracy</td><td>82.9%</td><td>76.6 ± 0.1%</td><td>81.9 ± 0.1%</td></tr>
<tr><td>Ensemble accuracy</td><td></td><td>78.6%</td><td>83.4%</td></tr>
</table>

Table 3: Next terminal prediction results

<table>
<tr><th rowspan="2"></th><th colspan="3">Non-terminal</th><th colspan="2">Terminal</th></tr>
<tr><th>N2N</th><th>NT2N</th><th>NT2NT</th><th>NTN2T</th><th>NT2NT</th></tr>
<tr><th colspan="6">Top 1 accuracy</th></tr>
<tr><td>Short programs (&lt;30,000 non-terminals)</td><td>82.3%</td><td>87.7%</td><td>86.2%</td><td>83.4%</td><td>78.6%</td></tr>
<tr><td>Long programs (&gt;30,000 non-terminals)</td><td>87.7%</td><td>94.4%</td><td>92.7%</td><td>89.0%</td><td>85.8%</td></tr>
<tr><td>Overall</td><td>84.2%</td><td>90.1%</td><td>88.5%</td><td>85.4%</td><td>81.2%</td></tr>
<tr><th colspan="6">Top 5 accuracy</th></tr>
<tr><td>Short programs (&lt;30,000 non-terminals)</td><td>97.9%</td><td>98.9%</td><td>98.7%</td><td>87.9%</td><td>86.4%</td></tr>
<tr><td>Long programs (&gt;30,000 non-terminals)</td><td>98.8%</td><td>99.6%</td><td>99.4%</td><td>91.5%</td><td>90.5%</td></tr>
<tr><td>Overall</td><td>98.2%</td><td>99.1%</td><td>98.9%</td><td>89.2%</td><td>87.8%</td></tr>
</table>

Table 4: Next token prediction on programs with different lengths

Next terminal prediction. The results are presented in Table 3. We observe that an ensemble of 5 NTN2T models outperforms Raychev et al. (2016a) by 0.5 points. Without the ensemble, its accuracies are around 82.1%, i.e., 0.8 points below Raychev et al. (2016a). For the 5 single models, the variance in accuracy is again very small, i.e., 0.1%. On the other hand, NT2NT performs much worse than NTN2T, i.e., by 4.8 percentage points. This shows that leveraging the additional information about the parent non-terminal of the terminal being predicted can improve performance significantly.

Prediction accuracies on programs of different lengths. We examine our models' performance over different subsets of the test set. In particular, we consider the queries in programs containing no more than 30,000 tokens, the same setting as Bielik et al. (2016) and Raychev et al. (2016a), as well as the remaining queries in programs with more than 30,000 tokens. The results are presented in Table 4. We observe that for both non-terminal and terminal prediction, accuracies on longer programs are higher than on shorter programs. This suggests that an LSTM-based model may become more accurate as it observes more code inputted by the programmer.

We also report top 5 prediction accuracy, which improves upon top 1 accuracy dramatically. This metric corresponds to the code completion scenario in which an IDE pops up a small list of (i.e., 5) candidates for the user to choose from. In particular, NT2N achieves 99.1% top 5 accuracy on the non-terminal prediction task, and NTN2T achieves 89.2% on the terminal prediction task. In the test set, 7.4% of the tokens have ground truth UNK, i.e., fall outside the 50,000 most frequent tokens. This means that, within its top 5 candidates, NTN2T covers \( 89.2\% / (100\% - 7.4\%) = 96.3\% \) of all tokens whose ground truth is not UNK.
Therefore, users can choose from the popup list rather than typing the token manually over 96% of the time that code completion is possible, if completion is restricted to the top 50,000 most frequent tokens in the dataset.

The effectiveness of different UNK thresholds. We evaluate how the threshold for mapping rare terminals to UNK affects accuracy. We randomly sample 1/10 of the training set and of the test set, and vary the UNK threshold from 10,000 to 80,000. We plot the percentage of UNK terminals in both the full test set and its sampled subset in Figure 8. The distributions of UNK terminals are almost the same in both sets. Further, when the threshold is 10,000, i.e., all terminals outside the top 10,000 most frequent are turned into UNK, more than 11% of the queries in the test set are UNK queries (i.e., queries whose ground truth is UNK). When the threshold increases to 50,000 or more, this number drops to 7%-6%, and the percentage of UNK queries varies little as the threshold moves from 50,000 to 80,000.

Figure 8: Percentage of UNK tokens in the entire test data and the sampled subset of the test data, varying the UNK threshold from 10,000 to 80,000.

We train one NTN2T model for each threshold and evaluate it on the sampled test set. The accuracies of the different models are plotted in Figure 9. The trend of the models' accuracies is similar to the trend of the percentage of non-UNK tokens in the test set. This is expected: as the threshold increases, the model has more chances to make correct predictions for queries that would otherwise be UNK. However, this is not always the case. For example, the accuracies of the models trained with thresholds 30,000 and 40,000 are almost the same; the difference is only 0.02%. We make similar observations for the models trained with thresholds 60,000, 70,000, and 80,000. Recall from above that when we train 5 models with different random initializations, the variance of their accuracies is within 0.1%. We therefore conclude that increasing the UNK threshold from 30,000 to 40,000 or from 60,000 to 80,000 does not change accuracy significantly. One potential explanation is that, with a larger UNK threshold, a model has more chances to predict otherwise-UNK terminals, but is also more likely to make mistakes because it must choose the next terminal from more candidates.

Figure 9: Accuracies of different models trained over the sampled subset of training data, varying the UNK threshold from 10,000 to 80,000.

5.4 JOINT PREDICTION

In this section, we evaluate different approaches to predicting the next non-terminal and terminal together for the joint prediction task. NT2NT is designed for this task. Alternative approaches predict the next non-terminal first, and then predict the next terminal based on the predicted non-terminal. We choose NTN2T as the second step, and examine two approaches for the first step: N2N and NT2N. We therefore compare three methods in total. The top 1 accuracy results are presented in Table 5.

<table>
<tr><th></th><th>NT2NT</th><th>N2N+NTN2T</th><th>NT2N+NTN2T</th></tr>
<tr><th>Top 1 accuracy</th><td>73.9%</td><td>72.0%</td><td>77.7%</td></tr>
</table>

Table 5: Predicting non-terminal and terminal together
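Before turning to the numbers, the two-step X+NTN2T decoding can be sketched as follows, reusing the hypothetical model interfaces from the earlier sketches (step 1 greedily picks the argmax non-terminal; the paper does not specify tie-breaking or beam handling):

```python
def joint_predict(nt2n, ntn2t, n_ids, t_ids):
    """Two-step joint prediction: X+NTN2T with X = NT2N."""
    # Step 1: predict the next non-terminal from the (N_i, T_i) sequence.
    n_logits, _ = nt2n(n_ids, t_ids)
    n_hat = n_logits.argmax(dim=-1)
    # Step 2: condition the terminal prediction on the predicted non-terminal N_{k+1}.
    t_logits = ntn2t(n_ids, t_ids, n_hat)
    return n_hat, t_logits.argmax(dim=-1)
```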
As expected, N2N+NTN2T is less effective than NT2N+NTN2T, since N2N is a less effective first step than NT2N, as shown in Table 4. NT2NT's performance is better than N2N+NTN2T's, but worse than NT2N+NTN2T's. We observe that for all three combinations,

\[ \Pr(\hat{T}_{k+1} = T_{k+1} \land \hat{N}_{k+1} = N_{k+1}) > \Pr(\hat{T}_{k+1} = T_{k+1}) \Pr(\hat{N}_{k+1} = N_{k+1}) \]

This indicates that the events of the next non-terminal and the next terminal being predicted correctly are not independent but strongly correlated. This holds even for NT2NT, which predicts the next non-terminal and the next terminal independently conditioned on the LSTM hidden state.

5.5 DENYING PREDICTION

We compare the models that do not deny prediction (i.e., NT2NT and NTN2T) with those that do (i.e., NT2NT+D and NTN2T+D). Results are presented in Table 6. For reference, 7.42% of the queries in the test set are UNK queries.

<table>
<tr><th></th><th>NT2NT</th><th>NT2NT+D</th><th>NTN2T</th><th>NTN2T+D</th></tr>
<tr><td>Overall accuracy</td><td>81.2%</td><td>85.1%</td><td>85.4%</td><td>89.9%</td></tr>
<tr><td>Accuracy on non-UNK terminals</td><td>87.6%</td><td>87.5%</td><td>92.2%</td><td>91.8%</td></tr>
<tr><td>Deny prediction rate</td><td>0%</td><td>5.2%</td><td>0%</td><td>6.1%</td></tr>
</table>

Table 6: Deny prediction results. **Overall (top 1) accuracy** is the percentage of all queries (including those whose ground truth is UNK) that are predicted correctly, i.e., the prediction matches the ground truth even when the ground truth is UNK. **Accuracy on non-UNK terminals** measures each model's accuracy on all non-UNK terminals. **Deny prediction rate** is the percentage of all queries for which a model denies prediction, i.e., predicts UNK.

We observe that the deny prediction models (i.e., the +D models) have higher overall accuracies than the corresponding original models. This is expected: the +D models are allowed to predict UNK terminals, so while NT2NT and NTN2T fail on all UNK queries, the +D models succeed on most of them. We further evaluate accuracy on non-UNK terminals. One may expect that, since +D models may prefer to predict UNK, a standard model should have higher accuracy on non-UNK terminals than its deny-prediction counterpart. The results show that this is indeed the case, but the margin is very small, i.e., 0.1% for NT2NT and 0.3% for NTN2T. This means that allowing a model to deny prediction does not necessarily sacrifice its ability to predict non-UNK terminals.

We are also interested in how often a +D model denies prediction. NTN2T+D denies prediction for only 6.1% of all queries, which is less than the percentage of UNK queries (i.e., 7.42%). This shows that although we allow the model to deny prediction, it is conservative in exercising this privilege, which partially explains why NTN2T+D's accuracy on non-UNK terminals is not much lower than NTN2T's.
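For concreteness, the three row metrics of Table 6 can be computed as in the following NumPy sketch (illustrative names; it assumes predictions and ground truths are integer id arrays and that UNK has a known id):

```python
import numpy as np

def deny_metrics(preds, targets, unk_id):
    """Overall accuracy, accuracy on non-UNK terminals, and deny rate (cf. Table 6)."""
    preds, targets = np.asarray(preds), np.asarray(targets)
    overall = float((preds == targets).mean())
    non_unk = targets != unk_id
    acc_non_unk = float((preds[non_unk] == targets[non_unk]).mean())
    deny_rate = float((preds == unk_id).mean())   # predicting UNK denies prediction
    return overall, acc_non_unk, deny_rate
```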
**Effectiveness of the value of \( \alpha \).** We are interested in how the hyperparameter \( \alpha \) in a +D model affects its accuracy. We train 11 NTN2T+D models on the 1/10 subset of the training set used above to examine UNK thresholds, varying \( \alpha \) from 0.0 to 1.0. Note that with \( \alpha = 0.0 \), the model becomes a standard NTN2T model. We plot both overall accuracies and accuracies on non-UNK terminals in Figure 10. We observe the same effects as above: (1) the overall accuracy for \( \alpha = 1 \) is 6% higher than for \( \alpha = 0 \); and (2) the accuracy on non-UNK terminals for \( \alpha = 1 \) is lower than for \( \alpha = 0 \), but the margin is not large (less than 1%). As we increase \( \alpha \) from 0 to 0.3, the overall accuracy increases steeply; increasing \( \alpha \) further, the overall accuracy levels off, and the same holds for the accuracy on non-UNK terminals. This experiment shows that setting \( \alpha \) trades off overall accuracy against accuracy on non-UNK terminals, and the right choice of \( \alpha \) depends on the application.

Figure 10: Overall accuracies and accuracies on non-UNK terminals by varying \( \alpha \).

5.6 RUNTIME

We evaluate our models' runtime performance. Our models are implemented in TensorFlow (Abadi et al., 2016). We evaluate them on a machine with 16 Intel Xeon CPUs, 16 GB of RAM, and a single Tesla K80 GPU. All queries from the same program are processed incrementally: given two queries \( A \) and \( B \), where \( A \) has one more node than \( B \), the LSTM outputs for \( B \) are reused when processing \( A \), so that only the additional node in \( A \) needs to be processed. Note that this is consistent with the practice of writing programs incrementally from beginning to end. For each model, we feed one query at a time into the model. There are 3,939 queries in total, coming from randomly chosen programs. We measure the overall response latency for each query and observe that it is consistent across queries. On average, each model takes around 16 milliseconds to respond to a query on the GPU, and around 33 milliseconds on the CPU. Note that these numbers come from a proof-of-concept implementation that we have not optimized. Considering that a human typically takes longer than 30 milliseconds to type a token, we conclude that our approach is efficient enough for potential practical use. We emphasize that these numbers do not directly correspond to the runtime latency of a deployed code completion engine, since changes to the AST serialization may not be sequential while users are programming incrementally; this analysis only provides evidence of the feasibility of building our approach into a full-fledged code completion engine.

6 CONCLUSION

In this paper we introduce, motivate, and formalize the problem of automatic code completion. We describe LSTM-based approaches that capture the parsing structure readily available in the code completion task. We introduce a simple LSTM architecture to model program context, and explore several variants of this basic architecture for different variants of the code completion problem. We evaluate our techniques on a challenging JavaScript code completion benchmark and compare against the state-of-the-art code completion approach.
We demonstrate that deep learning techniques can achieve better prediction accuracy by learning program patterns from big code. In addition, we find that our models perform better on longer programs than on shorter ones, and that when the code completion engine can pop up a list of candidates, our approach allows users to choose from the list instead of typing the token in over 96% of the cases where this is possible. We also evaluate our approaches' runtime performance and demonstrate that deep code completion has the potential to run in real time as users type. We believe that deep learning techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their valuable comments. This material is based upon work partially supported by the National Science Foundation under Grant No. TWC-1409915 and DARPA grant FA8750-15-2-0104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or DARPA.

REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Miltiadis Allamanis and Charles Sutton. Mining idioms from source code. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 472-483. ACM, 2014.

Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Suggesting accurate method and class names. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, pp. 38-49. ACM, 2015.

I. Beltagy and Chris Quirk. Improved semantic parsers for if-then statements. In ACL, 2016.

Pavol Bielik, Veselin Raychev, and Martin Vechev. PHOG: Probabilistic model for code. In ICML, 2016.

Xinyun Chen, Chang Liu, Richard Shin, Dawn Song, and Mingcheng Chen. Latent attention for if-then program synthesis. In NIPS, 2016.

Michael Collins. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589-637, 2003.

Li Dong and Mirella Lapata. Language to logical form with neural attention. In ACL, 2016.

Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In 2012 34th International Conference on Software Engineering (ICSE), pp. 837-847. IEEE, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980

Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 639-646, 2010.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. CoRR, abs/1603.06744, 2016. URL http://arxiv.org/abs/1603.06744

Chris J. Maddison and Daniel Tarlow. Structured generative models of natural source code. In ICML, 2014.

Tung Thanh Nguyen, Anh Tuan Nguyen, Hoan Anh Nguyen, and Tien N. Nguyen.
A statistical semantic language model for source code. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, pp. 532-542. ACM, 2013.

Veselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models. In ACM SIGPLAN Notices, volume 49, pp. 419-428. ACM, 2014.

Veselin Raychev, Pavol Bielik, and Martin Vechev. Probabilistic model for code with decision trees. In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 731-747. ACM, 2016a.

Veselin Raychev, Pavol Bielik, Martin Vechev, and Andreas Krause. Learning programs from noisy data. In POPL, 2016b.

Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 269-280. ACM, 2014.

Martin White, Christopher Vendome, Mario Linares-Vásquez, and Denys Poshyvanyk. Toward deep learning software repositories. In 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories, 2015.
ABSTRACT Code completion, an essential part of modern software development, yet can be challenging for dynamically typed programming languages. In this paper we explore the use of neural network techniques to automatically learn code completion from a large corpus of dynamically typed JavaScript code. We show different neural networks that leverage not only token level information but also structural information, and evaluate their performance on different prediction tasks. We demonstrate that our models can outperform the state-of-the-art approach, which is based on decision tree techniques, on both next non-terminal and next terminal prediction tasks by 3.8 points and 0.5 points respectively. We believe that neural network techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction. 1 INTRODUCTION As the scale and complexity of modern software libraries and tools continue to grow, code completion has become an essential feature in modern integrated development environments (IDEs). By suggesting the right libraries, APIs, and even variables in real-time, intelligent code completion engines can substantially accelerate software development. Furthermore, as many projects move to dynamically typed and interpreted languages, effective code completion can help to reduce costly errors by eliminating typos and identifying the right arguments from context. However, existing approaches to intelligent code completion either rely on strong typing (e.g., Visual Studio for C++), which limits their applicability to widely used dynamically typed languages (e.g., JavaScript and Python), or are based on simple heuristics and term frequency statistics which are often brittle and are relatively error-prone. In particular, Raychev et al. (2016) proposes the state-of-the-art probabilistic model for code, which generalizes both simple \( n \)-gram models and probabilistic grammar approaches. This approach, however, examines only a limited number of elements in the source code when completing the code. Therefore, the effectiveness of this approach may not scale well to large programs. In this paper we explore the use of deep learning techniques to address the challenges of code completion for the widely used and dynamically typed JavaScript programming language. We formulate the code completion problem as a sequential prediction task over the traversal of a parse-tree structure consisting of both non-terminal structural nodes and terminal nodes encoding program text. We then present simple, yet expressive, LSTM-based (Hochreiter & Schmidhuber (1997)) models that leverage additional side information obtained by parsing the program structure. Compared to widely used heuristic techniques, deep learning for code completion offers the opportunity to learn rich contextual models that can capture language and even library specific code patterns without requiring complex rules or expert intervention. We evaluate our recurrent neural network architecture on an established benchmark dataset for the JavaScript code completion. Our evaluations reveal several findings: (1) when evaluated on short programs, our RNN-based models can achieve better performance on the next node prediction tasks compared to the prior art (Bielik et al. (2016); Raychev et al. 
(2016)), which are based on decision-tree models; (2) our models’ prediction accuracies on longer programs, which is provided in the test set, but were not evaluated upon by previous work, are better than our models’ accuracies on shorter *The first and second authors contributed equally and are listed in an alphabetical order. Figure 1: Code Completion Example in IntelliJ IDEA Figure 2: Correct prediction of the program in Figure 1 programs; and (3) in the scenario that the code completion engine suggests a list of candidates, our RNN-based models allow users to choose from a list of 5 candidates rather than inputting manually for over 96% of all time when this is possible. These promising results encourage more investigation into developing neural network approaches for the code completion problem. We believe that our work not only highlights the importance of the field of neural network-based code completion, but is also an important step toward neural network-based program synthesis. 2 RELATED WORK Existing approaches that build probabilistic models for code can typically be categorized as n-gram models (Hindle et al., 2012; Nguyen et al., 2013; Tu et al., 2014), probabilistic grammars (Collins, 2003; Allamanis & Sutton, 2014; Allamanis et al., 2015; Maddison & Tarlow, 2014; Liang et al., 2010), and log-bilinear models (Allamanis et al., 2015). Bielik et al. (2016) generalizes the PCFG approach and n-gram approach, while Raychev et al. (2016a) further introduces decision tree approaches to generalize Bielik et al. (2016). Raychev et al. (2014) and White et al. (2015) explore how to use recurrent neural networks (RNNs) to facilitate the code completion task. However, these works only consider running RNNs on top of a token sequence to build a probabilistic model. Although the input sequence considered in Raychev et al. (2014) is produced from an abstract object, the structural information contained in the abstract syntax tree is not directly leveraged by the RNN structure in both of these two works. In contrast, we consider extending LSTM, a RNN structure, to leverage the structural information directly for the code prediction task. Recently there has been an increasing interest in developing neural networks for program synthesis (Ling et al., 2016; Beltagy & Quirk (2016); Dong & Lapata (2016); Chen et al., (2016)). These works all consider synthesizing a program based on inputs in other formats such as images or natural language descriptions. 3 CODE COMPLETION VIA BIG CODE In this section, we first introduce the problem of code completion and its challenges. Then we explain abstract syntax trees (AST), which we use as the input for our problems. Lastly, we formally define the code completion problem in different settings as several prediction problems based on a partial AST. 3.1 CODE COMPLETION: AN EXAMPLE Code completion is a feature in some integrated development environments (IDEs) to speed up programmers’ coding process. Figure 1 demonstrates this feature in IntelliJ IDEA. In this example, a part of a JavaScript program has been input to the IDE. When the dot symbol (i.e., “.”) is added after _webpack_require_, the IDE prompts with a list of candidates that the programmer is most likely to input next. When a candidate matches the intention, the programmer can choose it from the list rather than typing it manually. In this work, we define the code completion problem as predicting the next symbol while a program is being written. 
We consider this problem as an important first step toward completing an entire program. Traditional code completion techniques are developed by the programming language community to leverage context information for prediction. For example, when a programmer writes a Java program and inputs a variable name and then a dot symbol, the code completion engine will analyze the class of the variable and prompt the members of the class. In programming language literature, such information is referred to as type information. Statically typed languages, such as C and Java, enforces type checking at static time, so that the code completion engine can take advantage of full type information to make prediction without executing the code. In recent years, dynamically typed languages, such as Python or JavaScript, have become increasingly popular. In these languages, type checking is usually performed dynamically while executing a program. Thus, type information may be only partially available to the code completion engine while the programmer is writing the code. Despite their popularity, the dynamic typing of these languages makes code completion for them challenging. For example, in Figure 1, the next symbol to be added is p. This symbol does not appear in the previous part of the program, and thus the code completion engine in IntelliJ IDEA IDE cannot prompt with this symbol. However, this challenge may be remedied by leveraging a large corpus of code, a.k.a., big code. In fact, _webpack_require_.p is a frequently used combination appearing in many programs on Github.com, one of the largest repositories of source code. Therefore, a code completion engine powered by big code is likely to learn this combination and to prompt p. In fact, our methods discussed in later sections can predict this case very well (Figure 2). 3.2 ABSTRACT SYNTAX TREE Regardless of whether it is dynamically typed or statically typed, any programming language has an unambiguous context free grammar (CFG), which can be used to parse source code into an abstract syntax tree (AST). Further, an AST can be converted back into source code easily. Therefore we consider the input of our code completion problem as an AST, which is a typical assumption made by most code completion engines. An AST is a rooted tree. In an AST, each non-leaf node corresponds to a non-terminal in the CFG specifying structure information. In JavaScript, non-terminals may be ExpressionStatement, ForStatement, IfStatement, SwitchStatement, etc. Each leaf node corresponds to a terminal in the CFG encoding program text. There are infinite possibilities for terminals. They can be variable names, string or numerical literals, operators, etc. Figure 3 illustrates a part of the AST of the code snippet in Figure 1. In this tree, a node without a surrounding box (e.g., ExpressionStatement, etc.) denotes a non-terminal node. A node embraced by an orange surrounding box (e.g., installedModules) denotes a terminal node. At the bottom of the figure, there is a non-terminal node Property and a terminal node p. They have not been observed by the editor, so we use green to indicate this fact. Note that each non-terminal has at most one terminal as its child. In a traditional code completion engine, the AST can be further processed by a type checker so that type information will be attached to each node. In this work, however, we focus on dynamically typed languages, and type information is not always available. 
Therefore, we do not consider the type information provided by a compiler, and leave it for future work. 3.3 PROBLEM SETUP In this work, we consider the input to be a partial AST, and the code completion problem is to predict the next node given the partial AST. In the following, we first define a partial AST, and then present the code completion problems in different scenarios. Input: a partial AST. Given a complete AST \( T \), we define a partial AST to be a subtree \( T' \) of \( T \), such that for each node \( n \) in \( T' \), its left set \( L_T(n) \) with respect to \( T \) is a subset of \( T' \), i.e., \( L_T(n) \subseteq T' \). Here, the left set \( L_T(n) \) of a node \( n \) with respect to \( T \) is defined as the set of all nodes visited earlier than \( n \) in the in-order depth-first traversal of \( T \). Under this definition, in each partial AST \( T' \), there exists a right-most node \( n_R \), such that all other nodes in \( T' \) form its left set \( L_T(n_R) \). The next node in the in-order depth-first search visiting sequence after \( n_R \) is also the first node not appearing in \( T' \). We call this node the next node following the partial AST. Figure 4 illustrates these concepts using the example in Figure 3. In the rest of the paper, we also refer to a partial AST as a query. Next node prediction. Given a partial AST, the next node prediction problem, as suggested by its name, is to predict the next node following the partial AST. Based on the node’s kind, i.e., whether it is a non-terminal node or a terminal one, we can categorize the problem into the next non-terminal prediction problem and the next terminal prediction problem. Although the next terminal prediction problem may sound more interesting, the next non-terminal prediction problem is also important, since it predicts the structure of the program. For example, when the next non-terminal is ForStatement, the next token in the source program is the keyword for, which does not have a corresponding terminal in the dataset. In this case, a model able to predict the next non-terminal can be used by the code completion engine to emit the keyword for. These two tasks are also the same problems considered by previous works employing domain specific languages to achieve heuristic-based code completion (Raychev et al. (2016b); Bielik et al. (2016)). Predicting the next node versus predicting the next token. A natural alternative formulation of the problem is predicting the next token given the token sequence input so far. Such a formulation, however, does not take advantage of the AST information, which is very easy to acquire with a suitable parser. Predicting the next node allows taking advantage of such information to enable more intelligent code completion. In particular, predicting the next non-terminal allows completing the structure of a code block rather than a single (keyword) token. For example, when the next token is the keyword for, the corresponding next non-terminal is ForStatement, which corresponds to the following code block: for( ___ ; ___ ; ___ ) { // for-loop body } In this case, successfully predicting the next non-terminal node allows completing not only the next key token for, but also tokens such as (, ;, ), {, and }. Such structure completion enabled by predicting the next non-terminal is more compelling in modern IDEs.
Predicting the next terminal node allows completing identifiers, properties, literals, etc., which is similar to next token prediction. However, predicting the next terminal node can leverage the information of the predicted node’s non-terminal parent, which indicates what is being predicted, i.e., an identifier, a property, a literal, etc. For example, when completing the following expression: __webpack_require_. the code completion engine with AST information will predict a property of __webpack_require_, while the engine without AST information only sees two tokens, __webpack_require_ and a dot ". ", and tries to predict the next token without any constraint. In our evaluation, we show that leveraging the information from the non-terminal parent can significantly improve the performance. In this work, we focus on the next node prediction task, and leave the comparison with next token prediction for future work. Joint prediction. A more important problem than predicting only the next non-terminal or terminal by itself is to predict the next non-terminal and terminal together. We refer to this task of predicting both the next non-terminal and terminal as the joint prediction problem. We hope code completion can eventually be used to generate the entire parse tree, and joint prediction is one step further toward this goal than next node prediction. Formally, the joint prediction problem that we consider is that, given a partial AST whose following node is a non-terminal one, we want to predict both the next non-terminal and the next terminal. There may be non-terminal nodes which do not have a terminal child (e.g., AssignmentStatement). In this case, we artificially add an EMPTY terminal as its child. Note that this treatment is the same as in Bielik et al. (2016). We count it as a correct prediction if both the next non-terminal and the next terminal are predicted correctly. Denying prediction. There are infinitely many possible terminals, so it is impossible to predict all terminals correctly. We consider an alternative scenario in which the code completion engine, when it believes that the programmer will input a rare terminal, has the ability to identify this case and deny predicting the next node(s). In our problem, we build a vocabulary of frequent terminals. All terminals not in this vocabulary are treated as an UNK terminal. In this case, when it predicts UNK for the next terminal, the code completion model is considered to deny prediction. Since the vocabulary of non-terminals is very small, denying prediction is considered only for next terminal prediction, not for next non-terminal prediction. 4 MODELS In this section, we present the basic models considered in this work. In particular, given a partial AST as input, we first convert the AST into its left-child right-sibling representation, and serialize it as its in-order depth-first search sequence. Thus, we consider the input for next non-terminal prediction to be a sequence of length k, i.e., \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\). Here, for each \(i\), \(N_i\) is a non-terminal, and \(T_i\) is the terminal child of \(N_i\). For each non-terminal node \(N_i\), we encode not only its kind, but also whether the non-terminal has at least one non-terminal child and/or one right sibling. In doing so, we can reconstruct the original AST from an input sequence. This encoding is also employed by Raychev et al. (2016a). We refer to each element in the sequence (e.g., \((N_i, T_i)\)) as a token.
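To make this encoding concrete, the following is a minimal sketch of the serialization step (our own illustration, not the paper's code; the Node class and its fields are hypothetical):

```python
# Serialize an AST into the (N_i, T_i) sequence described above. `Node` is a
# hypothetical class with fields `kind` (the non-terminal's name), `value`
# (its terminal child, or None), and `children` (its non-terminal children).

def serialize(root):
    """Depth-first traversal producing (non-terminal, terminal) tokens."""
    tokens = []

    def visit(node, has_right_sibling):
        # Encode the kind together with the two structural bits, so that the
        # original tree can be reconstructed from the flat sequence.
        non_terminal = (node.kind,
                        len(node.children) > 0,  # has a non-terminal child?
                        has_right_sibling)       # has a right sibling?
        terminal = node.value if node.value is not None else "EMPTY"
        tokens.append((non_terminal, terminal))
        for i, child in enumerate(node.children):
            visit(child, i < len(node.children) - 1)

    visit(root, False)
    return tokens
```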
As mentioned above, a non-terminal without a terminal child is considered to have an EMPTY child. This input sequence \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\) is the only input for all problems except next terminal prediction. For the next terminal prediction problem, besides the input sequence, we also have information about the parent of the terminal being predicted, which is a non-terminal, i.e., \(N_{k+1}\). Throughout the rest of the discussion, we assume that both \(N_i\) and \(T_i\) employ one-hot encodings. The vocabulary sets of non-terminals and terminals are separate. 4.1 NEXT NON-TERMINAL PREDICTION Given an input sequence, our first model predicts the next non-terminal. The architecture is illustrated in Figure 5. We refer to this model as NT2N, which stands for using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal. Figure 5: Architecture (NT2N) for predicting the next non-terminal. We first explain each layer of NT2N, and then introduce two variants of this model. Embedding non-terminal and terminal. Given an input sequence, the embedding of each token is computed as \[ E_i = AN_i + BT_i \] where \( A \) is a \( J \times V_N \) matrix and \( B \) is a \( J \times V_T \) matrix. Here \( J \) is the size of the embedding vector, and \( V_N \) and \( V_T \) are the vocabulary sizes of non-terminals and terminals respectively. LSTM layer. The embedded sequence is then fed into an LSTM layer to get the hidden state. In particular, an LSTM cell takes an input token and a hidden state \( h_{i-1}, c_{i-1} \) from the previous LSTM cell as input, computes a hidden state \( h_i, c_i \), and outputs \( h_i \), based on the following formulas: \[ \begin{pmatrix} q \\ f \\ o \\ g \end{pmatrix} = \begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix} P_{4J,2J} \begin{pmatrix} x_i \\ h_{i-1} \end{pmatrix} \] \[ c_i = f \odot c_{i-1} + q \odot g \] \[ h_i = o \odot \tanh(c_i) \] Here, \( P_{4J,2J} \) denotes a \( 4J \times 2J \) parameter matrix, where \( J \) is the size of the hidden state, i.e., the dimension of \( h_i \), which is equal to the size of the embedding vectors. \( \sigma \) and \( \odot \) denote the sigmoid function and pointwise multiplication respectively. Softmax layer. Assume \( h_k \) is the output hidden state of the last LSTM cell. \( h_k \) is fed into a softmax classifier to predict the next non-terminal. In particular, we have \[ \hat{N}_{k+1} = \text{softmax}(W_N h_k + b_N) \] where \( W_N \) and \( b_N \) are a matrix of size \( V_N \times J \) and a \( V_N \)-dimensional vector respectively. Using only non-terminal inputs. One variant of this model omits all terminal information from the input sequence. In this case, the embedding is computed as \( E_i = AN_i \). We refer to this model as N2N, which stands for using the Non-terminal sequence TO predict the next Non-terminal. Predicting the next terminal and non-terminal together. Based on NT2N, we can predict not only the next non-terminal but also the next terminal, using \[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + b_T) \] where \( W_T \) and \( b_T \) are a matrix of size \( V_T \times J \) and a \( V_T \)-dimensional vector respectively. In this case, the loss function has an extra term to give supervision on predicting \( \hat{T} \). We refer to this model as NT2NT, which stands for *using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal and Terminal pair*.
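As a concrete illustration of NT2N and its variants, the following is a minimal sketch in PyTorch (our own; the paper's implementation uses TensorFlow, and all names and sizes here are illustrative):

```python
import torch
import torch.nn as nn

class NT2N(nn.Module):
    """Minimal sketch of NT2N: embed (N_i, T_i) pairs, run an LSTM, and
    classify the next non-terminal from the last hidden state."""

    def __init__(self, v_n, v_t, j):
        super().__init__()
        self.embed_n = nn.Embedding(v_n, j)  # plays the role of matrix A
        self.embed_t = nn.Embedding(v_t, j)  # plays the role of matrix B
        self.lstm = nn.LSTM(j, j, batch_first=True)
        self.out_n = nn.Linear(j, v_n)       # W_N and b_N

    def forward(self, n_ids, t_ids, state=None):
        # E_i = A N_i + B T_i: one-hot products realized as embedding lookups.
        e = self.embed_n(n_ids) + self.embed_t(t_ids)
        h, state = self.lstm(e, state)
        # h[:, -1, :] is h_k, the hidden state after the last input token.
        return self.out_n(h[:, -1, :]), state
```

Training would apply a softmax cross-entropy loss to the returned logits; dropping the terminal embedding gives N2N, and adding a second linear head over \( h_k \) gives NT2NT.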
4.2 NEXT TERMINAL PREDICTION In the next terminal prediction problem, the partial AST contains not only \((N_1, T_1), ..., (N_k, T_k)\), but also \(N_{k+1}\). In this case, we can employ the architecture in Figure 6 to predict \(T_{k+1}\). In particular, we first get the LSTM output \(h_k\) in the same way as in NT2N. The final prediction is based on \[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + W_{NT} N_{k+1} + b_T) \] where \(W_{NT}\) is a matrix of size \(V_T \times V_N\), and \(W_T\) and \(b_T\) are the same as in NT2NT. We refer to this model as NTN2T, which stands for *using the Non-terminal and Terminal pair sequence and the next Non-terminal TO predict the next Terminal*. Note that the model NT2NT can also be used for the next terminal prediction task, although the non-terminal information \(N_{k+1}\) is not leveraged. We will compare the two approaches later. 4.3 JOINT PREDICTION We consider two approaches to predict the next non-terminal and the next terminal together. The first approach is NT2NT, which is designed to predict the two kinds of nodes together. An alternative approach is to (1) use a next non-terminal approach \(X\) to predict the next non-terminal; and (2) feed the predicted non-terminal and the input sequence into NTN2T to predict the next terminal. We refer to such an approach as X+NTN2T. 4.4 DENYING PREDICTION We say a model *denies prediction* when it predicts the next terminal to be UNK, a special terminal substituted for rare terminals. However, due to the large number of rare terminals, the occurrences of UNK may be much more frequent than those of any single frequent terminal. In this case, a model that can deny prediction may tend to predict UNKs, and thus may make predictions for fewer queries than it should. To mitigate this problem, we modify the loss function to be adaptive. Specifically, training a machine learning model \(f_\theta\) amounts to optimizing the following objective: \[ \arg\min_\theta \sum_i l(f_\theta(q_i), y_i) \] where \(\{(q_i, y_i)\}\) is the training dataset consisting of pairs of a query \(q_i\) and its ground truth next token \(y_i\), and \(l\) is the loss function measuring the distance between the prediction \(\hat{y}_i = f_\theta(q_i)\) and the ground truth \(y_i\). We choose \(l\) to be the standard cross-entropy loss. We introduce a weight \( \alpha_i \) for each sample \( (q_i, y_i) \) in the training dataset to change the objective to the following: \[ \arg\min_{\theta} \sum_i \alpha_i l(f_\theta(q_i), y_i) \] When training a model not allowed to deny prediction, we set \( \alpha_i = 0 \) for \( y_i = \text{UNK} \), and \( \alpha_i = 1 \) otherwise. In doing so, it is equivalent to removing all queries whose ground truth next token is UNK. When training a model that is allowed to deny prediction, we set all \( \alpha_i \) to 1. To denote this case, we append the suffix “+D” to the model name (e.g., NT2NT+D). 5 EVALUATION 5.1 DATASET We use the JavaScript dataset[2] provided by Raychev et al. (2016b) to evaluate the different approaches. The statistics of the dataset can be found in Table 1. <table> <tr> <th colspan="2">Training set</th> <th colspan="2">Test set</th> <th colspan="2">Overall</th> </tr> <tr> <th>Programs</th> <td>100,000</td> <th>Programs</th> <td>50,000</td> <th>Non-terminals</th> <td>44</td> </tr> <tr> <th>Queries</th> <td>1.7 \times 10^8</td> <th>Queries</th> <td>8.3 \times 10^7</td> <th>Terminals</th> <td>3.1 \times 10^6</td> </tr> </table> Table 1: Statistics of the dataset Raychev et al.
(2016a) provides an approach, called PHOG, for next token prediction. Its reported accuracy results are based on a subset of \( 5.3 \times 10^7 \) queries from the full test set. Specifically, Raychev et al. (2016a) chose all queries in each program containing fewer than 30,000 tokens[7]. When we compare with their results, we use the same test set. Otherwise, unless specified, our results are based on the full test set consisting of \( 8.3 \times 10^7 \) queries. 5.2 TRAINING DETAILS Vocabulary. In our dataset, there are 44 different kinds of non-terminals. Combined with two more bits of information indicating whether the non-terminal has a child and/or a right sibling, there are at most 176 different non-terminals. However, not all such combinations are possible: a ForStatement must have a child, for example. In total, the vocabulary size for non-terminals is 97. For terminals, we sort all terminals in the training set by their frequencies. We then choose the 50,000 most frequent terminals to build the vocabulary. We further add three special terminals: UNK for out-of-vocabulary tokens, EOF indicating the end of a program, and EMPTY for a non-terminal that does not have a terminal child. Note that about 45% of the terminals in the dataset are EMPTY terminals. Training details. We use a single-layer LSTM network with a hidden unit size of 1500 as our base model. To train the model, we use Adam (Kingma & Ba (2014)) with base learning rate 0.001. The learning rate is multiplied by 0.9 every 0.2 epochs. We clip the gradients’ norm to 5. The batch size is \( b = 80 \). We use truncated backpropagation through time, unrolling the LSTM model \( s = 50 \) times to take an input sequence of length 50 in each batch (therefore each batch contains \( b \times s = 4000 \) tokens). We divide each program into segments consisting of \( s \) consecutive tokens. The last segment of a program, which may not be full, is padded with EOF tokens. We coalesce multiple epochs together. We organize all training data into \( b \) buckets. In each epoch, we randomly shuffle all programs in the training data to construct a queue. Whenever a bucket is empty, a program is popped from the queue and all segments of the program are inserted into the empty bucket sequentially. When the queue becomes empty, i.e., the current epoch finishes, all programs are re-shuffled randomly to reconstruct the queue. Each mini-batch is formed by \( b \) segments, i.e., one segment popped from each bucket. When the training data has been shuffled \( e = 8 \) times, i.e., \( e \) epochs have been inserted into the buckets, we stop adding whole programs, and start adding only the first segment of each program: when a bucket is empty, a program is chosen randomly, and its first segment is added to the bucket. We terminate the training process when all buckets are empty at the same time. [2] http://www.srl.inf.ethz.ch/js150 [7] This detail was not explained in the paper. We contacted the authors to confirm it. Figure 7: Training epoch illustration <table> <tr> <th>Categories</th> <th>Previous work<br>Raychev et al. (2016a)</th> <th>N2N</th> <th>NT2N</th> <th>NT2NT</th> </tr> <tr> <td>One model accuracy</td> <td></td> <td>79.4 ± 0.2%</td> <td>84.8 ± 0.1%</td> <td>84.0 ± 0.1%</td> </tr> <tr> <td>Ensemble accuracy</td> <td></td> <td>82.3%</td> <td>87.7%</td> <td>86.2%</td> </tr> </table> Table 2: Next non-terminal prediction results
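The feeding scheme above can be restated in a short sketch (a simplification of our own, not the authors' code; the final phase that refills empty buckets with first segments only is omitted for brevity):

```python
import random
from collections import deque

def segments_of(tokens, s, eof="EOF"):
    """Split one program into length-s segments, padding the last with EOF."""
    segs = [tokens[i:i + s] for i in range(0, len(tokens), s)]
    segs[-1] = segs[-1] + [eof] * (s - len(segs[-1]))
    return segs

def minibatches(programs, b=80, s=50, e=8):
    """Yield mini-batches of up to b segments, one popped from each bucket."""
    queue = deque()
    for _ in range(e):                    # coalesce e epochs into one queue
        order = list(programs)
        random.shuffle(order)             # reshuffle the programs per epoch
        queue.extend(order)
    buckets = [deque() for _ in range(b)]
    while True:
        for bucket in buckets:
            if not bucket and queue:      # refill an empty bucket
                bucket.extend(segments_of(queue.popleft(), s))
        if not any(buckets):              # all buckets empty: training ends
            return
        yield [bucket.popleft() for bucket in buckets if bucket]
```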
In other words, the training process ends only after all programs from the first 8 epochs have been trained on. This is illustrated in Figure 7. The hidden states are initialized with \( h_0, c_0 \), which are two trainable vectors. The hidden states of the LSTM from the previous segment are fed into the next one as input if both segments belong to the same program. Otherwise, the hidden states are reset to \( h_0, c_0 \). We observe that resetting the hidden states for every new program substantially improves performance. We initialize all parameters in \( h_0, c_0 \) to 0. All other parameters are initialized with values sampled uniformly at random from \([−0.05, 0.05]\). For each model, we train 5 sets of parameters using different random initializations. We evaluate the ensemble of the 5 models by averaging their softmax outputs. In our evaluation, we find that the ensemble improves the accuracy by 1 to 3 points in general. 5.3 NEXT NODE PREDICTION In this section, we present the results of our models on next node prediction, and compare them with their counterparts in Bielik et al. (2016), which is the state of the art on these tasks. We therefore use the same test set consisting of \( 5.3 \times 10^7 \) queries as in Bielik et al. (2016). In the following, we first report results of next non-terminal prediction and of next terminal prediction, and then evaluate our models’ performance on programs of different lengths. Next non-terminal prediction. The results are presented in Table 2. From the table, we can observe that both NT2N and NT2NT outperform Raychev et al. (2016a). In particular, an ensemble of 5 NT2N models improves on Raychev et al. (2016a) by 3.8 percentage points. We also report the average accuracies of the 5 single models and the variance among them. We observe that the variance is very small, i.e., \( 0.1\% - 0.2\% \). This indicates that the trained models’ accuracies are robust to random initialization. Among the neural network approaches, NT2NT’s performance is lower than NT2N’s, even though the former is provided with more supervision. This shows that given the limited capacity of the model, it may learn to trade off non-terminal prediction performance in favor of the terminal prediction task it additionally needs to perform. <table> <tr> <th rowspan="2">Categories</th> <th>Previous work</th> <th colspan="2">Our considered models</th> </tr> <tr> <th>Raychev et al.
(2016a)</th> <th>NT2NT</th> <th>NTN2T</th> </tr> <tr> <td>One model accuracy</td> <td>82.9%</td> <td>76.6 ± 0.1%</td> <td>81.9 ± 0.1%</td> </tr> <tr> <td>Ensemble accuracy</td> <td></td> <td>78.6%</td> <td>83.4%</td> </tr> </table> Table 3: Next terminal prediction results <table> <tr> <th rowspan="2">Top 1 accuracy</th> <th colspan="3">Non-terminal</th> <th colspan="2">Terminal</th> </tr> <tr> <th>N2N</th> <th>NT2N</th> <th>NT2NT</th> <th>NTN2T</th> <th>NT2NT</th> </tr> <tr> <td>Short programs (<30,000 non-terminals)</td> <td>82.3%</td> <td>87.7%</td> <td>86.2%</td> <td>83.4%</td> <td>78.6%</td> </tr> <tr> <td>Long programs (>30,000 non-terminals)</td> <td>87.7%</td> <td>94.4%</td> <td>92.7%</td> <td>89.0%</td> <td>85.8%</td> </tr> <tr> <td>Overall</td> <td>84.2%</td> <td>90.1%</td> <td>88.5%</td> <td>85.4%</td> <td>81.2%</td> </tr> <tr> <th colspan="6">Top 5 accuracy</th> </tr> <tr> <td>Short programs (<30,000 non-terminals)</td> <td>97.9%</td> <td>98.9%</td> <td>98.7%</td> <td>87.9%</td> <td>86.4%</td> </tr> <tr> <td>Long programs (>30,000 non-terminals)</td> <td>98.8%</td> <td>99.6%</td> <td>99.4%</td> <td>91.5%</td> <td>90.5%</td> </tr> <tr> <td>Overall</td> <td>98.2%</td> <td>99.1%</td> <td>98.9%</td> <td>89.2%</td> <td>87.8%</td> </tr> </table> Table 4: Next token prediction on programs of different lengths. Next terminal prediction. The results are presented in Table 3. We observe that an ensemble of 5 NTN2T models outperforms Raychev et al. (2016a) by 0.5 points. Without the ensemble, its accuracies are around 82.1%, i.e., 0.8 points less than Raychev et al. (2016a). For the 5 single models, we again observe that the variance in their accuracies is very small, i.e., 0.1%. On the other hand, we observe that NT2NT performs much worse than NTN2T, i.e., by 4.8 percentage points. This shows that leveraging additional information about the parent non-terminal of the terminal being predicted can improve the performance significantly. Prediction accuracies on programs of different lengths. We examine our models’ performance over different subsets of the test set. In particular, we consider the queries in programs containing no more than 30,000 tokens, which is the same subset as used in Bielik et al. (2016); Raychev et al. (2016a). We also consider the rest of the queries, from programs which have more than 30,000 tokens. The results are presented in Table 4. We can observe that for both non-terminal and terminal prediction, accuracies on longer programs are higher than on shorter programs. This shows that an LSTM-based model may become more accurate as it observes more code inputted by the programmer. We also report top 5 prediction accuracy. We can observe that the top 5 accuracy improves upon the top 1 accuracy dramatically. This metric corresponds to the code completion scenario in which an IDE pops up a list of a few (i.e., 5) candidates for users to choose from. In particular, NT2N achieves 99.1% top 5 accuracy on the non-terminal prediction task. On the other hand, NTN2T achieves 89.2% top 5 accuracy on the terminal prediction task. In the test set, 7.4% of the tokens have UNK as their ground truth, i.e., they are not among the top 50,000 most frequent tokens. This means that NTN2T can correctly predict \( 89.2\% / (100\% - 7.4\%) = 96.3\% \) of all tokens whose ground truth is not UNK.
This means that users can choose from the popup list, without typing the token manually, in over 96% of the cases in which code completion is possible, when completion is restricted to the top 50,000 most frequent tokens in the dataset. The effectiveness of different UNK thresholds. We evaluate how the choice of the cutoff threshold for UNK terminals affects the accuracy. We randomly sample 1/10 of the training dataset and of the test dataset, and vary the UNK cutoff threshold from 10,000 to 80,000. We plot the percentage of UNK terminals in both the full test set and its sampled subset in Figure 8. We can observe that the distributions of UNK terminals are almost the same in both sets. Further, when the threshold is 10,000, i.e., all terminals outside the top 10,000 most frequent ones are turned into UNKs, more than 11% of the queries in the test set are UNK queries (i.e., queries whose ground truth is UNK). When the threshold increases to 50,000 or more, this number drops to between 6% and 7%. The percentage of UNK queries does not vary much as the UNK threshold is varied from 50,000 to 80,000. Figure 8: Percentage of UNK tokens in the entire test data and the sampled subset of the test data, varying the UNK threshold from 10,000 to 80,000. Figure 9: Accuracies of different models trained over the sampled subset of training data, varying the UNK threshold from 10,000 to 80,000. <table> <tr> <th></th> <th>NT2NT</th> <th>N2N+NTN2T</th> <th>NT2N+NTN2T</th> </tr> <tr> <th>Top 1 accuracy</th> <td>73.9%</td> <td>72.0%</td> <td>77.7%</td> </tr> </table> Table 5: Predicting the non-terminal and terminal together We train one NTN2T model for each threshold, and evaluate it using the sampled test set. The accuracies of the different models are plotted in Figure 9. The trend of the different models’ accuracies is similar to the trend of the percentage of non-UNK tokens in the test set. This is expected, since when the threshold increases, the model has more opportunity to make correct predictions on queries that would otherwise be UNK queries. However, we observe that this is not always the case. For example, the accuracies of the models trained with thresholds of 30,000 and 40,000 are almost the same, i.e., the difference is only 0.02%. We make similar observations among the models trained with thresholds of 60,000, 70,000, and 80,000. Notice that we observed above that when we train 5 models with different random initializations, the variance of their accuracies is within 0.1%. Therefore, we conclude that when we increase the UNK threshold from 30,000 to 40,000 or from 60,000 to 80,000, the accuracies do not change significantly. One potential explanation is that when the UNK threshold is increased, while the model has more chance to predict those otherwise-UNK terminals, it may also be more likely to make mistakes, since it needs to choose the next terminal from more candidates. 5.4 JOINT PREDICTION In this section, we evaluate different approaches to predicting the next non-terminal and terminal together for the joint prediction task. NT2NT is designed for this task. Alternative approaches can predict the next non-terminal first, and then predict the next terminal based on the predicted next non-terminal. We choose the NTN2T method as the second step to predict the next terminal, and we examine two different approaches as the first step to predict the next non-terminal: N2N and NT2N. We therefore compare three methods in total. The top 1 accuracy results are presented in Table 5.
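Before comparing the numbers, a small sketch of the two-step X+NTN2T pipeline (our own illustration; nt2n and ntn2t stand for trained models such as those sketched earlier, with ntn2t taking the parent non-terminal as an extra input):

```python
import torch

def joint_predict(nt2n, ntn2t, n_ids, t_ids):
    """Two-step joint prediction with X = NT2N."""
    logits_n, _ = nt2n(n_ids, t_ids)
    n_next = logits_n.argmax(dim=-1)       # step 1: next non-terminal
    logits_t = ntn2t(n_ids, t_ids, n_next) # step 2: next terminal, conditioned
    t_next = logits_t.argmax(dim=-1)       # on the predicted non-terminal
    return n_next, t_next
```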
N2N+NTN2T is less effective than NT2N+NTN2T, as expected, since when predicting the non-terminal in the first step, N2N is less effective than NT2N, as we have shown in Table 4. On the other hand, NT2NT’s performance is better than N2N+NTN2T’s, but worse than NT2N+NTN2T’s. We observe that for all three combinations, we have \[ \Pr(\hat{T}_{k+1} = T_{k+1} \land \hat{N}_{k+1} = N_{k+1}) > \Pr(\hat{T}_{k+1} = T_{k+1}) \Pr(\hat{N}_{k+1} = N_{k+1}) \] These facts indicate that the events of the next non-terminal and the next terminal being predicted correctly are not independent, but rather highly correlated. This is the case even for NT2NT, which predicts the next non-terminal and the next terminal conditionally independently given the LSTM hidden state. <table> <tr> <th></th> <th>NT2NT</th> <th>NT2NT+D</th> <th>NTN2T</th> <th>NTN2T+D</th> </tr> <tr> <td>Overall accuracy</td> <td>81.2%</td> <td>85.1%</td> <td>85.4%</td> <td>89.9%</td> </tr> <tr> <td>Accuracy on non-UNK terminals</td> <td>87.6%</td> <td>87.5%</td> <td>92.2%</td> <td>91.8%</td> </tr> <tr> <td>Deny prediction rate</td> <td>0%</td> <td>5.2%</td> <td>0%</td> <td>6.1%</td> </tr> </table> Table 6: Deny prediction results. **Overall accuracy** is computed as the percentage of all queries (including the ones whose ground truth is **UNK**) that are predicted correctly, i.e., the prediction matches the ground truth even when the ground truth is **UNK**. **Accuracy on non-UNK terminals** measures the accuracy of each model on all non-UNK terminals. **Deny prediction rate** is calculated as the percentage of all queries for which a model denies prediction. **Prediction accuracy** is the top 1 accuracy over those queries for which a model does not deny prediction, i.e., the prediction is not **UNK**. Figure 10: Overall accuracies and accuracies on non-UNK terminals by varying \( \alpha \). 5.5 DENYING PREDICTION We compare the models which do not deny prediction (i.e., NT2NT and NTN2T) with those which do (i.e., NT2NT+D and NTN2T+D). Results are presented in Table 6. For reference, 7.42% of the queries in the test set are UNK queries. We can observe that the deny prediction models (i.e., the +D models) have higher overall accuracies than the corresponding original models. This is expected: since the +D models are allowed to predict UNK terminals, while NT2NT and NTN2T fail on all UNK queries, the +D models succeed on most of them. We further evaluate the accuracy on non-UNK terminals. One may expect that, since the +D models may prefer to predict UNK, a standard model should have a higher accuracy on non-UNK terminals than its deny-prediction counterpart. The results show that this is indeed the case, but the margin is very small, i.e., 0.1% for NT2NT and 0.3% for NTN2T. This means that allowing a model to deny prediction does not necessarily sacrifice its ability to predict non-UNK terminals. We are also interested in how frequently a +D model denies prediction. We can observe that NTN2T+D denies prediction for only 6.1% of all queries, which is even less than the percentage of UNK queries (i.e., 7.42%). This shows that although we allow the model to deny prediction, it is conservative in exercising this ability. This partially explains why NTN2T+D’s accuracy on non-UNK terminals is not much lower than NTN2T’s.
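To make the adaptive objective of Section 4.4 concrete, the following is a sketch of the weighted cross-entropy loss (our own, in PyTorch; normalizing by the weight sum is a choice of ours, whereas the paper's objective is the plain weighted sum):

```python
import torch
import torch.nn.functional as F

def adaptive_loss(logits, targets, unk_id, alpha):
    """Weight alpha for queries whose ground truth is UNK, 1 otherwise.
    alpha = 0 recovers the standard model; alpha = 1 gives the full +D model."""
    per_query = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(targets == unk_id,
                          torch.full_like(per_query, alpha),
                          torch.ones_like(per_query))
    return (weights * per_query).sum() / weights.sum().clamp(min=1.0)
```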
**Effectiveness of the value of \( \alpha \).** We are interested in how the hyperparameter \( \alpha \) in a +D model affects its accuracy. We train 11 different NTN2T+D models on the 1/10 subset of the training set, which was used above to examine the effectiveness of UNK thresholds, varying \( \alpha \) from 0.0 to 1.0. Notice that when \( \alpha = 0.0 \), the model becomes a standard NTN2T model. We plot both the overall accuracies and the accuracies on non-UNK terminals in Figure 10. We observe the same effects as above: 1) the overall accuracy for \( \alpha = 1 \) is 6% higher than the one for \( \alpha = 0 \); and 2) the accuracy on non-UNK terminals for \( \alpha = 1 \) is lower than the one for \( \alpha = 0 \), but the margin is not large (i.e., less than 1%). When we increase \( \alpha \) from 0 to 0.3, we can observe that the overall accuracy increases steeply. When we increase \( \alpha \) further, however, the overall accuracy becomes steady. This is also the case for the accuracy on non-UNK terminals. This experiment shows that setting \( \alpha \) is a trade-off between the overall accuracy and the accuracy on non-UNK terminals, and how to choose \( \alpha \) depends on the application. 5.6 RUNTIME We evaluate our models’ runtime performance. Our models are implemented in TensorFlow (Abadi et al. (2016)). We evaluate our models on a machine equipped with 16 Intel Xeon CPUs, 16 GB RAM, and a single Tesla K80 GPU. All queries from the same program are processed incrementally. That is, given two queries \( A, B \), if \( A \) has one more node than \( B \), then the LSTM outputs for \( B \) are reused when processing \( A \), so that only the additional node in \( A \) needs to be processed. Note that this is consistent with the practice of writing programs incrementally from beginning to end. For each model, we feed one query at a time into the model. There are 3939 queries in total, coming from randomly chosen programs. We measure the overall response latency for each query. We observe that the query response time is consistent across all queries. On average, each model takes around 16 milliseconds to respond to a query on the GPU, and around 33 milliseconds on the CPU. Note that these numbers come from a proof-of-concept implementation, and we have not optimized the code. Considering that a human being usually does not type a token within 30 milliseconds, we conclude that our approach is efficient enough for potential practical use. We emphasize that these numbers do not directly correspond to the runtime latency when the techniques are deployed in a code completion engine, since the changes to the AST serialization may not be sequential while users are programming incrementally. This analysis, however, provides evidence of the feasibility of applying our approach toward a full-fledged code completion engine. 6 CONCLUSION In this paper we introduce, motivate, and formalize the problem of automatic code completion. We describe LSTM-based approaches that capture the parsing structure readily available in the code completion task. We introduce a simple LSTM architecture to model program context. We then explore several variants of our basic architecture for different variants of the code completion problem. We evaluate our techniques on a challenging JavaScript code completion benchmark and compare against the state-of-the-art code completion approach.
We demonstrate that deep learning techniques can achieve better prediction accuracy by learning program patterns from big code. In addition, we find that, using deep learning techniques, our models perform better on longer programs than on shorter ones, and that when the code completion engine can pop up a list of candidates, our approach allows users to choose from the list instead of typing the token manually in over 96% of the cases in which this is possible. We also evaluate our approaches’ runtime performance and demonstrate that deep code completion has the potential to run in real time as users type. We believe that deep learning techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction. ACKNOWLEDGMENTS We thank the anonymous reviewers for their valuable comments. This material is based upon work partially supported by the National Science Foundation under Grant No. TWC-1409915, and by DARPA under grant FA8750-15-2-0104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or DARPA.
reject
Reject
4.5
04fe2c31cb55fd5ab35962cd8699152710488db1
iclr
2,017
LEARNING TO SUPEROPTIMIZE PROGRAMS Rudy Bunel1, Alban Desmaison1, M. Pawan Kumar1,2 & Philip H.S. Torr1 1Department of Engineering Science - University of Oxford 2Alan Turing Institute Oxford, UK {rudy,alban,pawan}@robots.ox.ac.uk, [email protected] Pushmeet Kohli Microsoft Research Redmond, WA 98052, USA [email protected] ABSTRACT Code super-optimization is the task of transforming any given program into a more efficient version while preserving its input-output behaviour. In some sense, it is similar to the paraphrase problem from natural language processing, where the intention is to change the syntax of an utterance without changing its semantics. Code optimization has been the subject of years of research, which has resulted in the development of rule-based transformation strategies that are used by compilers. More recently, however, a class of stochastic search based methods has been shown to outperform these strategies. This approach involves repeated sampling of modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness and on the improvement they achieve. These methods, however, neither learn from past behaviour nor do they try to leverage the semantics of the program under consideration. Motivated by this observation, we present a novel learning based approach for code super-optimization. Intuitively, our method works by learning the proposal distribution using unbiased estimators of the gradient of the expected improvement. Experiments on benchmarks comprising automatically generated as well as existing ("Hacker's Delight") programs show that the proposed method is able to significantly outperform state-of-the-art approaches for code super-optimization. 1 INTRODUCTION Considering the importance of computing to human society, it is not surprising that a very large body of research has gone into the study of the syntax and semantics of programs and programming languages. Code super-optimization is an extremely important problem in this context. Given a program or a snippet of source code, super-optimization is the task of transforming it into a version that has the same input-output behaviour but can be executed more efficiently on a target compute architecture. Super-optimization provides a natural benchmark for evaluating representations of programs. As a task, it requires decoupling the semantics of the program from its superfluous properties, the exact implementation. In some sense, it is the natural analogue of the paraphrase problem in natural language processing, where we want to change syntax without changing semantics. Decades of research have been devoted to the problem of code optimization, resulting in the development of sophisticated rule-based transformation strategies that allow compilers to perform code optimization. While modern compilers implement a large set of rewrite rules and are able to achieve impressive speed-ups, they fail to offer any guarantee of optimality, thus leaving room for further improvement. An alternative approach is to search over the space of all possible programs that are equivalent to the compiler output, and to select the one that is the most efficient. If the search is carried out in a brute-force manner, we are guaranteed to achieve super-optimization. However, this approach quickly becomes computationally infeasible as the number of instructions and the length of the program grow.
In order to perform super-optimization efficiently, recent approaches have started to use a stochastic search procedure, inspired by Markov Chain Monte Carlo (MCMC) sampling (Schkufza et al., 2013). Briefly, the search starts at an initial program, such as the compiler output. It iteratively suggests modifications to the program, where the probability of a modification is encoded in a proposal distribution. The modification is either accepted or rejected with a probability that depends on the improvement achieved. Under certain conditions on the proposal distribution, the above procedure can be shown, in the limit, to sample from a distribution over programs where the probability of a program is related to its quality. In other words, the more efficient a program, the more often it is encountered, thereby enabling super-optimization. Using this approach, high-quality implementations of real programs, such as the Montgomery multiplication kernel from the OpenSSL library, were discovered. These implementations outperformed the output of the gcc compiler and even expert-handwritten assembly code. One of the main factors that governs the efficiency of the above stochastic search is the choice of the proposal distribution. Surprisingly, the state-of-the-art method, Stoke (Schkufza et al., 2013), employs a proposal distribution that is neither learnt from past behaviour nor dependent on the syntax or semantics of the program under consideration. We argue that this choice fails to fully exploit the power of stochastic search. For example, consider the case where we are interested in performing bitwise operations, as indicated by the compiler output. In this case, it is more likely that the optimal program will contain bitshifts than floating point opcodes. Yet, Stoke will assign an equal probability of use to both types of opcodes. In order to alleviate the aforementioned deficiency of Stoke, we build a reinforcement learning framework to estimate the proposal distribution for optimizing the source code under consideration. The score of the distribution is measured as the expected quality of the program obtained via stochastic search. Using training data, which consists of a set of input programs, the parameters are learnt via the REINFORCE algorithm (Williams, 1992). We demonstrate the efficacy of our approach on two datasets. The first is composed of programs from “Hacker’s Delight” (Warren, 2002). Due to the limited diversity of the training samples, we show that it is possible to learn a prior distribution (unconditioned on the input program) that outperforms the state of the art. The second dataset contains automatically generated programs that introduce diversity in the training samples. We show that, in this more challenging setting, we can learn a conditional distribution given the initial program that significantly outperforms Stoke. 2 RELATED WORKS Super-optimization The earliest approaches to super-optimization relied on brute-force search. By enumerating all programs sequentially in order of increasing length (Granlund & Kenner, 1992; Massalin, 1987), the shortest program meeting the specification is guaranteed to be found. As expected, this approach scales poorly to longer programs or to large instruction sets. The longest reported synthesized program was 12 instructions long, on a restricted instruction set (Massalin, 1987).
Trading off completeness for efficiency, stochastic methods (Schkufza et al., 2013) reduced the number of programs to test by guiding the exploration of the space, using the observed quality of the programs encountered as hints. In order to improve the size of solvable instances, Phothilimthana et al. (2016) combined stochastic optimizers with smart enumerative solvers. However, the reliance of stochastic methods on a generic, problem-agnostic exploration policy makes the optimization blind to the problem at hand. We propose to tackle this problem by learning the proposal distribution. Neural Computing Similar work was done in the restricted case of finding efficient implementations for computing the values of degree-k polynomials (Zaremba et al., 2014). Programs were generated from a grammar, using a learnt policy to prioritise exploration. This particular approach of guided search looks promising to us, and is similar in spirit to our proposal, although applied to a very restricted case. Another approach to guiding the exploration of the space of programs is to make use of the gradients of differentiable relaxations of programs. Bunel et al. (2016) attempted this by simulating program execution using Recurrent Neural Networks. However, this provided no guarantee that the network parameters would correspond to real programs. Additionally, this method could only perform local, greedy moves, limiting the scope of possible transformations. On the contrary, our proposed approach operates directly on actual programs and is capable of accepting short-term detrimental moves. Learning to optimize Outside of program optimization, applying learning algorithms to improve optimization procedures, either in terms of results achieved or runtime, is a well studied subject. Doppa et al. (2014) proposed imitation learning based methods to deal with structured output spaces, in a “Learning to search” framework. While this is similar in spirit to stochastic search, our setting differs in the crucial aspect of having a valid cost function instead of searching for one. More relevant is the recent literature on learning to optimize. Li & Malik (2016) and Andrychowicz et al. (2016) learn how to improve on first-order gradient descent algorithms, making use of neural networks. Our work is similar, as we aim to improve the optimization process. However, as opposed to the gradient descent that they learn on a continuous, unconstrained space, our initial algorithm is an MCMC sampler on a discrete domain. Similarly, training a proposal distribution parameterized by a neural network was also proposed by Paige & Wood (2016) to accelerate inference in graphical models. Similar approaches have been successfully employed in computer vision problems, where data-driven proposals made inference feasible (Jampani et al., 2015; Kulkarni et al., 2015; Zhu et al., 2000). Other approaches to speeding up MCMC inference include the work of Salimans et al. (2015), which combines it with variational inference. 3 LEARNING STOCHASTIC SUPER-OPTIMIZATION 3.1 STOCHASTIC SEARCH AS A PROGRAM OPTIMIZATION PROCEDURE Stoke (Schkufza et al., 2013) performs black-box optimization of a cost function on the space of programs, represented as a series of instructions. Each instruction is composed of an opcode, specifying what to execute, and some operands, specifying the corresponding registers. Each given input program \( \mathcal{T} \) defines a cost function.
For a candidate program \( \mathcal{R} \), called a *rewrite*, the goal is to optimize the following cost function: \[ \text{cost}(\mathcal{R}, \mathcal{T}) = \omega_c \times \text{eq}(\mathcal{R}, \mathcal{T}) + \omega_p \times \text{perf}(\mathcal{R}) \] The term \( \text{eq}(\mathcal{R}, \mathcal{T}) \) measures how well the outputs of the rewrite match the outputs of the reference program. This can be obtained either exactly, by running a symbolic validator, or approximately, by running test cases. The term \( \text{perf}(\mathcal{R}) \) is a measure of the efficiency of the program. In this paper, we consider runtime to be the measure of this efficiency. It can be approximated by the sum of the latencies of all the instructions in the program. Alternatively, the runtime of the program on some test cases can be used. To find the optimum of this cost function, Stoke runs an MCMC sampler using the Metropolis (Metropolis et al., 1953) algorithm. This allows us to sample from the probability distribution induced by the cost function: \[ p(\mathcal{R}; \mathcal{T}) = \frac{1}{Z} \exp(-\text{cost}(\mathcal{R}, \mathcal{T})). \] The sampling is done by proposing random moves from a proposal distribution: \[ \mathcal{R}' \sim q(\cdot|\mathcal{R}). \] The cost of the modified program is evaluated and an acceptance criterion is computed. This acceptance criterion, \[ \alpha(\mathcal{R}, \mathcal{R}'; \mathcal{T}) = \min \left(1, \frac{p(\mathcal{R}'; \mathcal{T})}{p(\mathcal{R}; \mathcal{T})}\right), \] is then used as the parameter of a Bernoulli distribution from which an accept/reject decision is sampled. If the move is accepted, the state of the optimizer is updated to \( \mathcal{R}' \); otherwise, it remains at \( \mathcal{R} \). While the above procedure is guaranteed to sample, in the limit, from the distribution \( p(\cdot; \mathcal{T}) \) only if the proposal distribution \( q \) is symmetric (\( q(\mathcal{R}'|\mathcal{R}) = q(\mathcal{R}|\mathcal{R}') \) for all \( \mathcal{R}, \mathcal{R}' \)), it still allows us to perform efficient hill-climbing with non-symmetric proposal distributions. Moves leading to an improvement are always accepted, while detrimental moves can still be accepted in order to avoid getting stuck in local minima. 3.2 LEARNING TO SEARCH We now describe our approach to improving stochastic search by learning the proposal distribution. We begin our description by defining the learning objective (section 3.2.1), followed by a parameterization of the proposal distribution (section 3.2.2), and finally the reinforcement learning framework used to estimate the parameters of the proposal distribution (section 3.2.3). 3.2.1 OBJECTIVE FUNCTION Our goal is to optimize the cost function defined in equation (1). Given a fixed computational budget of \( T \) iterations to perform program super-optimization, we want to make moves that lead us to the lowest possible cost. As different programs have different runtimes and therefore different associated costs, we need to perform normalization. As our normalized loss function, we use the ratio between the cost of the best rewrite found and the cost of the initial, unoptimized program \( \mathcal{R}_0 \). Formally, the loss for a set of rewrites \( \{\mathcal{R}_t\}_{t=0..T} \) is defined as follows: \[ r(\{\mathcal{R}_t\}_{t=0..T}) = \left( \frac{\min_{t=0..T} \operatorname{cost}(\mathcal{R}_t, \mathcal{T})}{\operatorname{cost}(\mathcal{R}_0, \mathcal{T})} \right). \] Recall that our goal is to learn a proposal distribution.
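For concreteness, the search loop just described can be sketched as follows, with the proposal distribution passed in explicitly (our own illustration; names and signatures are hypothetical and do not correspond to Stoke's API):

```python
import math
import random

def mcmc_search(reference, rewrite, propose, cost, T=10000):
    """Minimal sketch of the Metropolis search loop; `propose` samples from
    q(.|current) and `cost` implements equation (1)."""
    current_cost = cost(rewrite, reference)
    best, best_cost = rewrite, current_cost
    for _ in range(T):
        candidate = propose(rewrite)
        candidate_cost = cost(candidate, reference)
        # Acceptance criterion (4): p(R'; T)/p(R; T) = exp(cost - cost').
        if random.random() < min(1.0, math.exp(current_cost - candidate_cost)):
            rewrite, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = rewrite, current_cost
    return best, best_cost
```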
Given that our optimization procedure is stochastic, we need to consider the expected cost as our loss. This expected loss is a function of the parameters \( \theta \) of our parametric proposal distribution \( q_\theta \): \[ \mathcal{L}(\theta) = \mathbb{E}_{\{\mathcal{R}_t\} \sim q_\theta} [r(\{\mathcal{R}_t\}_{t=0..T})]. \] 3.2.2 PARAMETERIZATION OF THE MOVE PROPOSAL DISTRIBUTION The proposal distribution (3) originally used in Stoke (Schkufza et al., 2013) takes the form of a hierarchical model. The type of the move is initially sampled from a probability distribution. Additional samples are then drawn to specify, for example, the affected location in the program, or the new operands or opcode to use. Which of these probability distributions gets sampled depends on the type of move that was first sampled. The detailed structure of the proposal probability distribution can be found in Appendix B. Stoke uses uniform distributions for each of the elementary probability distributions the model samples from. This corresponds to a specific instantiation of the general stochastic search paradigm. In this work, we propose to learn these probability distributions so as to maximize the probability of reaching the best programs. The rest of the optimization scheme remains similar to that of Schkufza et al. (2013). Our chosen parameterization of \( q \) keeps the hierarchical structure of the original work of Schkufza et al. (2013), as detailed in Appendix B, and parameterizes all the elementary probability distributions (over the positions in the program, the instructions to propose, and the arguments) independently. The set \( \theta \) of parameters for \( q_\theta \) thus contains a set of parameters for each elementary probability distribution. A fixed proposal distribution is kept throughout the optimization of a given program, so the proposal distribution needs to be evaluated only once, at the beginning of the optimization, and not at every iteration of MCMC. The stochastic computation graph corresponding to a run of the Metropolis algorithm is given in Figure 1. We treat the operation of evaluating the cost of a program as a deterministic function, as we do not model the randomness of measuring performance. 3.2.3 LEARNING THE PROPOSAL DISTRIBUTION In order to learn the proposal distribution, we use stochastic gradient descent on our loss function (6). We obtain the first-order derivatives with respect to the proposal distribution parameters using the REINFORCE (Williams, 1992) estimator, also known as the likelihood ratio estimator (Glynn, 1990) or the score function estimator (Fu, 2006). This estimator relies on a rewriting of the gradient of the expectation. For an expectation with respect to a probability distribution \( x \sim f_\theta \), the REINFORCE estimator is: \[ \nabla_\theta \sum_x f(x; \theta) r(x) = \sum_x r(x) \nabla_\theta f(x; \theta) = \sum_x f(x; \theta) r(x) \nabla_\theta \log(f(x; \theta)), \] and it provides an unbiased estimate of the gradient. Figure 1: Stochastic computation graph of the Metropolis algorithm used for program super-optimization. Round nodes are stochastic nodes and square ones are deterministic. Red arrows correspond to computation done in the forward pass that needs to be learned, while green arrows correspond to the backward pass.
Full arrows represent deterministic computation and dashed arrows represent stochastic computation. The different steps of the forward pass are: (a) Based on features of the reference program, the proposal distribution \( q \) is computed. (b) A random move is sampled from the proposal distribution. (c) The score of the proposed rewrite is experimentally measured. (d) The acceptance criterion (4) for the move is computed. (e) The move is accepted with a probability equal to the acceptance criterion. (f) The cost is observed, corresponding to the best program obtained during the search. (g) Steps (b) to (f) are repeated \( T \) times. A helpful way to derive the gradients is to consider the execution traces of the search procedure under the formalism of stochastic computation graphs (Schulman et al., 2015). We introduce one “cost node” in the computation graph at the end of each iteration of the sampler. The associated cost corresponds to the normalized difference between the best rewrite so far and the current rewrite after this step: \[ c_t = \min \left( 0, \frac{\operatorname{cost}(\mathcal{R}_t, \mathcal{T}) - \min_{i=0..t-1} \operatorname{cost}(\mathcal{R}_i, \mathcal{T})}{\operatorname{cost}(\mathcal{R}_0, \mathcal{T})} \right). \] The sum of all the cost nodes corresponds to the sum of all the improvements made whenever a new lowest cost was achieved. It can be shown that, up to a constant term, this is equivalent to our objective function (5). As opposed to considering only a final cost node at the end of the \( T \) iterations, this has the advantage that moves which were not responsible for the improvements are not assigned any credit. For each round of MCMC, the gradient with respect to the proposal distribution is computed using the REINFORCE estimator, which is equal to \[ \widehat{\nabla_{\theta,i}} \mathcal{L}(\theta) = (\nabla_{\theta} \log q_{\theta}(\mathcal{R}_i|\mathcal{R}_{i-1})) \sum_{t>i} c_t. \] (9) As our proposal distribution remains fixed for the duration of a program optimization, these gradients need to be summed over all the iterations to obtain the total contribution to the proposal distribution. Once this gradient is estimated, it becomes possible to run standard back-propagation with respect to the features on which the proposal distribution is based, so as to learn an appropriate feature representation. 4 EXPERIMENTS 4.1 SETUP Implementation Our system is built on top of the Stoke super-optimizer of Schkufza et al. (2013). We instrumented the implementation of the Metropolis algorithm to allow sampling from parameterized proposal distributions instead of the uniform distributions previously used. Because the proposal distribution is only evaluated once per program optimisation, the impact on the optimization throughput is low, as indicated in Table 3. Our implementation also keeps track of the traces through the stochastic graph. Using the traces generated during the optimization, we can compute the estimator of our gradients, implemented using the Torch framework (Collobert et al., 2011). Datasets We validate the feasibility of our learning approach on two experiments. The first is based on the Hacker’s Delight (Warren, 2002) corpus, a collection of twenty-five bit-manipulation programs used as a benchmark in program synthesis (Gulwani et al., 2011; Jha et al., 2010; Schkufza et al., 2013). These are short programs, all performing similar types of tasks.
Some examples include identifying whether an integer is a power of two from its binary representation, counting the number of bits turned on in a register, or computing the maximum of two integers. An exhaustive description of the tasks is given in Appendix C. Our second corpus of programs is automatically generated and is more diverse. Models The models we learn are sets of simple elementary probabilities for the categorical distributions over the instructions and over the types of moves to perform. We learn the parameters of each separate distribution jointly, using a softmax transformation to enforce that they are proper probability distributions. For the types of moves where opcodes are chosen from a specific subset, the probabilities of each instruction are appropriately renormalized. We learn two different types of models and compare them with the baseline of uniform proposal distributions, equivalent to Stoke. Our first model, henceforth denoted the bias, is not conditioned on any property of the programs to optimize. By learning this simple proposal distribution, it is only possible to capture a bias in the dataset. This can be understood as an optimal proposal distribution that Stoke should default to. The second model is a Multi-Layer Perceptron (MLP), conditioned on the input program to optimize. For each input program, we generate a bag-of-words representation based on the opcodes of the program. This is embedded through a three-hidden-layer MLP with ReLU activation units. The proposal distributions over the instructions and over the types of moves are each the result of passing the output of this embedding through a linear transformation, followed by a softmax. The optimization is performed by stochastic gradient descent, using the Adam (Kingma & Ba, 2015) optimizer. For each estimate of the gradient, we draw 100 samples for our estimator. The values of the hyperparameters used are given in Appendix A. The number of parameters of each model is given in Table 1. <table> <tr> <th>Model</th> <th># of parameters</th> </tr> <tr> <td>Uniform</td> <td>0</td> </tr> <tr> <td>Bias</td> <td>2912</td> </tr> <tr> <td>MLP</td> <td>1.4 \times 10^6</td> </tr> </table> Table 1: Size of the different models compared. Uniform corresponds to Stoke (Schkufza et al. (2013)). <table> <tr> <th>Model</th> <th>Training</th> <th>Test</th> </tr> <tr> <td>Uniform</td> <td>57.01\%</td> <td>53.71\%</td> </tr> <tr> <td>Bias</td> <td>36.45\%</td> <td>31.82\%</td> </tr> <tr> <td>MLP</td> <td>35.96\%</td> <td>31.51\%</td> </tr> </table> Table 2: Final average relative score on the Hacker’s Delight benchmark. While all models improve on the initial proposal distribution based on uniform sampling, the model conditioned on program features reaches better performance. 4.2 EXISTING PROGRAMS In order to have a larger corpus than the twenty-five programs initially present in “Hacker’s Delight”, we generate various starting points for each optimization. This is accomplished by running Stoke with a cost function where \( \omega_p = 0 \) in (1), and keeping only the correct programs. Duplicate programs are filtered out. This allows us to create a larger dataset from which to learn. Examples of these programs at different levels of optimization can be found in Appendix D. We divide this augmented Hacker’s Delight dataset into two sets. All the programs corresponding to even-numbered tasks are assigned to the first set, which we use for training.
The programs corresponding to odd-numbered tasks are kept for separate evaluation, so as to evaluate the generalisation of our learnt proposal distribution. The optimization process is visible in Figure 2, which shows a clear decrease of the training loss and testing loss for both models. While simply using stochastic super-optimization allows to discover programs 40% more efficient on average, using a tuned proposal distribution yield even larger improvements, bringing the improvements up to 60%, as can be seen in Table2. Due to the similarity between the different tasks, conditioning on the program features does not bring any significant improvements. ![Two line plots showing training and testing loss over time for Bias and Multi-layer Perceptron](page_1012_1042_484_246.png) (a) Bias (b) Multi-layer Perceptron Figure 2: Proposal distribution training. All models learn to improve the performance of the stochastic optimization. Because the tasks are different between the training and testing dataset, the values between datasets can’t directly be compared as some tasks have more opportunity for optimization. It can however be noted that improvements on the training dataset generalise to the unseen tasks. In addition, to clearly demonstrate the practical consequences of our learning, we present in Figure 3 a superposition of score traces, sampled from the optimization of a program of the test set. Figure 3a corresponds to our initialisation, an uniform distribution as was used in the work of Schkufza et al. (2013). Figure 3d corresponds to our optimized version. It can be observed that, while the uniform proposal distribution was successfully decreasing the cost of the program, our learnt proposal distribution manages to achieve lower scores in a more robust manner and in less iterations. Even using only 100 iterations (Figure 3e), the learned model outperforms the uniform proposal distribution with 400 iterations (Figure 3c). (a) With Uniform proposal Optimization Traces (b) Scores after 200 iterations (c) Scores after 400 iterations (d) With Learned Bias Optimization Traces (e) Scores after 100 iterations (f) Scores after 200 iterations Figure 3: Distribution of the improvement achieved when optimising a training sample from the Hacker’s Delight dataset. The first column represent the evolution of the score during the optimization. The other columns represent the distribution of scores after a given number of iterations. (a) to (c) correspond to the uniform proposal distribution, (d) to (f) correspond to the learned bias. 4.3 AUTOMATICALLY GENERATED PROGRAMS While the previous experiments shows promising results on a set of programs of interest, the limited diversity of programs might have made the task too simple, as evidenced by the good performance of a blind model. Indeed, despite the data augmentation, only 25 different tasks were present, all variations of the same programs task having the same optimum. To evaluate our performance on a more challenging problem, we automatically synthesize a larger dataset of programs. Our methods to do so consists in running Stoke repeatedly with a constant cost function, for a large number of iterations. This leads to a fully random walk as every proposed programs will have the same cost, leading to a 50% chance of acceptance. We generate 600 of these programs, 300 that we use as a training set for the optimizer to learn over and 300 that we keep as a test set. The performance achieved on this more complex dataset is shown in Figure 4 and Table 4. 
(a) Bias (b) Multi-layer Perceptron Figure 4: Training of the proposal distribution on the automatically generated benchmark. <table> <tr> <th>Proposal distribution</th> <th>MCMC iterations throughput</th> </tr> <tr> <td>Uniform</td> <td>60 000 /second</td> </tr> <tr> <td>Categorical</td> <td>20 000 /second</td> </tr> </table> Table 3: Throughput of the proposal distribution estimated by timing MCMC for 10000 iterations <table> <tr> <th>Model</th> <th>Training</th> <th>Test</th> </tr> <tr> <td>Uniform</td> <td>76.63%</td> <td>78.15 %</td> </tr> <tr> <td>Bias</td> <td>61.81%</td> <td>63.56%</td> </tr> <tr> <td>MLP</td> <td><b>60.13%</b></td> <td><b>62.27%</b></td> </tr> </table> Table 4: Final average relative score. The MLP conditioning on the features of the program perform better than the simple bias. Even the unconditioned bias performs significantly better than the Uniform proposal distribution. 5 CONCLUSION Within this paper, we have formulated the problem of optimizing the performance of a stochastic super-optimizer as a Machine Learning problem. We demonstrated that learning the proposal distribution of a MCMC sampler was feasible and lead to faster and higher quality improvements. Our approach is not limited to stochastic superoptimization and could be applied to other stochastic search problems. It is interesting to compare our method to the synthesis-style approaches that have been appearing recently in the Deep Learning community (Graves et al., 2014) that aim at learning algorithms directly using differentiable representations of programs. We find that the stochastic search-based approach yields a significant advantage compared to those types of approaches, as the resulting program can be run independently from the Neural Network that was used to discover them. Several improvements are possible to the presented methods. In mature domains such as Computer Vision, the representations of objects of interests have been widely studied and as a result are successful at capturing the information of each sample. In the domains of programs, obtaining informative representations remains a challenge. Our proposed approach ignores part of the structure of the program, notably temporal, due to the limited amount of existing data. The synthetic data having no structure, it wouldn’t be suitable to learn those representations from it. Gathering a larger dataset of frequently used programs so as to measure more accurately the practical performance of those methods seems the evident next step for the task of program synthesis. REFERENCES Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In NIPS, 2016. Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip HS Torr, and M Pawan Kumar. Adaptive neural compilation. In NIPS. 2016. Berkeley Churchill, Eric Schkufza, and Stefan Heule. Stoke. https://github.com/StanfordPL/stoke, 2016. Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In NIPS, 2011. Janardhan Rao Doppa, Alan Fern, and Prasad Tadepalli. Hc-search: A learning framework for search-based structured prediction. JAIR, 2014. Michael C. Fu. Gradient estimation. Handbooks in Operations Research and Management Science. 2006. Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 1990. Torbjörn Granlund and Richard Kenner. 
Eliminating branches using a superoptimizer and the GNU C compiler. ACM SIGPLAN Notices, 1992. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, 2014. Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkatesan. Synthesis of loop-free programs. In PLDI, 2011. Varun Jampani, Sebastian Nowozin, Matthew Loper, and Peter V Gehler. The informed sampler: A discriminative approach to bayesian inference in generative computer vision models. Computer Vision and Image Understanding, 2015. Susmit Jha, Sumit Gulwani, Sanjit A Seshia, and Ashish Tiwari. Oracle-guided component-based program synthesis. In International Conference on Software Engineering, 2010. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. Tejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In CVPR, 2015. Ke Li and Jitendra Malik. Learning to optimize. CoRR, 2016. Henry Massalin. Superoptimizer: A look at the smallest program. In ACM SIGPLAN Notices, 1987. Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 1953. Brookes Paige and Frank Wood. Inference networks for sequential Monte Carlo in graphical models. In ICML, 2016. Phitchaya Mangpo Phothilimthana, Aditya Thakur, Rastislav Bodik, and Dinakar Dhurjati. Scaling up superoptimization. In ACM SIGPLAN Notices, 2016. Tim Salimans, Diederik P Kingma, Max Welling, et al. Markov chain monte carlo and variational inference: Bridging the gap. In ICML, 2015. Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic superoptimization. SIGPLAN, 2013. John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015. Henry S Warren. Hacker’s delight. 2002. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 1992. Wojciech Zaremba, Karol Kurach, and Rob Fergus. Learning to discover efficient mathematical identities. In NIPS, 2014. Song-Chun Zhu, Rong Zhang, and Zhuowen Tu. Integrating bottom-up/top-down for object recognition by data driven markov chain monte carlo. In CVPR, 2000. A HYPERPARAMETERS A.1 ARCHITECTURES The output size of 9 corresponds to the types of move. The output size of 2903 correspond to the number of possible instructions that Stoke can use during a rewrite. This is smaller that the 3874 that are possible to find in an original program. <table> <tr> <th>Outputs</th> <th>Bias (9)<br>SoftMax</th> <th>Bias (2903)<br>SoftMax</th> </tr> </table> Table 5: Architecture of the Bias <table> <tr> <th rowspan="2">Embedding</th> <td colspan="2">Linear (3874 → 100) + ReLU<br>Linear (100 → 300) + ReLU<br>Linear (300 → 300) + ReLU</td> </tr> <tr> <th>Outputs</th> <td>Linear (300 → 9)<br>SoftMax</td> <td>Linear (300 → 2903)<br>SoftMax</td> </tr> </table> Table 6: Architecture of the Multi Layer Perceptron A.2 TRAINING PARAMETERS All of our models are trained using the Adam (Kingma & Ba, 2015) optimizer, with its default hyper-parameters \( \beta_1 = 0.9, \beta_2 = 0.999, \epsilon = 10^{-8} \). We use minibatches of size 32. The learning rate were tuned by observing the evolution of the loss on the training datasets for the first iterations. The picked values are given in Table 7. Those learning rates are divided by the size of the minibatches. 
<table> <tr> <th></th> <th>Hacker’s Delight</th> <th>Synthetic</th> </tr> <tr> <th>Bias</th> <td>1</td> <td>10</td> </tr> <tr> <th>MLP</th> <td>0.01</td> <td>0.1</td> </tr> </table> Table 7: Values of the Learning rate used. B STRUCTURE OF THE PROPOSAL DISTRIBUTION The sampling process of a move is a hierarchy of sampling step. The easiest way to represent it is as a generative model for the program transformations. Depending on what type of move is sampled, different series of sampling steps have to be performed. For a given move, all the probabilities are sampled independently so the probability of proposing the move is the product of the probability of picking each of the sampling steps. The generative model is defined in Figure 5. It is going to be parameterized by the the parameters of each specific probability distribution it samples from. The default Stoke version uses uniform probabilities over all of those elementary distributions. def proposal(current_program): move_type = sample(categorical(all_move_type)) if move_type == 1: % Add empty Instruction pos = sample(categorical(all_positions(current_program))) return (ADD_NOP, pos) if move_type == 2: % Delete an Instruction pos = sample(categorical(all_positions(current_program))) return (DELETE, pos) if move_type == 3: % Instruction Transform pos = sample(categorical(all_positions(current_program))) instr = sample(categorical(set_of_all_instructions)) arity = nb_args(instr) for i = 1, arity: possible_args = possible_arguments(instr, i) % get one of the arguments that can be used as i-th % argument for the instruction 'instr'. operands[i] = sample(categorical(possible_args)) return (TRANSFORM, pos, instr, operands) if move_type == 4: % Opcode Transform pos = sample(categorical(all_positions(current_program))) args = arguments_at(current_program, pos) instr = sample(categorical(possible_instruction(args))) % get an instruction compatible with the arguments % that are in the program at line pos. return(OPCODE_TRANSFORM, pos, instr) if move_type == 5: % Opcode Width Transform pos = sample(categorical(all_positions(current_program))) curr_instr = instruction_at(current_program, pos) instr = sample(categorical(same_memonic_instr(curr_instr))) % get one instruction with the same mnemonic that the % instruction 'curr_instr'. return (OPCODE_TRANSFORM, pos, instr) if move_type == 6: % Operand transform pos = sample(categorical(all_positions(current-program))) curr_instr = instruction_at(current_program, pos) arg_to_mod = sample(categorical(args(curr_instr))) possible_args = possible_arguments(curr_instr, arg_to_mod) new_operand = sample(categorical(possible_args)) return (OPERAND_TRANSFORM, pos, arg_to_mod, new_operand) if move_type == 7: % Local swap transform block_idx = sample(categorical(all_blocks(current_program))) possible_pos = pos_in_block(current_program, block_idx) pos_1 = sample(categorical(possible_pos)) pos_2 = sample(categorical(possible_pos)) return (SWAP, pos_1, pos_2) if move_type == 8: % Global swap transform pos_1 = sample(categorical(all_positions(current_program))) pos_2 = sample(categorical(all_positions(current_program))) return (SWAP, pos_1, pos_2) if move_type == 9: % Rotate transform pos_1 = sample(categorical(all_positions(current_program))) pos_2 = sample(categorical(all_positions(current_program))) return (ROTATE, pos_1, pos_2) Figure 5: Generative Model of a Transformation. C H A C K E R ’ S D E L I G H T T A S K S The 25 tasks of the Hacker’s delight Warren (2002) datasets are the following: 1. 
Turn off the right-most one bit 2. Test whether an unsigned integer is of the form \( 2^{(n-1)} \) 3. Isolate the right-most one bit 4. Form a mask that identifies right-most one bit and trailing zeros 5. Right propagate right-most one bit 6. Turn on the right-most zero bit in a word 7. Isolate the right-most zero bit 8. Form a mask that identifies trailing zeros 9. Absolute value function 10. Test if the number of leading zeros of two words are the same 11. Test if the number of leading zeros of a word is strictly less than of another work 12. Test if the number of leading zeros of a word is less than of another work 13. Sign Function 14. Floor of average of two integers without overflowing 15. Ceil of average of two integers without overflowing 16. Compute max of two integers 17. Turn off the right-most contiguous string of one bits 18. Determine if an integer is a power of two 19. Exchanging two fields of the same integer according to some input 20. Next higher unsigned number with same number of one bits 21. Cycling through 3 values 22. Compute parity 23. Counting number of bits 24. Round up to next highest power of two 25. Compute higher order half of product of x and y Reference implementation of those programs were obtained from the examples directory of the stoke repository (Churchill et al., 2016). D EXAMPLES OF HACKER’S DELIGHT OPTIMISATION The first task of the Hacker’s Delight corpus consists in turning off the right-most one bit of a register. When compiling the code in Listing 6a, llvm generates the code shown in Listing 6b. A typical example of an equivalent version of the same program obtained by the data-augmentation procedure is shown in Listing 6c. Listing 6d contains the optimal version of this program. Note that such optimization are already feasible using the stoke system of Schkufza et al. (2013). #include <stdint.h> int32_t p01(int32_t x) { int32_t o1 = x - 1; return x & o1; } pushq %rbp movq %rsp , %rbp movl %edi , -0x4(%rbp ) movl -0x4(%rbp ), %edi subl $0x1 , %edi movl %edi , -0x8(%rbp ) movl -0x4(%rbp ), %edi andl -0x8(%rbp ), %edi movl %edi , %eax popq %rbp retq nop nop nop (a) Source. blsrl %edi , %esi sets %ch xorq %rax , %rax sarb $0x2 , %ch rorw $0x1 , %di subb $0x3 , %dil mull %ebp subb %ch , %dh rcrb $0x1 , %dil cmovbel %esi , %eax retq (b) Optimization starting point. blsrl %edi , %eax retq (c) Alternative equivalent program. (d) Optimal solution. Figure 6: Program at different stage of the optimization.
ABSTRACT Code super-optimization is the task of transforming any given program to a more efficient version while preserving its input-output behaviour. In some sense, it is similar to the paraphrase problem from natural language processing where the intention is to change the syntax of an utterance without changing its semantics. Code-optimization has been the subject of years of research that has resulted in the development of rule-based transformation strategies that are used by compilers. More recently, however, a class of stochastic search based methods have been shown to outperform these strategies. This approach involves repeated sampling of modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness and the improvement they achieve. These methods, however, neither learn from past behaviour nor do they try to leverage the semantics of the program under consideration. Motivated by this observation, we present a novel learning based approach for code super-optimization. Intuitively, our method works by learning the proposal distribution using unbiased estimators of the gradient of the expected improvement. Experiments on benchmarks comprising of automatically generated as well as existing ("Hacker's Delight") programs show that the proposed method is able to significantly outperform state of the art approaches for code super-optimization. 1 INTRODUCTION Considering the importance of computing to human society, it is not surprising that a very large body of research has gone into the study of the syntax and semantics of programs and programming languages. Code super-optimization is an extremely important problem in this context. Given a program or a snippet of source-code, super-optimization is the task of transforming it to a version that has the same input-output behaviour but can be executed on a target compute architecture more efficiently. Superoptimization provides a natural benchmark for evaluating representations of programs. As a task, it requires the decoupling of the semantics of the program from its superfluous properties, the exact implementation. In some sense, it is the natural analogue of the paraphrase problem in natural language processing where we want to change syntax without changing semantics. Decades of research has been done on the problem of code optimization resulting in the development of sophisticated rule-based transformation strategies that are used in compilers to allow them to perform code optimization. While modern compilers implement a large set of rewrite rules and are able to achieve impressive speed-ups, they fail to offer any guarantee of optimality, thus leaving room for further improvement. An alternative approach is to search over the space of all possible programs that are equivalent to the compiler output, and select the one that is the most efficient. If the search is carried out in a brute-force manner, we are guaranteed to achieve super-optimization. However, this approach quickly becomes computationally infeasible as the number of instructions and the length of the program grows. In order to efficiently perform super-optimization, recent approaches have started to use a stochastic search procedure, inspired by Markov Chain Monte Carlo (MCMC) sampling (Schkufza et al., 2013). Briefly, the search starts at an initial program, such as the compiler output. 
It iteratively suggests modifications to the program, where the probability of a modification is encoded in a proposal distribution. The modification is either accepted or rejected with a probability that is dependent on the improvement achieved. Under certain conditions on the proposal distribution, the above procedure can be shown, in the limit, to sample from a distribution over programs, where the probability of a program is related to its quality. In other words, the more efficient a program, the more times it is encountered, thereby enabling super-optimization. Using this approach, high-quality implementations of real programs such as the Montgomery multiplication kernel from the OpenSSL library were discovered. These implementations outperformed the output of the gcc compiler and even expert-handwritten assembly code. One of the main factors that governs the efficiency of the above stochastic search is the choice of the proposal distribution. Surprisingly, the state of the art method, Stoke (Schkufza et al., 2013), employs a proposal distribution that is neither learnt from past behaviour nor does it depend on the syntax or semantics of the program under consideration. We argue that this choice fails to fully exploit the power of stochastic search. For example, consider the case where we are interested in performing bitwise operations, as indicated by the compiler output. In this case, it is more likely that the optimal program will contain bitshifts than floating point opcodes. Yet, Stoke will assign an equal probability of use to both types of opcodes. In order to alleviate the aforementioned deficiency of Stoke, we build a reinforcement learning framework to estimate the proposal distribution for optimizing the source code under consideration. The score of the distribution is measured as the expected quality of the program obtained via stochastic search. Using training data, which consists of a set of input programs, the parameters are learnt via the REINFORCE algorithm (Williams, 1992). We demonstrate the efficacy of our approach on two datasets. The first is composed of programs from “Hacker’s Delight” (Warren, 2002). Due to the limited diversity of the training samples, we show that it is possible to learn a prior distribution (unconditioned on the input program) that outperforms the state of the art. The second dataset contains automatically generated programs that introduce diversity in the training samples. We show that, in this more challenging setting, we can learn a conditional distribution given the initial program that significantly outperforms Stoke. 2 RELATED WORKS Super-optimization The earliest approaches for super-optimization relied on brute-force search. By sequentially enumerating all programs in increasing length orders (Granlund & Kenner, 1992; Massalin, 1987), the shortest program meeting the specification is guaranteed to be found. As expected, this approach scales poorly to longer programs or to large instruction sets. The longest reported synthesized program was 12 instructions long, on a restricted instruction set (Massalin, 1987). Trading off completeness for efficiency, stochastic methods (Schkufza et al., 2013) reduced the number of programs to test by guiding the exploration of the space, using the observed quality of programs encountered as hints. In order to improve the size of solvable instances, Phothilimthana et al. (2016) combined stochastic optimizers with smart enumerative solvers. 
However, the reliance of stochastic methods on a generic unspecific exploratory policy made the optimization blind to the problem at hand. We propose to tackle this problem by learning the proposal distribution. Neural Computing Similar work was done in the restricted case of finding efficient implementation of computation of value of degree k polynomials (Zaremba et al., 2014). Programs were generated from a grammar, using a learnt policy to prioritise exploration. This particular approach of guided search looks promising to us, and is in spirit similar to our proposal, although applied on a very restricted case. Another approach to guide the exploration of the space of programs was to make use of the gradients of differentiable relaxation of programs. Bunel et al. (2016) attempted this by simulating program execution using Recurrent Neural Networks. However, this provided no guarantee that the network parameters were going to correspond to real programs. Additionally, this method only had the possibility of performing local, greedy moves, limiting the scope of possible transformations. On the contrary, our proposed approach operates directly on actual programs and is capable of accepting short-term detrimental moves. Learning to optimize Outside of program optimization, applying learning algorithms to improve optimization procedures, either in terms of results achieved or runtime, is a well studied subject. Doppa et al. (2014) proposed imitation learning based methods to deal with structured output spaces, in a “Learning to search” framework. While this is similar in spirit to stochastic search, our setting differs in the crucial aspect of having a valid cost function instead of searching for one. More relevant is the recent literature on learning to optimize. Li & Malik (2016) and Andrychowicz et al. (2016) learn how to improve on first-order gradient descent algorithms, making use of neural networks. Our work is similar, as we aim to improve the optimization process. However, as opposed to the gradient descent that they learn on a continuous unconstrained space, our initial algorithm is an MCMC sampler on a discrete domain. Similarly, training a proposal distribution parameterized by a Neural Network was also proposed by Paige & Wood (2016) to accelerate inference in graphical models. Similar approaches were successfully employed in computer vision problems where data driven proposals allowed to make inference feasible (Jampani et al., 2015; Kulkarni et al., 2015; Zhu et al., 2000). Other approaches to speeding up MCMC inference include the work of Salimans et al. (2015), combining it with Variational inference. 3 LEARNING STOCHASTIC SUPER-OPTIMIZATION 3.1 STOCHASTIC SEARCH AS A PROGRAM OPTIMIZATION PROCEDURE Stoke (Schkufza et al., 2013) performs black-box optimization of a cost function on the space of programs, represented as a series of instructions. Each instruction is composed of an opcode, specifying what to execute, and some operands, specifying the corresponding registers. Each given input program \( \mathcal{T} \) defines a cost function. For a candidate program \( \mathcal{R} \) called *rewrite*, the goal is to optimize the following cost function: \[ \text{cost}(\mathcal{R}, \mathcal{T}) = \omega_c \times \text{eq}(\mathcal{R}, \mathcal{T}) + \omega_p \times \text{perf}(\mathcal{R}) \] The term \( \text{eq}(\mathcal{R}; \mathcal{T}) \) measures how well the outputs of the rewrite match the outputs of the reference program. 
This can be obtained either exactly by running a symbolic validator or approximately by running test cases. The term \( \text{perf}(\mathcal{R}) \) is a measure of the efficiency of the program. In this paper, we consider runtime to be the measure of this efficiency. It can be approximated by the sum of the latency of all the instructions in the program. Alternatively, runtime of the program on some test cases can be used. To find the optimum of this cost function, Stoke runs an MCMC sampler using the Metropolis (Metropolis et al., 1953) algorithm. This allows us to sample from the probability distribution induced by the cost function: \[ p(\mathcal{R}; \mathcal{T}) = \frac{1}{Z} \exp(-\text{cost}(\mathcal{R}, \mathcal{T})). \] The sampling is done by proposing random moves from a different proposal distribution: \[ \mathcal{R}' \sim q(\cdot|\mathcal{R}). \] The cost of the new modified program is evaluated and an acceptance criterion is computed. This acceptance criterion \[ \alpha(\mathcal{R}, \mathcal{T}) = \min \left(1, \frac{p(\mathcal{R}'; \mathcal{T})}{p(\mathcal{R}; \mathcal{T})}\right), \] is then used as the parameter of a Bernoulli distribution from which an accept/reject decision is sampled. If the move is accepted, the state of the optimizer is updated to \( \mathcal{R}' \). Otherwise, it remains in \( \mathcal{R} \). While the above procedure is only guaranteed to sample from the distribution \( p(\cdot; \mathcal{T}) \) in the limit if the proposal distribution \( q \) is symmetric (\( q(\mathcal{R}'|\mathcal{R}) = q(\mathcal{R}|\mathcal{R}') \) for all \( \mathcal{R}, \mathcal{R}' \)), it still allows us to perform efficient hill-climbing for non-symmetric proposal distributions. Moves leading to an improvement are always going to be accepted, while detrimental moves can still be accepted in order to avoid getting stuck in local minima. 3.2 LEARNING TO SEARCH We now describe our approach to improve stochastic search by learning the proposal distribution. We begin our description by defining the learning objective (section 3.2.1), followed by a parameterization of the proposal distribution (section 3.2.2), and finally the reinforcement learning framework to estimate the parameters of the proposal distribution (section 3.2.3). 3.2.1 OBJECTIVE FUNCTION Our goal is to optimize the cost function defined in equation (1). Given a fixed computational budget of \( T \) iterations to perform program super-optimization, we want to make moves that lead us to the lowest possible cost. As different programs have different runtimes and therefore different associated costs, we need to perform normalization. As normalized loss function, we use the ratio between the best rewrite found and the cost of the initial unoptimized program \( \mathcal{R}_0 \). Formally, the loss for a set of rewrites \( \{\mathcal{R}_t\}_{t=0..T} \) is defined as follows: \[ r(\{\mathcal{R}_t\}_{t=0..T}) = \left( \frac{\min_{t=0..T} \operatorname{cost}(\mathcal{R}_t, \mathcal{T})}{\operatorname{cost}(\mathcal{R}_0, \mathcal{T})} \right). \] Recall that our goal is to learn a proposal distribution. Given that our optimization procedure is stochastic, we will need to consider the expected cost as our loss. This expected loss is a function of the parameters \( \theta \) of our parametric proposal distribution \( q_\theta \): \[ \mathcal{L}(\theta) = \mathbb{E}_{\{\mathcal{R}_t\} \sim q_\theta} [r(\{\mathcal{R}_t\}_{t=0..T})]. 
\] 3.2.2 PARAMETERIZATION OF THE MOVE PROPOSAL DISTRIBUTION The proposal distribution (3) originally used in Stoke (Schkufza et al., 2013) takes the form of a hierarchical model. The type of the move is initially sampled from a probability distribution. Additional samples are drawn to specify, for example, the affected location in the programs ,the new operands or opcode to use. Which of these probability distribution get sampled depends on the type of move that was first sampled. The detailed structure of the proposal probability distribution can be found in Appendix B. Stoke uses uniform distributions for each of the elementary probability distributions the model samples from. This corresponds to a specific instantiation of the general stochastic search paradigm. In this work, we propose to learn those probability distributions so as to maximize the probability of reaching the best programs. The rest of the optimization scheme remains similar to the one of Schkufza et al. (2013). Our chosen parameterization of \( q \) is to keep the hierarchical structure of the original work of Schkufza et al. (2013), as detailed in Appendix B, and parameterize all the elementary probability distributions (over the positions in the programs, the instructions to propose or the arguments) independently. The set \( \theta \) of parameters for \( q_\theta \) will thus contain a set of parameters for each elementary probability distributions. A fixed proposal distribution is kept through the optimization of a given program, so the proposal distribution needs to be evaluated only once, at the beginning of the optimization and not at every iteration of MCMC. The stochastic computation graph corresponding to a run of the Metropolis algorithm is given in Figure 1. We have assumed the operation of evaluating the cost of a program to be a deterministic function, as we will not model the randomness of measuring performance. 3.2.3 LEARNING THE PROPOSAL DISTRIBUTION In order to learn the proposal distribution, we will use stochastic gradient descent on our loss function (6). We obtain the first order derivatives with regards to our proposal distribution parameters using the REINFORCE (Williams, 1992) estimator, also known as the likelihood ratio estimator (Glynn, 1990) or the score function estimator (Fu, 2006). This estimator relies on a rewriting of the gradient of the expectation. For an expectation with regards to a probability distribution \( x \sim f_\theta \), the REINFORCE estimator is: \[ \nabla_\theta \sum_x f(x; \theta) r(x) = \sum_x r(x) \nabla_\theta f(x; \theta) = \sum_x f(x; \theta) r(x) \nabla_\theta \log(f(x; \theta)), \] and provides an unbiased estimate of the gradient. ![Stochastic computation graph of the Metropolis algorithm used for program super-optimization.](page_324_613_900_579.png) Figure 1: Stochastic computation graph of the Metropolis algorithm used for program super-optimization. Round nodes are stochastic nodes and square ones are deterministic. Red arrows corresponds to computation done in the forward pass that needs to be learned while green arrows correspond to the backward pass. Full arrows represent deterministic computation and dashed arrows represent stochastic ones. The different steps of the forward pass are: (a) Based on features of the reference program, the proposal distribution \( q \) is computed. (b) A random move is sampled from the proposal distribution. (c) The score of the proposed rewrite is experimentally measured. 
(d) The acceptance criterion (4) for the move is computed. (e) The move is accepted with a probability equal to the acceptance criterion. (f) The cost is observed, corresponding to the best program obtained during the search. (g) Moves b to f are repeated T times. A helpful way to derive the gradients is to consider the execution traces of the search procedure under the formalism of stochastic computation graphs (Schulman et al., 2015). We introduce one “cost node” in the computation graphs at the end of each iteration of the sampler. The associated cost corresponds to the normalized difference between the best rewrite so far and the current rewrite after this step: \[ c_t = \min \left( 0, \left( \frac{\operatorname{cost}(\mathcal{R}_t, \mathcal{T}) - \min_{i=0..t-1} \operatorname{cost}(\mathcal{R}_i, \mathcal{T})}{\operatorname{cost}(\mathcal{R}_0, \mathcal{T})} \right) \right). \] The sum of all the cost nodes corresponds to the sum of all the improvements made when a new lowest cost was achieved. It can be shown that up to a constant term, this is equivalent to our objective function (5). As opposed to considering only a final cost node at the end of the \( T \) iterations, this has the advantage that moves which were not responsible for the improvements would not get assigned any credit. For each round of MCMC, the gradient with regards to the proposal distribution is computed using the REINFORCE estimator which is equal to \[ \widehat{\nabla_{\theta,i}} \mathcal{L}(\theta) = (\nabla_{\theta} \log q_{\theta}(\mathcal{R}_i|\mathcal{R}_{i-1})) \sum_{t>i} c_t. \] (9) As our proposal distribution remains fixed for the duration of a program optimization, these gradients needs to be summed over all the iterations to obtain the total contribution to the proposal distribution. Once this gradient is estimated, it becomes possible to run standard back-propagation with regards to the features on which the proposal distribution is based on, so as to learn the appropriate feature representation. 4 EXPERIMENTS 4.1 SETUP Implementation Our system is built on top of the Stoke super-optimizer from Schkufza et al. (2013). We instrumented the implementation of the Metropolis algorithm to allow sampling from parameterized proposal distributions instead of the uniform distributions previously used. Because the proposal distribution is only evaluated once per program optimisation, the impact on the optimization throughput is low, as indicated in Table 3. Our implementation also keeps track of the traces through the stochastic graph. Using the traces generated during the optimization, we can compute the estimator of our gradients, implemented using the Torch framework (Collobert et al., 2011). Datasets We validate the feasibility of our learning approach on two experiments. The first is based on the Hacker’s delight (Warren, 2002) corpus, a collection of twenty five bit-manipulation programs, used as benchmark in program synthesis (Gulwani et al., 2011; Jha et al., 2010; Schkufza et al., 2013). Those are short programs, all performing similar types of tasks. Some examples include identifying whether an integer is a power of two from its binary representation, counting the number of bits turned on in a register or computing the maximum of two integers. An exhaustive description of the tasks is given in Appendix C. Our second corpus of programs is automatically generated and is more diverse. 
Models The models we are learning are a set of simple elementary probabilities for the categorical distribution over the instructions and over the type of moves to perform. We learn the parameters of each separate distribution jointly, using a Softmax transformation to enforce that they are proper probability distributions. For the types of move where opcodes are chosen from a specific subset, the probabilities of each instruction are appropriately renormalized. We learn two different type of models and compare them with the baseline of uniform proposal distributions equivalent to Stoke. Our first model, henceforth denoted the bias, is not conditioned on any property of the programs to optimize. By learning this simple proposal distribution, it is only possible to capture a bias in the dataset. This can be understood as an optimal proposal distribution that Stoke should default to. The second model is a Multi Layer Perceptron (MLP), conditioned on the input program to optimize. For each input program, we generate a Bag-of-Words representation based on the opcodes of the program. This is embedded through a three hidden layer MLP with ReLU activation unit. The proposal distribution over the instructions and over the type of moves are each the result of passing the outputs of this embedding through a linear transformation, followed by a SoftMax. The optimization is performed by stochastic gradient descent, using the Adam (Kingma & Ba, 2015) optimizer. For each estimate of the gradient, we draw 100 samples for our estimator. The values of the hyperparameters used are given in Appendix A. The number of parameters of each model is given in Table 1. <table> <tr> <th>Model</th> <th># of parameters</th> </tr> <tr> <td>Uniform</td> <td>0</td> </tr> <tr> <td>Bias</td> <td>2912</td> </tr> <tr> <td>MLP</td> <td>1.4 \times 10^6</td> </tr> </table> Table 1: Size of the different models compared. Uniform corresponds to Stoke Schkufza et al. (2013). <table> <tr> <th>Model</th> <th>Training</th> <th>Test</th> </tr> <tr> <td>Uniform</td> <td>57.01\%</td> <td>53.71\%</td> </tr> <tr> <td>Bias</td> <td>36.45\%</td> <td>31.82\%</td> </tr> <tr> <td>MLP</td> <td>35.96\%</td> <td>31.51\%</td> </tr> </table> Table 2: Final average relative score on the Hacker’s Delight benchmark. While all models improve with regards to the initial proposal distribution based on uniform sampling, the model conditioning on program features reach better performances. 4.2 Existing Programs In order to have a larger corpus than the twenty-five programs initially present in “Hacker’s Delight”, we generate various starting points for each optimization. This is accomplished by running Stoke with a cost function where \( \omega_p = 0 \) in (1), and keeping only the correct programs. Duplicate programs are filtered out. This allows us to create a larger dataset from which to learn. Examples of these programs at different level of optimization can be found in Appendix D. We divide this augmented Hacker’s Delight dataset into two sets. All the programs corresponding to even-numbered tasks are assigned to the first set, which we use for training. The programs corresponding to odd-numbered tasks are kept for separate evaluation, so as to evaluate the generalisation of our learnt proposal distribution. The optimization process is visible in Figure 2, which shows a clear decrease of the training loss and testing loss for both models. 
While simply using stochastic super-optimization allows to discover programs 40% more efficient on average, using a tuned proposal distribution yield even larger improvements, bringing the improvements up to 60%, as can be seen in Table2. Due to the similarity between the different tasks, conditioning on the program features does not bring any significant improvements. ![Two line plots showing training and testing loss over time for Bias and Multi-layer Perceptron](page_1012_1042_484_246.png) (a) Bias (b) Multi-layer Perceptron Figure 2: Proposal distribution training. All models learn to improve the performance of the stochastic optimization. Because the tasks are different between the training and testing dataset, the values between datasets can’t directly be compared as some tasks have more opportunity for optimization. It can however be noted that improvements on the training dataset generalise to the unseen tasks. In addition, to clearly demonstrate the practical consequences of our learning, we present in Figure 3 a superposition of score traces, sampled from the optimization of a program of the test set. Figure 3a corresponds to our initialisation, an uniform distribution as was used in the work of Schkufza et al. (2013). Figure 3d corresponds to our optimized version. It can be observed that, while the uniform proposal distribution was successfully decreasing the cost of the program, our learnt proposal distribution manages to achieve lower scores in a more robust manner and in less iterations. Even using only 100 iterations (Figure 3e), the learned model outperforms the uniform proposal distribution with 400 iterations (Figure 3c). (a) With Uniform proposal Optimization Traces (b) Scores after 200 iterations (c) Scores after 400 iterations (d) With Learned Bias Optimization Traces (e) Scores after 100 iterations (f) Scores after 200 iterations Figure 3: Distribution of the improvement achieved when optimising a training sample from the Hacker’s Delight dataset. The first column represent the evolution of the score during the optimization. The other columns represent the distribution of scores after a given number of iterations. (a) to (c) correspond to the uniform proposal distribution, (d) to (f) correspond to the learned bias. 4.3 AUTOMATICALLY GENERATED PROGRAMS While the previous experiments shows promising results on a set of programs of interest, the limited diversity of programs might have made the task too simple, as evidenced by the good performance of a blind model. Indeed, despite the data augmentation, only 25 different tasks were present, all variations of the same programs task having the same optimum. To evaluate our performance on a more challenging problem, we automatically synthesize a larger dataset of programs. Our methods to do so consists in running Stoke repeatedly with a constant cost function, for a large number of iterations. This leads to a fully random walk as every proposed programs will have the same cost, leading to a 50% chance of acceptance. We generate 600 of these programs, 300 that we use as a training set for the optimizer to learn over and 300 that we keep as a test set. The performance achieved on this more complex dataset is shown in Figure 4 and Table 4. (a) Bias (b) Multi-layer Perceptron Figure 4: Training of the proposal distribution on the automatically generated benchmark. 
<table> <tr> <th>Proposal distribution</th> <th>MCMC iterations throughput</th> </tr> <tr> <td>Uniform</td> <td>60 000 /second</td> </tr> <tr> <td>Categorical</td> <td>20 000 /second</td> </tr> </table> Table 3: Throughput of the proposal distribution estimated by timing MCMC for 10000 iterations <table> <tr> <th>Model</th> <th>Training</th> <th>Test</th> </tr> <tr> <td>Uniform</td> <td>76.63%</td> <td>78.15 %</td> </tr> <tr> <td>Bias</td> <td>61.81%</td> <td>63.56%</td> </tr> <tr> <td>MLP</td> <td><b>60.13%</b></td> <td><b>62.27%</b></td> </tr> </table> Table 4: Final average relative score. The MLP conditioning on the features of the program perform better than the simple bias. Even the unconditioned bias performs significantly better than the Uniform proposal distribution. 5 CONCLUSION Within this paper, we have formulated the problem of optimizing the performance of a stochastic super-optimizer as a Machine Learning problem. We demonstrated that learning the proposal distribution of a MCMC sampler was feasible and lead to faster and higher quality improvements. Our approach is not limited to stochastic superoptimization and could be applied to other stochastic search problems. It is interesting to compare our method to the synthesis-style approaches that have been appearing recently in the Deep Learning community (Graves et al., 2014) that aim at learning algorithms directly using differentiable representations of programs. We find that the stochastic search-based approach yields a significant advantage compared to those types of approaches, as the resulting program can be run independently from the Neural Network that was used to discover them. Several improvements are possible to the presented methods. In mature domains such as Computer Vision, the representations of objects of interests have been widely studied and as a result are successful at capturing the information of each sample. In the domains of programs, obtaining informative representations remains a challenge. Our proposed approach ignores part of the structure of the program, notably temporal, due to the limited amount of existing data. The synthetic data having no structure, it wouldn’t be suitable to learn those representations from it. Gathering a larger dataset of frequently used programs so as to measure more accurately the practical performance of those methods seems the evident next step for the task of program synthesis. #include <stdint.h> int32_t p01(int32_t x) { int32_t o1 = x - 1; return x & o1; } pushq %rbp movq %rsp , %rbp movl %edi , -0x4(%rbp ) movl -0x4(%rbp ), %edi subl $0x1 , %edi movl %edi , -0x8(%rbp ) movl -0x4(%rbp ), %edi andl -0x8(%rbp ), %edi movl %edi , %eax popq %rbp retq nop nop nop (a) Source. blsrl %edi , %esi sets %ch xorq %rax , %rax sarb $0x2 , %ch rorw $0x1 , %di subb $0x3 , %dil mull %ebp subb %ch , %dh rcrb $0x1 , %dil cmovbel %esi , %eax retq (b) Optimization starting point. blsrl %edi , %eax retq (c) Alternative equivalent program. (d) Optimal solution. Figure 6: Program at different stage of the optimization.

Popper Reviews — Private Prediction Subset

Dataset Summary

This repository exposes an 80/20 train/test split of peer-reviewed manuscripts, tailored for acceptance-prediction tasks. Each example contains:

  • paper_text: OCR’d manuscript text.
  • anonymized_paper_text: the same text with the author block removed, so the text begins at the abstract.
  • decision_label: normalized accept/reject outcome.
  • decision_text: original decision string when available.
  • average_review_score: mean of numeric reviewer ratings extracted from the Popper review JSON files.

Source corpora: Popper’s ICLR, TMLR, and Nature review dumps. Only papers with an explicit accept/reject decision are included. Reference lists are removed from anonymized_paper_text to focus on the manuscript narrative.
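
The released fields already reflect this preprocessing. Purely as an illustration, a minimal sketch of that kind of cleanup, assuming plain-text heading markers such as ABSTRACT and REFERENCES, could look like the following (this is not the actual Popper pipeline):

import re

def strip_front_matter_and_references(paper_text):
    # Start the text at the abstract, dropping the title/author block.
    m = re.search(r"\bABSTRACT\b", paper_text)
    if m:
        paper_text = paper_text[m.start():]
    # Cut the reference list, if a REFERENCES heading is found.
    m = re.search(r"\n\s*REFERENCES\s*\n", paper_text)
    if m:
        paper_text = paper_text[:m.start()]
    return paper_text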

Splits

Split   Records
train   1,884
test    472

Splits are stratified with an 80/20 ratio using a fixed random seed (42).
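
For reference, an equivalent 80/20 stratified split (not necessarily byte-identical to the published one) can be rebuilt with the datasets library. This sketch assumes the stratification column is decision_label, which the card does not name explicitly:

from datasets import load_dataset, ClassLabel

token = "hf_..."  # your Hugging Face read token
full = load_dataset("popper-spiralworks/prediction_task", split="train+test", token=token)
# Stratified splitting requires the label column to be a ClassLabel feature.
full = full.cast_column("decision_label", ClassLabel(names=["accept", "reject"]))
splits = full.train_test_split(test_size=0.2, seed=42, stratify_by_column="decision_label")
print(len(splits["train"]), len(splits["test"]))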

Usage

from datasets import load_dataset

token = "hf_..."  # Hugging Face access token with read permission; the repository is gated
data = load_dataset("popper-spiralworks/prediction_task", split="train", token=token)
print(data[0]["decision_label"], data[0]["average_review_score"])
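
Because average_review_score is precomputed, quick sanity checks are cheap. Continuing from the snippet above (and assuming pandas is installed), the mean review score per decision class is:

df = data.to_pandas()
print(df.groupby("decision_label")["average_review_score"].mean())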

Processing Notes

  • OCR text comes from DeepSeek-OCR via Popper (metadata.backend = deepseek when available).
  • Average scores are computed by parsing the numeric prefix of each reviewer rating field, as in the sketch after this list.
  • Non-numeric or missing ratings are ignored during averaging.
  • Additional review metadata and reviewer comments are available in the public dataset sumuks/research_papers_with_reviews_ocr.
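
The score parsing noted above is easy to reproduce. A minimal sketch, assuming ratings are strings with a leading numeral such as "7: Good paper" (the exact formats Popper handles may differ):

import re
from statistics import mean

def average_score(ratings):
    # Keep the numeric prefix of each rating string; skip anything non-numeric.
    values = []
    for rating in ratings:
        m = re.match(r"\s*(\d+(?:\.\d+)?)", str(rating or ""))
        if m:
            values.append(float(m.group(1)))
    return mean(values) if values else None

print(average_score(["7: Good paper", "6: Marginally above threshold", "N/A"]))  # prints 6.5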

Attribution

When using this dataset, please credit the original venues (ICLR, TMLR, Nature) and cite the Popper project. Access to this repository is restricted to the Popper Spiralworks collaboration.
