# Learning Vector Fields of Differential Equations on Manifolds with Geometrically Constrained Operator-Valued Kernels

**Daning Huang** — Department of Aerospace Engineering, The Pennsylvania State University, University Park, PA 16802, USA. [email protected]
**John Harlim** — Department of Mathematics, Department of Meteorology and Atmospheric Science, Institute for Computational and Data Sciences, The Pennsylvania State University, University Park, PA 16802, USA. [email protected]
**Hanyang He** — Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, USA. [email protected]
**Yan Li** — Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802, USA. [email protected]

## Abstract

We address the problem of learning ordinary differential equations (ODEs) on manifolds. Existing machine learning methods, particularly those using neural networks, often struggle with high computational demands. To overcome this issue, we introduce a geometrically constrained operator-valued kernel that allows us to represent vector fields on the tangent bundles of smooth manifolds. The construction of the kernel imposes geometric constraints that are estimated from the data and ensures computational feasibility for learning high-dimensional systems of ODEs. Once the vector fields are estimated, e.g., by kernel ridge regression, we need an ODE solver that guarantees the solution stays on (or close to) the manifold. To this end, we propose a geometry-preserving ODE solver that approximates the exponential maps corresponding to the ODE solutions. We deduce a theoretical error bound for the proposed solver that guarantees the approximate solutions lie on the manifold in the limit of large data. We verify the effectiveness of the proposed approach on high-dimensional dynamical systems, including the cavity flow problem, the beating and travelling waves in the Kuramoto-Sivashinsky equation, and reaction-diffusion dynamics.

## 1 Introduction

In this paper, we consider the problem of learning ODEs whose solutions lie on a manifold. This problem arises in a wide range of applications, from mechanical multibody systems to electrical circuit simulation and power systems (see the references in Ascher & Petzold (1998); Kunkel (2006)), where the system of ODEs on manifolds is formulated via Differential-Algebraic Equations (Rheinboldt, 1984). One of the main challenges in this problem is that the underlying manifold (geometric) constraints are not explicitly known and need to be uncovered from the data.

A popular solution to this problem is to employ a nonlinear dimensionality reduction approach, such as autoencoders, to represent the geometric constraint of the dynamics, i.e., the manifold. A typical strategy is to learn a low-dimensional latent space to represent the original data and learn the dynamics in that latent space; the dynamics are represented using, e.g., a neural-network (NN) based discrete-time mapping (Linot & Graham, 2020), recurrent neural networks (Maulik et al., 2021; Vlachas et al., 2022), sequence-to-sequence mapping (Wu et al., 2024), Latent Dynamics Networks (Regazzoni et al., 2024), and SINDy (Fukami et al., 2021; Lin et al., 2024).
The main pitfall of this class of methods is that the latent space only provides a global parametrization of the manifold, whose dimension is typically larger than the intrinsic dimension of the manifold, and the learned dynamics are not guaranteed to approximate the exponential maps (for discrete-time models) or the vector fields (for continuous-time models) of the manifold. As a result, the predicted trajectories may deviate from the manifold, limiting long-term prediction accuracy, as we shall see in several numerical examples with NN baselines. Furthermore, the computational cost of training such NN-based nonlinear estimators is known to be expensive. In our numerical comparison with the proposed linear method, we found that while their testing times are comparable, the training time for NN models is over 800 times slower on a simple example with two ambient dimensions.

**Contribution.** This motivates us to develop a linear estimator of vector fields on the tangent bundles of smooth manifolds. Motivated by the SINDy algorithm (Brunton et al., 2016), we construct a geometrically constrained dictionary to represent the unknown vector fields, where the constraints are approximated from the available point cloud data induced by the observed time series of the dynamical system. Since such a dictionary is subject to the curse of dimensionality, we employ the standard "kernel trick" to mitigate this issue. Unlike previous works in the non-manifold setting with scalar-valued kernels (Baddoo et al., 2022; Yang et al., 2024) and kriging/Gaussian processes (Glaz et al., 2010), the geometric constraints give rise to an operator-valued kernel that leverages the intrinsic dimension of the manifold to enable practical implementation. To numerically integrate the system of ODEs with the estimated vector fields, we devise an ODE solver that guarantees the solutions lie on the manifold in the limit of large data. We demonstrate the effectiveness of this approach numerically on several high-dimensional test problems, including the cavity flow problem, the beating and travelling waves in the Kuramoto-Sivashinsky equation, and reaction-diffusion dynamics.

**Paper organization.** The remainder of this paper is organized as follows. In Section 2, we discuss the geometrically constrained dictionary, extending the SINDy approach and generalizing it to an operator-valued kernel to mitigate high-dimensional problems. In Section 3, we introduce a geometry-preserving time integration scheme, provide an illuminating example that motivates this integrator, and discuss its convergence properties. In Section 4, we discuss closely related approaches that will serve as baselines to quantify the performance of the proposed approach, with results documented in Section 5. In Section 6, we give a brief summary. We supplement the paper with five appendices that report the detailed numerical tools needed in the algorithm, the computational complexity, the theoretical proofs, and additional numerical results.

## 2 Geometrically Constrained Dictionary

Consider dynamical systems governed by a system of ODEs,
$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^n, \tag{1}$$
where the vector field $\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^n$. The Sparse Identification of Nonlinear Dynamics (SINDy) approach (Wang et al., 2011; Brunton et al., 2016) approximates the components of $\mathbf{f} = (f^1, \ldots, f^n)$ by a sparse regression on a set of appropriate basis functions. Typical choices of basis functions are polynomials and/or trigonometric functions.
An example of a polynomial dictionary is
$$\theta(\mathbf{x}) = \begin{bmatrix} 1 & \mathbf{x}^\top & (\mathbf{x}^2)^\top & \cdots \end{bmatrix} \in \mathbb{R}^m, \quad \text{where } \mathbf{x}^j := \left\{ x_1^{j_1} x_2^{j_2} \cdots x_n^{j_n} : \forall\, j = j_1 + j_2 + \cdots + j_n \right\}.$$
Given a set of labeled training data $\{\mathbf{x}_i, \dot{\mathbf{x}}_i\}_{i=1,\ldots,N}$, where the subscript index denotes temporal information, $\mathbf{x}_i := \mathbf{x}(t_i)$ and $\dot{\mathbf{x}}_i := \dot{\mathbf{x}}(t_i)$, the SINDy approach approximates $f^k(\mathbf{x}) \approx f_\epsilon^k(\mathbf{x}; \hat{\xi}^k) := \theta(\mathbf{x})\hat{\xi}^k$ with coefficients $\hat{\xi}^k \in \mathbb{R}^m$ obtained by solving the following sparse regression problem,
$$\hat{\Xi} = \operatorname*{argmin}_{\Xi} \sum_{i=1}^N \|\dot{\mathbf{x}}_i - \mathbf{f}_\epsilon(\mathbf{x}_i; \Xi)\|^2 + \lambda \|\Xi\|_1, \tag{2}$$
where $\lambda > 0$ is a sparsity parameter, $\mathbf{f}_\epsilon = (f_\epsilon^1, \ldots, f_\epsilon^n)$, and $\hat{\Xi} = ((\xi^1)^\top, \ldots, (\xi^n)^\top)^\top \in \mathbb{R}^{nm}$; by default, $\|\cdot\|$ denotes the $\ell_2$ norm.

There are two key issues with this approach as it stands. First, the method is sensitive to the choice of dictionary. If the space spanned by the dictionary does not encompass the underlying function, the estimated vector field will not be accurate when evaluated on new sample data from the same distribution. Second, the method becomes computationally impractical as $n$ increases, since the size of the dictionary grows exponentially as a function of $n$. In particular, if the dictionary consists of monomials of degree up to $p$, then the size of the dictionary is $m \propto p^{n-1}$.

Let us now focus on the first issue for a class of dynamics whose solutions lie on a $d$-dimensional Riemannian submanifold $\mathcal{M} \subset \mathbb{R}^n$. In this context, the vector field $\mathbf{f} \in \mathfrak{X}(\mathcal{M})$ is a map $\mathbf{f}: \mathcal{M} \to T\mathcal{M}$ that identifies the state $\mathbf{x} \in \mathcal{M}$ with a vector in the tangent space, $\mathbf{f}(\mathbf{x}) \in T_\mathbf{x}\mathcal{M}$. Denote the bases of the tangent space $T_{\mathbf{x}_i}\mathcal{M} \cong \mathbb{R}^d$ and the normal space as the columns of $\mathbf{T}_i \in \mathbb{R}^{n \times d}$ and $\mathbf{N}_i \in \mathbb{R}^{n \times (n-d)}$, respectively. We note that these basis vectors can be identified from point cloud data using the local SVD technique (Donoho & Grimes, 2003; Zhang & Zha, 2004) or higher-order methods (Jiang et al., 2024); see Appendix A for details. For the remainder of this paper, we denote by $\hat{\mathbf{T}}_i$ and $\hat{\mathbf{N}}_i$ the point cloud approximations to $\mathbf{T}_i$ and $\mathbf{N}_i$, respectively.

Let the matrix $P(\mathbf{x}): \mathbb{R}^n \to T_\mathbf{x}\mathcal{M} \subset \mathbb{R}^n$ be an orthogonal projection onto the local tangent space at $\mathbf{x}$. One can show that $P(\mathbf{x}_i) = \mathbf{T}_i \mathbf{T}_i^\top$, where the columns of $\mathbf{T}_i \in \mathbb{R}^{n \times d}$ form a set of orthonormal vectors that span $T_{\mathbf{x}_i}\mathcal{M}$. With this background, since $\mathbf{f}(\mathbf{x}) \in T_\mathbf{x}\mathcal{M}$, it is clear that $P(\mathbf{x})\mathbf{f}(\mathbf{x}) = \mathbf{f}(\mathbf{x})$ under the $n$-dimensional Euclidean inner product. Practically, when the manifold is unknown, one can approximate $P(\mathbf{x}_i)$ by $\hat{P}(\mathbf{x}_i) = \hat{\mathbf{T}}_i \hat{\mathbf{T}}_i^\top$.
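To make the tangent-space estimation concrete, the following is a minimal NumPy sketch of the local SVD step described above (not the authors' implementation; the function name, the brute-force neighbor search, and the neighborhood size `k` are our own choices):

```python
import numpy as np

def estimate_tangent_basis(X, i, k=10, d=2):
    """Estimate an orthonormal basis T_hat_i of the tangent space at X[i]
    from the k nearest neighbors of X[i] in the point cloud X (N x n)."""
    # Find the k nearest neighbors of the base point (brute force).
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[1:k + 1]           # skip index 0 (the point itself)
    # Center the neighbors at the base point and take the local SVD.
    D = (X[nbrs] - X[i]).T                      # n x k matrix of local displacements
    U, S, _ = np.linalg.svd(D, full_matrices=True)
    T_hat = U[:, :d]                            # leading d left singular vectors: tangent basis
    N_hat = U[:, d:]                            # remaining columns: normal-space basis
    return T_hat, N_hat
```

The estimated projection is then obtained as `P_hat = T_hat @ T_hat.T`, i.e., $\hat{P}(\mathbf{x}_i) = \hat{\mathbf{T}}_i \hat{\mathbf{T}}_i^\top$.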
Based on this information, we propose the following modification of the SINDy dictionary for modeling the vector field,
$$\mathbf{f}(\mathbf{x}) \approx \mathbf{f}_\epsilon(\mathbf{x}; \hat{\Xi}) = \hat{P}(\mathbf{x}) \Theta(\mathbf{x}) \hat{\Xi}, \tag{3}$$
where $\Theta(\mathbf{x}) \in \mathbb{R}^{n \times nm}$ is a block-diagonal matrix with $\theta(\mathbf{x})$ as the diagonal block, and the coefficients $\hat{\Xi} \in \mathbb{R}^{nm}$ are obtained by fitting the model to the observed vector field $\dot{\mathbf{x}}_i$. In practice, when the available data are only the time series $X = \{\mathbf{x}_i\}_{i=1}^N$, one needs to approximate the derivatives. We consider approximating $\dot{\mathbf{x}}_i$ with $\mathbf{y}_i = \hat{\mathbf{T}}_i \hat{\mathbf{T}}_i^\top (\mathbf{x}_{i+1} - \mathbf{x}_i)/\Delta t$.

As mentioned before, another challenge with this dictionary is that it is subject to the curse of dimensionality: the number of candidate functions grows exponentially and eventually becomes computationally intractable as the dimension of the state increases. To mitigate this problem, we propose an operator-valued kernel deduced from the dictionary in (3) that allows the vector field to lie on the tangent bundle in the limit of large data, with a compact model whose rank equals the intrinsic dimension of the manifold. In the remainder of this section, we first use the kernel trick to motivate a Geometrically constrained Multivariate Kernel Ridge Regression (GMKRR) model in the ambient space. Then we formalize the GMKRR model rigorously using a Reproducing Kernel Hilbert Space (RKHS) with a family of operator-valued kernels. Lastly, we recast the GMKRR model in the intrinsic space to enable practical computational implementation.

### 2.1 Kernelization of the Geometrically Constrained Dictionary

While kernel regression with $\ell_q$ regularization ($0 < q \leq 1$) has been studied extensively (see Shi et al., 2019, and the references therein), it is computationally much simpler to employ $\ell_2$ regularization, which is the focus of this paper. Specifically, we consider the following modification of (2) as the primal form,
$$\hat{\Xi} = \operatorname*{argmin}_{\Xi} \|\mathbf{y} - \boldsymbol{\Psi}\Xi\|^2 + \lambda \|\Xi\|^2, \tag{4}$$
where $\mathbf{y} = [\mathbf{y}_1^\top, \mathbf{y}_2^\top, \cdots, \mathbf{y}_N^\top]^\top \in \mathbb{R}^{nN}$ with $\mathbf{y}_i = \hat{\mathbf{T}}_i \hat{\mathbf{T}}_i^\top (\mathbf{x}_{i+1} - \mathbf{x}_i)/\Delta t$, and $\boldsymbol{\Psi} = [\boldsymbol{\psi}_1^\top, \boldsymbol{\psi}_2^\top, \cdots, \boldsymbol{\psi}_N^\top]^\top \in \mathbb{R}^{nN \times nm}$ with $\boldsymbol{\psi}_i = \boldsymbol{\psi}(\mathbf{x}_i) = \hat{P}(\mathbf{x}_i)\Theta(\mathbf{x}_i) \in \mathbb{R}^{n \times nm}$. Next, introduce the dual variable $\boldsymbol{\alpha} \in \mathbb{R}^{nN}$ so that $\Xi = \boldsymbol{\Psi}^\top \boldsymbol{\alpha}$; the dual form of (4) is
$$\hat{\boldsymbol{\alpha}} = \operatorname*{argmin}_{\boldsymbol{\alpha}} \|\mathbf{y} - \boldsymbol{\Psi}\boldsymbol{\Psi}^\top \boldsymbol{\alpha}\|^2 + \lambda \|\boldsymbol{\Psi}^\top \boldsymbol{\alpha}\|^2 \equiv \operatorname*{argmin}_{\boldsymbol{\alpha}} \|\mathbf{y} - \mathbf{K}\boldsymbol{\alpha}\|^2 + \lambda \boldsymbol{\alpha}^\top \mathbf{K} \boldsymbol{\alpha}, \tag{5}$$
where the Gram matrix $\mathbf{K} = \boldsymbol{\Psi}\boldsymbol{\Psi}^\top \in \mathbb{R}^{nN \times nN}$ and its $(i,j)$th block is $\mathbf{K}_{ij} = \boldsymbol{\psi}(\mathbf{x}_i)\boldsymbol{\psi}(\mathbf{x}_j)^\top \equiv k(\mathbf{x}_i, \mathbf{x}_j) \in \mathbb{R}^{n \times n}$.
The solution to the dual form is $\boldsymbol{\alpha}^* = (\lambda \mathbf{I} + \mathbf{K})^{-1} \mathbf{y}$, and the predictive model for a new input $\mathbf{x}$ is given by
$$\mathbf{f}_\epsilon(\mathbf{x}) = \boldsymbol{\psi}(\mathbf{x}) \Xi = \boldsymbol{\psi}(\mathbf{x}) \boldsymbol{\Psi}^\top \boldsymbol{\alpha} = \mathbf{k}(\mathbf{x}) (\lambda \mathbf{I} + \mathbf{K})^{-1} \mathbf{y}, \tag{6}$$
where $\mathbf{k}(\mathbf{x}) = [k(\mathbf{x}, \mathbf{x}_1), k(\mathbf{x}, \mathbf{x}_2), \cdots, k(\mathbf{x}, \mathbf{x}_N)]$. Since $\boldsymbol{\psi}(\mathbf{x}) = \hat{P}(\mathbf{x}) \Theta(\mathbf{x}) \in \mathbb{R}^{n \times nm}$, we have
$$k(\mathbf{x}, \mathbf{x}') = \boldsymbol{\psi}(\mathbf{x}) \boldsymbol{\psi}(\mathbf{x}')^\top = \hat{P}(\mathbf{x}) \Theta(\mathbf{x}) \Theta(\mathbf{x}')^\top \hat{P}(\mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}') \hat{P}(\mathbf{x}) \hat{P}(\mathbf{x}'), \tag{7}$$
where $\rho(\mathbf{x}, \mathbf{x}') = \theta(\mathbf{x}) \theta(\mathbf{x}')^\top \in \mathbb{R}$, and the last equality holds because $\Theta(\mathbf{x}) \Theta(\mathbf{x}')^\top = \mathrm{diag}[\rho(\mathbf{x}, \mathbf{x}'), \cdots, \rho(\mathbf{x}, \mathbf{x}')] = \rho(\mathbf{x}, \mathbf{x}') \mathbf{I} \in \mathbb{R}^{n \times n}$. Up to this point, the regression problem with (3) has been converted to a GMKRR problem. The geometrically constrained function $k$ in (7) is used as a matrix-valued kernel and is constructed from a finite set of candidate functions defined in the ambient space.
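As an illustration of equations (5)-(7), here is a minimal NumPy sketch of the ambient GMKRR fit and prediction (our own simplified rendering, not the reference implementation). Here `P_hat[i]` stores $\hat{P}(\mathbf{x}_i)$ from the local SVD sketch above; the dense solve costs $O((nN)^3)$, which is precisely the bottleneck removed by the intrinsic formulation in Section 2.3:

```python
import numpy as np

def fit_gmkrr(X, Y, P_hat, rho, lam=1e-6):
    """Solve the dual problem (5): alpha* = (lam*I + K)^{-1} y, where the
    (i, j) block of K is k(x_i, x_j) = rho(x_i, x_j) * P_hat[i] @ P_hat[j]."""
    N, n = X.shape
    K = np.zeros((n * N, n * N))
    for i in range(N):
        for j in range(N):
            K[i*n:(i+1)*n, j*n:(j+1)*n] = rho(X[i], X[j]) * P_hat[i] @ P_hat[j]
    y = Y.reshape(-1)                           # stack y_1, ..., y_N into R^{nN}
    alpha = np.linalg.solve(lam * np.eye(n * N) + K, y)
    return alpha

def predict_gmkrr(x, P_x, X, P_hat, rho, alpha):
    """Evaluate the predictive model (6): f_eps(x) = k(x) alpha."""
    N, n = X.shape
    kx = np.hstack([rho(x, X[j]) * P_x @ P_hat[j] for j in range(N)])  # n x nN
    return kx @ alpha

# Example choice: squared-exponential kernel rho(x, x') = exp(-||x - x'||^2 / gamma).
rho = lambda x, xp, gamma=1.0: np.exp(-np.sum((x - xp) ** 2) / gamma)
```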
### 2.2 The RKHS of the Intrinsic GMKRR Model

In the following, we generalize the GMKRR model to a family of matrix kernel functions that may include infinitely many candidate functions via the construction of an $\mathfrak{X}(\mathcal{M})$-valued RKHS $\mathcal{H}$.

**Definition 2.1.** *Let $X$ be a non-empty set, $W$ a separable Hilbert space with inner product $\langle \cdot, \cdot \rangle$, and $L(W)$ a Banach space of bounded linear operators on $W$. A function $k: X \times X \mapsto L(W)$ is SPD if (1) for any pair $(\mathbf{x}, \mathbf{x}') \in X \times X$, $k(\mathbf{x}, \mathbf{x}')^* = k(\mathbf{x}', \mathbf{x})$, and (2) for any finite set of points $\{\mathbf{x}_i\}_{i=1}^N$ in $X$ and $\{\mathbf{f}_i\}_{i=1}^N$ in $W$, $\sum_{i,j=1}^N \langle \mathbf{f}_i, k(\mathbf{x}_i, \mathbf{x}_j) \mathbf{f}_j \rangle \geq 0$. The function $k$ is an operator-valued kernel on $X$ and $W$.*

**Definition 2.2.** *Following the notation of the previous definition, for each $\mathbf{x} \in X$ and $\mathbf{f}, \mathbf{g} \in W$, define $k_\mathbf{x} \mathbf{f}(\mathbf{x}') = k(\mathbf{x}, \mathbf{x}') \mathbf{f}$ for all $\mathbf{x}' \in X$. For $\mathbf{f}' = \sum_{i=1}^N k_{\mathbf{x}_i} \mathbf{f}_i$ and $\mathbf{g}' = \sum_{i=1}^N k_{\mathbf{x}'_i} \mathbf{g}_i$, define the inner product $\langle \mathbf{f}', \mathbf{g}' \rangle_{\mathcal{H}} = \sum_{i,j=1}^N \langle \mathbf{f}_i, k(\mathbf{x}_i, \mathbf{x}'_j) \mathbf{g}_j \rangle$. Then $\mathcal{H} = \mathrm{span}\{k_\mathbf{x} \mathbf{f} \mid \mathbf{x} \in X, \mathbf{f} \in W\}$ forms an RKHS with reproducing kernel $k$. The RKHS has the reproducing property $\langle \mathbf{f}(\mathbf{x}), \mathbf{g} \rangle_W = \langle \mathbf{f}(\cdot), k(\cdot, \mathbf{x}) \mathbf{g} \rangle_{\mathcal{H}}$.*

**Definition 2.3.** *The inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ also induces the RKHS norm $\|\mathbf{f}'\|_{\mathcal{H}} = \sqrt{\langle \mathbf{f}', \mathbf{f}' \rangle_{\mathcal{H}}}$ for all $\mathbf{f}' = \sum_{i=1}^N k_{\mathbf{x}_i} \mathbf{f}_i$. When $W$ is an $n$-dimensional Euclidean space, $\|\mathbf{f}'\|_{\mathcal{H}} = \sqrt{\mathbf{f}^\top \mathbf{K} \mathbf{f}}$, where $\mathbf{f} = [\mathbf{f}_1^\top, \mathbf{f}_2^\top, \cdots, \mathbf{f}_N^\top]^\top$ and $\mathbf{K} \in \mathbb{R}^{nN \times nN}$ with the $(i,j)$th block $\mathbf{K}_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$.*

**Lemma 2.1.** *Consider a function $k: \mathcal{M} \times \mathcal{M} \mapsto L(\mathfrak{X}(\mathcal{M}))$ defined as $k(\mathbf{x}, \mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}') \hat{P}(\mathbf{x}) \hat{P}(\mathbf{x}')$, where $\rho: \mathbb{R}^n \times \mathbb{R}^n \mapsto \mathbb{R}$ is a scalar-valued kernel. Then $k$ is an operator-valued kernel.*

See Appendix D for the proof of Lemma 2.1. The operator-valued kernel $k$ forms the desired $\mathfrak{X}(\mathcal{M})$-valued RKHS, denoted $\mathcal{H}_\mathcal{M}$. In practice, we can use any SPD kernel, such as the squared exponential (SE) kernel $\rho(\mathbf{x}, \mathbf{x}') = \exp\left(-\|\mathbf{x} - \mathbf{x}'\|^2 / \gamma\right)$ or the Matérn kernels (see Appendix E.1). The function in (7) is a special case of the operator-valued kernel on $\mathcal{M}$ and $\mathfrak{X}(\mathcal{M})$.

Subsequently, the GMKRR model is reformulated via $\mathcal{H}_\mathcal{M}$. Given a dataset $\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^N$, the unknown vector field $\mathbf{f} \in \mathcal{H}_\mathcal{M}$ is parametrized as $\mathbf{f}(\mathbf{x}) = \sum_{i=1}^N k(\mathbf{x}_i, \mathbf{x}) \boldsymbol{\alpha}_i \equiv \mathbf{k}(\mathbf{x}) \boldsymbol{\alpha}$, where $\boldsymbol{\alpha} = [\boldsymbol{\alpha}_1^\top, \boldsymbol{\alpha}_2^\top, \cdots, \boldsymbol{\alpha}_N^\top]^\top$ is determined by minimizing the objective
$$J(\mathbf{f}) = \sum_{i=1}^N \|\mathbf{y}_i - \mathbf{f}(\mathbf{x}_i)\|^2 + \lambda \|\mathbf{f}\|^2_{\mathcal{H}_\mathcal{M}} \equiv \|\mathbf{y} - \mathbf{K}\boldsymbol{\alpha}\|^2 + \lambda \boldsymbol{\alpha}^\top \mathbf{K} \boldsymbol{\alpha},$$
which is the same optimization problem solved in the previous section, with solution given by (6). However, the new formulation admits a family of operator-valued kernels that may involve an infinite set of candidate functions.

### 2.3 Conversion to Intrinsic Space

Subsequently, we reformulate the GMKRR model (6) so that the predictive model is effectively defined in the intrinsic space and becomes computationally tractable to train and evaluate. Noting that $\hat{P}(\mathbf{x}) = \hat{\mathbf{T}}_\mathbf{x} \hat{\mathbf{T}}_\mathbf{x}^\top$, the operator-valued kernel $k$ in the ambient space can be rewritten as
$$k(\mathbf{x}, \mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}') \hat{P}(\mathbf{x}) \hat{P}(\mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}') \hat{\mathbf{T}}_\mathbf{x} O_{\mathbf{x}\mathbf{x}'} \hat{\mathbf{T}}_{\mathbf{x}'}^\top \equiv \hat{\mathbf{T}}_\mathbf{x}\, r(\mathbf{x}, \mathbf{x}')\, \hat{\mathbf{T}}_{\mathbf{x}'}^\top, \tag{8}$$
where $O_{\mathbf{x}\mathbf{x}'} = \hat{\mathbf{T}}_\mathbf{x}^\top \hat{\mathbf{T}}_{\mathbf{x}'} \in \mathbb{R}^{d \times d}$ and $r(\mathbf{x}, \mathbf{x}') = \rho(\mathbf{x}, \mathbf{x}') \hat{\mathbf{T}}_\mathbf{x}^\top \hat{\mathbf{T}}_{\mathbf{x}'} = \rho(\mathbf{x}, \mathbf{x}') O_{\mathbf{x}\mathbf{x}'} \in \mathbb{R}^{d \times d}$. Using (8), the Gram matrix $\mathbf{K}$ in the ambient space is decomposed as $\mathbf{K} = \mathcal{T} \mathbf{R} \mathcal{T}^\top$, where $\mathcal{T} \in \mathbb{R}^{nN \times dN}$ is a block-diagonal matrix with diagonal blocks $\hat{\mathbf{T}}_1, \hat{\mathbf{T}}_2, \ldots, \hat{\mathbf{T}}_N$, and $\mathbf{R} \in \mathbb{R}^{dN \times dN}$ is an $N \times N$ block matrix with the $(i,j)$th block $\mathbf{R}_{ij} = r(\mathbf{x}_i, \mathbf{x}_j) = \rho(\mathbf{x}_i, \mathbf{x}_j) O_{\mathbf{x}_i \mathbf{x}_j}$.
Similarly,
$$\mathbf{k}(\mathbf{x}) = \hat{\mathbf{T}}_\mathbf{x} [r(\mathbf{x}, \mathbf{x}_1), r(\mathbf{x}, \mathbf{x}_2), \cdots, r(\mathbf{x}, \mathbf{x}_N)] \mathcal{T}^\top \equiv \hat{\mathbf{T}}_\mathbf{x}\, \mathbf{r}(\mathbf{x})\, \mathcal{T}^\top. \tag{9}$$
Using the above decompositions, the GMKRR formulation (6) in the ambient space is converted to
$$\mathbf{f}_\epsilon(\mathbf{x}) = \mathbf{k}(\mathbf{x})(\lambda \mathbf{I} + \mathbf{K})^{-1} \mathbf{y} = \hat{\mathbf{T}}_\mathbf{x}\, \mathbf{r}(\mathbf{x})\, \mathcal{T}^\top \left[ \lambda \mathbf{I} + \mathcal{T} \mathbf{R} \mathcal{T}^\top \right]^{-1} \mathbf{y} = \hat{\mathbf{T}}_\mathbf{x}\, \mathbf{r}(\mathbf{x}) (\lambda \mathbf{I} + \mathbf{R})^{-1} \mathcal{T}^\top \mathbf{y}, \tag{10}$$
where the Woodbury identity is used in the last equality. In the intrinsic GMKRR formulation (10), the matrix $(\lambda \mathbf{I} + \mathbf{R})$ is of dimension $dN \times dN$ and is computationally tractable to invert, especially if $d$ is small, regardless of the ambient dimension $n$. Furthermore, the factor $\hat{\mathbf{T}}_\mathbf{x}$ guarantees that the vector field lies on the local tangent space of the underlying manifold at $\mathbf{x}$ in the limit of large data.

Lastly, we briefly discuss the intrinsic GMKRR model from an RKHS point of view. First, it can be proved that the function $r$ in (8) is an operator-valued kernel on $\mathcal{M}$ and $L(\mathbb{R}^d)$ (the proof is similar to that of Lemma 2.1); $r$ is referred to as the intrinsic operator-valued kernel. Then $r$ induces an RKHS and the corresponding GMKRR model in the intrinsic space. The GMKRR is effectively applied to a modified dataset $\{(\mathbf{x}_i, \tilde{\mathbf{y}}_i = \hat{\mathbf{T}}_{\mathbf{x}_i}^\top \mathbf{y}_i)\}_{i=1}^N$, where the modified label $\tilde{\mathbf{y}}_i$ is the vector field expressed in the local tangent space at $\mathbf{x}_i$. In the kernel $r(\mathbf{x}, \mathbf{x}')$, if (1) $\rho$ is chosen to be the Diffusion Maps kernel and (2) the pairs of data points $(\mathbf{x}, \mathbf{x}')$ are sufficiently close so that $O_{\mathbf{x}\mathbf{x}'} = \hat{\mathbf{T}}_\mathbf{x}^\top \hat{\mathbf{T}}_{\mathbf{x}'}$ is always orthogonal, then the RKHS induced by $r$ is a subset of $L^2(\mathfrak{X}(\mathcal{M}))$ spanned by smooth eigenvector-fields of the Connection Laplacian (Singer & Wu, 2012).
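The following sketch (again our own simplified rendering, not the reference implementation) assembles the intrinsic formulation (8)-(10): the solve now involves only the $dN \times dN$ matrix $(\lambda \mathbf{I} + \mathbf{R})$, and the prefactor $\hat{\mathbf{T}}_\mathbf{x}$ returns the prediction to the estimated tangent space:

```python
import numpy as np

def fit_intrinsic_gmkrr(X, T_hat, Y, rho, lam=1e-6):
    """Intrinsic formulation (10): work with the dN x dN matrix R instead of
    the nN x nN ambient Gram matrix K. T_hat[i] is the n x d tangent basis."""
    N, n = X.shape
    d = T_hat.shape[2]
    R = np.zeros((d * N, d * N))
    for i in range(N):
        for j in range(N):
            # R_ij = rho(x_i, x_j) * O_{x_i x_j}, with O_{x_i x_j} = T_hat_i^T T_hat_j
            R[i*d:(i+1)*d, j*d:(j+1)*d] = rho(X[i], X[j]) * T_hat[i].T @ T_hat[j]
    # Project the labels onto the local tangent spaces: y_tilde_i = T_hat_i^T y_i
    y_tilde = np.concatenate([T_hat[i].T @ Y[i] for i in range(N)])
    coef = np.linalg.solve(lam * np.eye(d * N) + R, y_tilde)   # (lam*I + R)^{-1} T^T y
    return coef

def predict_intrinsic_gmkrr(x, T_x, X, T_hat, rho, coef):
    """f_eps(x) = T_hat_x r(x) (lam*I + R)^{-1} T^T y; the factor T_x keeps the
    prediction in the estimated tangent space at x."""
    N = X.shape[0]
    r_x = np.hstack([rho(x, X[j]) * T_x.T @ T_hat[j] for j in range(N)])  # d x dN
    return T_x @ (r_x @ coef)
```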
## 3 Geometry-Preserving Time Integrator

While standard ODE solvers such as Runge-Kutta methods often empirically produce solutions that are close enough to the manifold (i.e., behave like a manifold-invariant scheme) for sufficiently small time steps, it is well known that the manifold-invariance property only holds when the solvers are employed on a special class of manifolds (Calvo et al., 1996). Various ODE solvers on manifolds have been proposed in the literature; see Hairer (2011); Crouch & Grossman (1993) for general vector fields and Leimkuhler & Patrick (1996) for Hamiltonian systems. In this section, we illustrate this issue on a simple example, propose a normal correction (NC) to the classical explicit Euler scheme, which we call Euler+NC in the remainder of this paper, and provide a convergence study. This approach can be viewed as a realization of the local coordinate approach (see Section III.2 in Hairer, 2011) with the local parameterization estimated by GMLS. The proposed normal correction approximates all of the higher-order terms in the exponential map $\exp_{\mathbf{x}_i}(\mathbf{f}(\mathbf{x}_i)\,\Delta t)$, including the second fundamental form, and is computationally more attractive than the classical Taylor method, which requires the derivatives of the estimated vector fields.

### 3.1 A Motivating Example

[Figure 1: Dynamics on a series of 1D manifolds. (a) Parameterized manifolds. (b) Predictions over a long time period.]

Consider the scalar ODE $\dot{\theta} = \frac{3}{2} - \cos(\theta)$, $\theta(0) = 0$, whose solution has a period of $2\pi$, and embed the solution in a 2D ambient space by $(x_1, x_2) = (r(\theta)\cos(\theta), r(\theta)\sin(\theta))$, where $r(\theta) = 1 + D\cos(K\theta)$. The 2D embedding is illustrated in Fig. 1a for $K = 3$ and a series of $D$ values, where neighboring points are separated by one step size $\Delta t = 0.04$. When $D = 0$, the 1D manifold is a "simple" unit circle. As $D$ increases, the manifold becomes more distorted, which may pose a challenge in solving the dynamics.
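For concreteness, a minimal sketch of the data generation for this example is given below (forward Euler on the intrinsic ODE is our own choice of reference integrator for the sketch; the step count is arbitrary):

```python
import numpy as np

# Intrinsic dynamics: theta_dot = 3/2 - cos(theta), theta(0) = 0 (period 2*pi).
# Embedding into 2D: (x1, x2) = (r(theta) cos(theta), r(theta) sin(theta)),
# with r(theta) = 1 + D cos(K*theta); D = 0 gives the unit circle.
D, K, dt = 0.1, 3, 0.04

def theta_dot(theta):
    return 1.5 - np.cos(theta)

def embed(theta):
    r = 1.0 + D * np.cos(K * theta)
    return np.array([r * np.cos(theta), r * np.sin(theta)])

# Generate the training trajectory by integrating the intrinsic ODE and
# mapping each state to the ambient space.
thetas = [0.0]
for _ in range(400):
    thetas.append(thetas[-1] + dt * theta_dot(thetas[-1]))
X = np.array([embed(th) for th in thetas])   # N x 2 point cloud on the manifold
```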
*Idea Generation Category: Conceptual Integration (id: OwpLQrpdwE)*

---
# Proxy Denoising for Source-Free Domain Adaptation

**Song Tang**^{1,2,3}, **Wenxin Su**^1, **Yan Gan**^4, **Mao Ye**^{5,*}, **Jianwei Zhang**^2 & **Xiatian Zhu**^{6,*}
^1 University of Shanghai for Science and Technology, ^2 Universität Hamburg, ^3 ComOriginMat Inc., ^4 Chongqing University, ^5 University of Electronic Science and Technology of China, ^6 University of Surrey
[email protected], {suwenxin43,cvlab.uestc}@gmail.com, [email protected]
(* Corresponding author)

## Abstract

Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to an unlabeled target domain with no access to the source data. Inspired by the success of large Vision-Language (ViL) models in many applications, recent research has validated ViL's benefit for SFDA by using their predictions as pseudo supervision. However, we observe that ViL's supervision can be noisy and inaccurate at an unknown rate, introducing additional negative effects during adaptation. To address this thus-far ignored challenge, we introduce a novel **Pro**xy **De**noising (**ProDe**) approach. The key idea is to leverage the ViL model as a proxy to facilitate the adaptation process towards the latent domain-invariant space. We design a proxy denoising mechanism that corrects ViL's predictions, grounded on a proxy confidence theory that models the dynamic effect of the proxy's divergence from the domain-invariant space during adaptation. To capitalize on the corrected proxy, we derive a mutual knowledge distilling regularization. Extensive experiments show that ProDe significantly outperforms current state-of-the-art alternatives under the conventional closed-set setting and the more challenging open-set, partial-set, generalized SFDA, multi-target, multi-source, and test-time settings. Our code and data are available at https://github.com/tntek/source-free-domain-adaptation.

## 1 Introduction

Unsupervised Domain Adaptation (UDA) uses well-annotated source data and unannotated target data concurrently to achieve cross-domain transfer. However, this data-access requirement raises increasing concerns about safety and privacy. This calls for restricting access to source-domain training data, leading to a more practical but challenging transfer learning setting: Source-Free Domain Adaptation (SFDA) (Li et al., 2020a; Xia et al., 2021; Roy et al., 2022).

In the absence of the source domain, cross-domain distribution-matching approaches are no longer applicable (Ganin & Lempitsky, 2015; Kang et al., 2019). Self-supervised learning then comes into play by generating and mining auxiliary information to enable unsupervised adaptation, in two main routes. *The first* treats SFDA as a special case of UDA by explicitly creating a pseudo source domain, enabling UDA methods such as adversarial learning (Xia et al., 2021; Kurmi et al., 2021) or minimizing domain shift (Tian et al., 2022; Kundu et al., 2022). *The second* further refines the supervision generated from the source model (Lao et al., 2021; Wang et al., 2022a; Huang et al., 2021) or the target data (Yang et al., 2022; Tang et al., 2022), as the constructed pseudo source domain may be noisy. These methods all perform alignment without any guidance from the target feature space towards the unknown domain-invariant feature space.
There has been growing interest in leveraging pre-trained Vision-Language (ViL) models (e.g., CLIP (Radford et al., 2021)) for transfer learning challenges. This is because ViL models were trained on a massive amount of diverse vision-language data, encompassing rich knowledge potentially useful for many downstream tasks. For instance, Ge et al. (2022); Lai et al. (2023); Singha et al. (2023) disentangle domain and category information in the visual features of the ViL model by learning domain-specific textual or visual prompts. ViL models have also been used to address the SFDA problem (Tang et al., 2024c; Xiao et al., 2024). These methods treat the ViL model's predictions as ground truth, which can be noisy in many unknown cases, ultimately harming their performance.

To address the limitation mentioned above, in this paper we propose a new **Pro**xy **De**noising (**ProDe**) approach for SFDA. In contrast to (Tang et al., 2024c; Xiao et al., 2024), we consider the ViL model/space as a *noisy* proxy of the latent domain-invariant space[^1] that needs to be denoised. In the absence of any good reference model for measuring the noise level of the already strong ViL model's predictions, we exploit *the dynamics of the domain adaptation process*, which starts at the source model space and terminates, presumably, in the latent domain-invariant space. In particular, this takes into account the proxy's divergence from the domain-invariant space (Fig. 1). Specifically, we approximately model the effect of the ViL model's prediction error on domain adaptation by formulating a proxy confidence theory, in relation to the discrepancy between the source domain and the current under-adaptation model. This leads to a novel proxy denoising mechanism for ViL prediction correction. To capitalize on the corrected ViL predictions more effectively, a mutual knowledge distilling regularization is further designed.

[Figure 1: Conceptual illustration of ProDe. We align the adapting direction with the desired trajectory by leveraging a proxy space that approximates the latent domain-invariant space. This process incorporates direction adjustments based on proxy error correction, implementing proxy denoising, and finally achieves enhanced model adaptation.]

Our **contributions** are summarized as follows: **(1)** We investigate, for the first time, the inaccurate predictions of ViL models in the context of SFDA. **(2)** We formulate a novel ProDe method that reliably corrects the ViL model's predictions under the guidance of a proxy confidence theory; a mutual knowledge distilling regularization is introduced to better capitalize on the refined proxy predictions. **(3)** Extensive experiments on open benchmarks show that our ProDe significantly outperforms previous alternatives in the closed-set setting, as well as the more challenging partial-set, open-set, generalized SFDA, multi-target, multi-source, and test-time settings.
## 2 Related Work

**Source-Free Domain Adaptation.** One main challenge in SFDA is the lack of supervision during model adaptation. To overcome this, current methods broadly fall into three categories. The *first category* converts SFDA into conventional UDA by introducing a pseudo source domain. This can be achieved by building the pseudo source domain with generative models (Tian et al., 2022; Li et al., 2020b) or by extracting from the target domain a subset whose distribution is similar to the source (Du et al., 2023). The *second category* mines auxiliary information from the pre-trained source model to help align the feature distribution of the target domain to the source domain. Commonly used auxiliary factors include multiple hypotheses (Lao et al., 2021), prototypes (Zhou et al., 2024), source distribution estimation (Ding et al., 2022), and hard samples (Li et al., 2021). The *last category* focuses on the target domain and creates additional constraints to correct the semantic noise in model transfer. In practice, domain-aware gradient control (Yang et al., 2021b) and data geometry, such as the intrinsic neighborhood structure (Tang et al., 2021) and the target data manifold (Tang et al., 2022; Tang et al., 2024a), have been exploited to generate high-quality pseudo-labels (Liang et al., 2020; Chen et al., 2022b) or to inject assistance in an unsupervised fashion (Yang et al., 2021a). These methods refine auxiliary information from domain-specific knowledge, such as the source model and the unlabeled target data, without resorting to external knowledge sources such as pre-trained multimodal foundation models.

[^1]: The issue of noisy predictions is evidenced by the inferior zero-shot performance of the ViL model, e.g., CLIP, on the target domains (see Tab. 4). Here, "domain-invariant space" refers to an ideal latent embedding space where the mapped features from different domains align with the same probability distribution.

[Figure 2: **Left:** Dynamics of the effect of the ViL model's prediction error (or proxy error) during alignment. (a) In the initial adaptation phase, it is acceptable to overlook the proxy errors. However, as the in-training model approaches the proxy space, these errors grow more noticeable, leading to a continuous decline in the reliability of ViL predictions, as shown in (b) and (c). **Right:** Our ProDe capitalizes on the corrected proxy, involving a mutual knowledge distilling regularization and a proxy denoising mechanism that refines the ViL logits.]

**Vision-Language Models.** ViL models, such as CLIP (Radford et al., 2021) and GLIP (Li et al., 2022), have shown promise in various tasks (Liang et al., 2023; Wang et al., 2022c) due to their ability to capture modality-invariant features. There are two main lines of research. The *first line* aims to improve their performance. For instance, text-prompt learning (Zhou et al., 2022; Ge et al., 2022) and visual-prompt learning (Wang et al., 2023; Jia et al., 2022) have been adopted, using learnable prompts related to application scenarios. Data efficiency of these models can be improved by repurposing (Andonian et al., 2022) or removing noisy data (Wang et al., 2021b). The *second line* uses ViL models as external knowledge to boost downstream tasks, via three strategies: plain fusion (Liu et al., 2024), knowledge distillation (Pei et al., 2023), and information entropy regulation (Cha et al., 2022).
Beyond the latest ViL-based SFDA models (Tang et al., 2024c; Xiao et al., 2024), we uniquely tackle the challenge of mitigating the noise in ViL's supervision.

## 3 Methodology

### 3.1 Problem Formulation

We start with a labeled source domain and an unlabeled target domain, covering the same $C$ categories. Let $\mathcal{X}_S$ and $\mathcal{Y}_S$ be the source samples and labels. The target samples and ground-truth target labels are denoted as $\mathcal{X}_T = \{\mathbf{x}_i\}_{i=1}^n$ and $\mathcal{Y}_T = \{y_i\}_{i=1}^n$, respectively, where $n$ is the number of samples. SFDA aims to learn a target model $\theta_t: \mathcal{X}_T \to \mathcal{Y}_T$ given (1) the pre-trained source model $\theta_s: \mathcal{X}_S \to \mathcal{Y}_S$ and (2) the unlabeled target data $\mathcal{X}_T$. In addition, we leverage a ViL model $\theta_v$ that produces noisy supervision.

To address noisy ViL supervision, we exploit the dynamics of the domain adaptation process. As shown in Fig. 2(a), we deal with three spaces: the source domain $D_S$ (i.e., the source image embedding space), the domain-invariant space $D_I$, and the ViL space $D_V$ (the best available proxy for $D_I$). In this context, $D_I$ typically refers to an *ideal, unknown latent embedding space* that is domain generalized. We want to align the in-training model $D_T^t$ from $D_S$ to $D_I$ as $t \in [0 \sim T] \gg 0$. Without access to $D_I$, we propose to perform **proxy alignment** by aligning $D_T^t$ towards $D_V$. We denote the discrepancy between $D_I$ and $D_V$ as the **proxy error** $\mathbf{e}_{VI}$, reflecting ViL's prediction errors. We then transform the task of minimizing the errors of ViL predictions into controlling the proxy error, by establishing a proxy confidence theory.

### 3.2 Proxy Confidence Theory

Understanding the impact of proxy errors on domain adaptation is critical. To account for the dynamics of domain adaptation, as demonstrated in Fig. 2(a), we consider two typical situations of the proxy alignment process. We denote the distances of $D_T^t$ to $D_V$ and to $D_I$ as $\mathbf{d}_V^t$ and $\mathbf{d}_I^t$, respectively, and note that the distinction between $D_V$ and $D_I$, i.e., the proxy error $\mathbf{e}_{VI}$, is a space-to-space distance in vector form. To ease understanding, we note two cases:

- **Case 1:** When $D_T^t$ is far from $D_V$, e.g., at the beginning of adaptation ($t = 0$), it holds that $\mathbf{d}_I^0 \approx \mathbf{d}_V^0 \gg \mathbf{e}_{VI}$. This implies that aligning to $D_I$ or to $D_V$ is equivalent. Consequently, the proxy error $\mathbf{e}_{VI}$ can be ignored; that is, the ViL prediction can be deemed trustworthy.
- **Case 2:** When $D_T^t$ approaches $D_V$, e.g., in the later phase of adaptation ($t = U \gg 0$), tackling the proxy errors becomes increasingly crucial. The distance relationship also evolves to $\mathbf{d}_I^U = \mathbf{d}_V^U + \mathbf{e}_{VI}$ (by the vector geometric property that $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{u} + \mathbf{v}$ form a triangle, where $\mathbf{u}$ and $\mathbf{v}$ are two sides). At this point, ViL predictions become less reliable.
The proxy errors dynamically affect the proxy alignment, as reflected in the relative relationship between $\mathbf{d}_V^t$ and $\mathbf{d}_I^t$, defined as
$$\eta_t = \frac{|\mathbf{d}_I^t|}{|\mathbf{d}_V^t|} = \frac{|\mathbf{d}_V^t + \mathbf{e}_{VI}|}{|\mathbf{d}_V^t|} \leq \frac{|\mathbf{d}_V^t| + |\mathbf{e}_{VI}|}{|\mathbf{d}_V^t|} = 1 + \frac{|\mathbf{e}_{VI}|}{|\mathbf{d}_V^t|}, \tag{1}$$
where $\eta_t$ quantifies the *error impact degree* and $|\cdot|$ denotes the absolute value (length) of a distance vector. During proxy alignment, the quantity $|\mathbf{e}_{VI}|/|\mathbf{d}_V^t|$ in Eq. (1) gradually increases from a very small value (e.g., Case 1) to larger ones (e.g., Case 2), driving the impact degree $\eta_t$ up from 1. With this dynamic, as shown in Fig. 2(b), the variance of the ViL prediction gradually increases, implying a progressive decrease in the reliability of ViL predictions.
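As a hypothetical numerical illustration (the numbers are ours, not from the experiments): if $|\mathbf{e}_{VI}| = 0.1$ while $|\mathbf{d}_V^t|$ shrinks from $1.0$ at $t = 0$ to $0.2$ late in adaptation, the bound in Eq. (1) grows from $\eta_t \leq 1.1$ to $\eta_t \leq 1.5$, matching the intuition of Cases 1 and 2.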
At any time $t$, we treat the ViL predictions as approximating a Gaussian distribution $\mathcal{N}(\theta_v(x_i), \delta_t)$ with mean $\theta_v(x_i)$ and prediction variance $\delta_t \propto \eta_t$ (Fig. 2(c)). This is because we consider the ViL predictions to be influenced by various sources of noise and uncertainty, which justifies the Gaussian approximation according to the *Central Limit Theorem* (Chow & Teicher, 1988). Given that $\mathbf{e}_{VI}$ is unknown, we cannot formulate these dynamics explicitly. We therefore approximate this problem by quantifying the prediction variance with the varying confidence of ViL predictions. This conversion can be expressed in the form of a probability distribution with proxy confidence as
$$\mathcal{N}(\theta_v(x_i), \delta_t) \Longrightarrow P\big(G_{P(V)} = \mathit{True}, t\big)\, P(V), \tag{2}$$
where $P(V)$ is the probability distribution of the proxy space $D_V$; $G_{P(V)}$ stands for the random event that the sampling result (i.e., a ViL prediction) from $P(V)$ is confident; and $P\big(G_{P(V)} = \mathit{True}, t\big)$ is the *proxy confidence*, indicating the probability of the event $G_{P(V)}$ being true at time $t$. This confidence decreases progressively, as the reliability of ViL predictions reduces relative to the ability of the in-training model.

By framing the ViL prediction as a probabilistic event, we can leverage the proxy confidence $P\big(G_{P(V)} = \mathit{True}, t\big)$ to quantify the reliability of ViL predictions at any point during adaptation. This facilitates measuring the impact of proxy errors. Specifically, we formulate the *proxy confidence theory* as **Theorem 1** (see proof in Appendix A).

**Theorem 1.** *We note that the source domain ($D_S$), the domain-invariant space ($D_I$), the proxy space ($D_V$), and the in-training model ($D_T^t$) follow the probability distributions $P(S)$, $P(I)$, $P(V)$, and $P(T^t)$, respectively, where $S$, $I$, $V$, and $T^t$ are the corresponding random variables. With our proxy alignment idea (see Sec. 3.1), the proxy confidence can be expressed as*
$$P\big(G_{P(V)} = \mathit{True}, t\big) \propto \frac{P(T^t)}{P(S)}. \tag{3}$$

This theorem tells us that *the effect of ViL prediction errors on domain adaptation can be approximately estimated by contrasting the distributions of the source model and the current in-training model.*

### 3.3 Capitalizing on the Corrected Proxy

**Overview.** To better leverage the corrected proxy, we propose a novel ProDe method featuring two designs: (1) a proxy denoising mechanism, refining the original ViL predictions at the logit level, and (2) a mutual knowledge distilling regularization, encouraging the extraction of useful knowledge from the ViL model $\theta_v$ into the in-training target model $\theta_t$, as shown in Fig. 2(d).

**Proxy denoising.** This module aims to denoise the ViL predictions. By **Theorem 1** (Eq. (3)), we further convert the ViL space's probability distribution with proxy confidence (i.e., Eq. (2)) into
$$\log\left(\frac{P(T^t)}{P(S)}\, P(V)\right) = \log P(V) - \big(\log P(S) - \log P(T^t)\big), \tag{4}$$
where the latter two terms form an adjustment used to correct the first term (i.e., the ViL prediction). Under this formula, we realize our denoising mechanism as
$$\mathbf{p}'_i = \mathrm{softmax}\big(\theta_v(\mathbf{x}_i, \mathbf{v}) - \omega\,[\theta_s(\mathbf{x}_i) - \theta_t(\mathbf{x}_i)]\big), \tag{5}$$
where $\theta_v(\cdot)$, $\theta_s(\cdot)$, and $\theta_t(\cdot)$ apply the ViL/source/target model to obtain the corresponding logits, $\mathbf{v}$ is the learnable prompt, and the hyperparameter $\omega$ specifies the correction strength. The output $\mathbf{p}'_i$ is a denoised ViL prediction.
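Eq. (5) is straightforward to implement. The following minimal NumPy sketch (the function names are our own; in ProDe the three logit vectors come from the prompt-conditioned ViL model, the frozen source model, and the in-training target model) shows the correction:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def denoise_vil_prediction(logits_vil, logits_src, logits_tgt, omega=1.0):
    """Proxy denoising, Eq. (5): correct the ViL logits by the gap between the
    frozen source model and the current in-training target model.
    All inputs are per-sample logit vectors over the C classes."""
    corrected = logits_vil - omega * (logits_src - logits_tgt)
    return softmax(corrected)
```

Early in adaptation the source and target logits are close, so the correction is small (Case 1); as the target model departs from the source, the adjustment grows, consistent with Eq. (1).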
**Mutual knowledge distilling.** This component aims to distill useful knowledge from the ViL model into our target model. This is achieved by designing two loss terms:
$$\mathcal{L}_{\mathrm{ProDe}} = \min_{\theta_t, \mathbf{v}} \; \alpha \underbrace{\left( -\,\mathbb{E}_{\mathbf{x}_i \in \mathcal{X}_t} \mathbf{MI}\big(\mathbf{p}'_i, \mathbf{p}_i\big) + \gamma \sum_{c=1}^{C} \bar{q}_c \log \bar{q}_c \right)}_{\mathcal{L}_{\mathrm{Apt}}} + \beta \underbrace{\left( -\,\mathbb{E}_{\mathbf{x}_i \in \mathcal{X}_t} \sum_{c=1}^{C} \mathbb{1}\big[c = y'_i\big] \log p_{i,c} \right)}_{\mathcal{L}_{\mathrm{Ref}}}. \tag{6}$$
The first term $\mathcal{L}_{\mathrm{Apt}}$ adapts both the target model and the learnable prompt of the ViL model by maximizing the unbiased mutual information $\mathbf{MI}(\cdot, \cdot)$ (Ji et al., 2019) between the denoised ViL prediction $\mathbf{p}'_i$ and the target prediction $\mathbf{p}_i = \mathrm{softmax}(\theta_t(\mathbf{x}_i))$. This design is motivated by the observation that, despite the massive (often noisy) training data used, ViL models (e.g., CLIP) do not always outperform a specialized expert model such as the supervised source model, for three reasons: (1) ViL models are generalists, while source-domain models are specialized; (2) ViL models may include irrelevant data, whereas source-domain models use curated, relevant data; (3) ViL models might overlook task-specific features that are captured by source-domain models. To avoid solution collapse (Ghasedi Dizaji et al., 2017), we use a common category balance constraint (Yang et al., 2021a), where $\bar{q}_c = \frac{1}{n}\sum_{i=1}^n p_{i,c}$ is the average likelihood of class $c$ over the $n$ training samples under the target model, across a total of $C$ categories. The second term $\mathcal{L}_{\mathrm{Ref}}$ is a typical pseudo-labeling strategy applying a classification objective, with the pseudo label $y'_i$ obtained from the denoised ViL predictions; $\mathbb{1}[c = y'_i]$ denotes an indicator function.

Note that as training proceeds, the ViL predictions become less reliable and useful, while the negative effect of $\mathbf{e}_{VI}$ grows in a relative sense. This means the proposed denoising becomes more important over the course of adaptation. We provide the model training procedure in Appendix B.

## 4 Experiments

**Datasets.** We evaluate on four widely used domain adaptation benchmarks. Among them, **Office-31** (Saenko et al., 2010) and **Office-Home** (Venkateswara et al., 2017) are small-scale and medium-scale datasets, respectively, whilst **VisDA** (Peng et al., 2017) and **DomainNet-126** (Saito et al., 2019) are both challenging large-scale datasets. Their details are provided in Appendix C.

**Settings.** We consider a variety of SFDA settings: (1) closed-set, (2) partial-set (initialized in SHOT (Liang et al., 2020)), (3) open-set (initialized in SHOT (Liang et al., 2020)), (4) generalized SFDA (Yang et al., 2021b), (5) multi-target (SF-MTDA, detailed in (Kumar et al., 2023)), (6) multi-source (SF-MSDA, detailed in (Ahmed et al., 2021)), and (7) test-time adaptation (TTA) (Wang et al., 2021a). More details are given in Appendix D.

### 4.1 Competitors

To evaluate ProDe, we select 30 related methods for comparison, divided into four groups. *(1) The first* includes two base models involved in the SFDA problem: the source model (termed Source) and CLIP zero-shot (termed CLIP) (Radford et al., 2021). *(2) The second* includes seven current state-of-the-art domain adaptation methods with a ViL model (adopting CLIP in practice), covering the UDA and SFDA settings: DAPL-R (Ge et al., 2022), PADCLIP-R (Lai et al., 2023), ADCLIP-R (Singha et al., 2023), PDA-R (Bai et al., 2024), DAMP-R (Du et al., 2024), DIFO-R and DIFO-V (Tang et al., 2024c).
*Idea Generation Category: Direct Enhancement (id: FIj9IEPCKr)*

---
# RB-Modulation: Training-Free Stylization Using Reference-Based Modulation

**Litu Rout**^{1,2,*}, **Yujia Chen**^1, **Nataniel Ruiz**^1, **Abhishek Kumar**^3, **Constantine Caramanis**^2, **Sanjay Shakkottai**^2, **Wen-Sheng Chu**^1
^1 Google, ^2 UT Austin, ^3 Google DeepMind
{litu.rout,constantine,sanjay.shakkottai}@utexas.edu, {liturout,yujiachen,natanielruiz,abhishk,wschu}@google.com
(* This work was done during an internship at Google.)

## Abstract

We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play solution for training-free personalization of diffusion models. Existing training-free approaches exhibit difficulties in (a) style extraction from reference images in the absence of additional style or content text descriptions, (b) unwanted content leakage from reference style images, and (c) effective composition of style and content. RB-Modulation is built on a novel stochastic optimal controller where a style descriptor encodes the desired attributes through a terminal cost. The resulting drift not only overcomes the difficulties above, but also ensures high fidelity to the reference style and adheres to the given text prompt. We also introduce a cross-attention-based feature aggregation scheme that allows RB-Modulation to decouple content and style from the reference image. With theoretical justification and empirical evidence, our test-time optimization framework demonstrates precise extraction and control of *content* and *style* in a training-free manner. Further, our method allows a seamless composition of content and style, which marks a departure from the dependency on external adapters or ControlNets. See the project page https://rb-modulation.github.io/ for code and further details.

## 1 Introduction

Text-to-image (T2I) generative models (Ramesh et al., 2021; Rombach et al., 2022; Saharia et al., 2022) have excelled in crafting visually appealing images from text prompts. These T2I models are increasingly employed in creative endeavors such as visual arts (Xu et al., 2024), gaming (Pearce et al., 2023), personalized image synthesis (Ruiz et al., 2023; Huang et al., 2024a; Hu et al., 2021; Shah et al., 2023), stylized rendering (Sohn et al., 2023; Hertz et al., 2023; Wang et al., 2024a; Jeong et al., 2024), and image inversion or editing (Ulyanov et al., 2018; Delbracio & Milanfar, 2023; Rout et al., 2023b; 2024; Mokady et al., 2023).

Content creators often need precise control over both the *content* and the *style* of generated images to match their vision. While the content of an image can be conveyed through text, articulating an artist's unique style, characterized by distinct brushstrokes, color palette, material, and texture, is substantially more nuanced. This has led to research on personalization through visual prompting (Sohn et al., 2023; Hertz et al., 2023; Wang et al., 2024a).

Recent studies have focused on finetuning pre-trained T2I models to learn style from a set of reference images (Gal et al., 2022; Ruiz et al., 2023; Sohn et al., 2023; Hu et al., 2021). This involves optimizing the model's text embeddings, model weights, or both, using the denoising diffusion loss. However, these methods demand substantial computational resources for training or finetuning large-scale foundation models, making them expensive to adapt to new, unseen styles. Furthermore, these methods often depend on human-curated images of the same style, which is less practical and can compromise quality when only a single reference image is available.
In training-free **stylization**, recent methods (Hertz et al., 2023; Wang et al., 2024a; Jeong et al., 2024) manipulate keys and values within the attention layers using just one reference style image. These methods face challenges both in extracting the style from the reference style image and in accurately transferring the style to a target content image. For instance, during the DDIM inversion step (Song et al., 2021a) used by StyleAligned (Hertz et al., 2023), fine-grained details tend to be compromised. To mitigate this issue, InstantStyle (Wang et al., 2024a) incorporates features from the reference style image into specific layers of a previously trained IP-Adapter (Ye et al., 2023). However, identifying the exact layer for feature injection in a model is complex and not universally applicable across models. Also, feature injection can cause content leakage from the style image into the generated content. Moving on to content-style **composition**, InstantStyle (Wang et al., 2024a) employs a ControlNet (Zhang et al., 2023) (an additionally trained network) to preserve image layout, which inadvertently limits its diversity.

[Figure 1: Given a single reference image (rounded rectangle), our method **RB-Modulation** offers a plug-and-play solution for (a) stylization, and (b) content-style composition with various prompts (e.g., "a guitar", "a piano", "a butterfly") while maintaining sample diversity and prompt alignment. For instance, given a reference style image (e.g., "melting golden 3d rendering style") and a content image (e.g., "a dog"), our method adheres to the desired prompts without leaking contents (e.g., flower) from the reference style image and without being restricted to the fixed pose or layout of the reference dog image.]

We introduce Reference-Based Modulation (RB-Modulation), a novel approach for stylization and composition that eliminates the need for training or finetuning diffusion models (e.g., ControlNet (Zhang et al., 2023) or adapters (Ye et al., 2023; Hu et al., 2021)). Our work reveals that the reverse dynamics in diffusion models can be formulated as a stochastic optimal control problem. By incorporating style features into the controller's terminal cost, we modulate the drift field of the diffusion model's reverse dynamics, enabling training-free personalization. Unlike conventional attention processors that often leak content from the reference style image, we propose to enhance image fidelity via an Attention Feature Aggregation (AFA) module that decouples content from the reference style image. We demonstrate the effectiveness of our method in stylization (Hertz et al., 2023; Wang et al., 2024a; Jeong et al., 2024) and style+content composition, as illustrated in Figure 1(a) and (b), respectively. Our experiments show that RB-Modulation outperforms current SoTA methods (Hertz et al., 2023; Wang et al., 2024a) in terms of human preference and prompt-alignment metrics.

**Our contributions are summarized as follows:**

- We present Reference-Based Modulation (RB-Modulation), a novel stochastic optimal control based test-time optimization framework that enables training-free, personalized style and content control, with a new Attention Feature Aggregation (AFA) module to maintain high fidelity to the reference image while adhering to the given prompt (§4).
- We provide theoretical justifications connecting optimal control and reverse diffusion dynamics. We leverage this connection to incorporate desired attributes (e.g., style) in our controller's terminal cost and personalize T2I models in a training-free manner (§5).
- We perform extensive experiments covering stylization and content-style composition, demonstrating superior performance over SoTA methods in human preference metrics (§6).
## 2 Related Work

**Personalization of T2I models.** T2I generative models (Rombach et al., 2022; Podell et al., 2023; Pernias et al., 2024) can now generate high-quality images from text prompts. Their text-following ability has unlocked new avenues in personalized content creation, including text-guided image editing (Mokady et al., 2023; Rout et al., 2024), solving inverse problems (Rout et al., 2023b; 2024), concept-driven generation (Ruiz et al., 2023; Tewel et al., 2023; Kumari et al., 2023; Chen et al., 2024), personalized outpainting (Tang et al., 2023), identity preservation (Ruiz et al., 2024; Huang et al., 2024a; Wang et al., 2024b), and stylized synthesis (Sohn et al., 2023; Wang et al., 2024a; Hertz et al., 2023; Shah et al., 2023). To tailor T2I models for a specific style (e.g., painting) or content (e.g., object), existing methods follow one of two recipes: (1) full finetuning (FT) or parameter-efficient finetuning (PEFT), and (2) training-free methods, which we discuss below.

**Finetuning T2I models for personalization.** FT (Ruiz et al., 2023; Everaert et al., 2023) and PEFT (Kumari et al., 2023; Hu et al., 2021; Sohn et al., 2023; Shah et al., 2023) methods excel at capturing style or object details when the underlying T2I model can be finetuned on a few (typically 4) reference images for a few thousand iterations. PARASOL (Tarrés et al., 2024) requires supervised data via a cross-modal search to train both the denoising U-Net and a projector network. Diff-NST (Ruta et al., 2023) trains the attention processor by targeting the 'V' values within the denoising U-Net. The curation of supervised data and the resource-intensive finetuning required for every style or content make these methods challenging for practical usage.

**Training-free methods for personalization.** Training-free personalization methods are preferable to finetuning methods given their vastly faster execution time. In **StyleAligned** (Hertz et al., 2023), a reference style image and a text prompt describing the style are used to extract style features via DDIM inversion (Song et al., 2021a). Target queries and keys are then normalized using adaptive instance normalization (Huang & Belongie, 2017) based on their reference counterparts. Finally, reference image keys and values are merged with DDIM-inverted latents in self-attention layers, which tends to leak content information from the reference style image (Figure 2). Moreover, the need for a textual description in the DDIM inversion step can degrade its performance. **DiffusionDisentanglement** (Wu et al., 2023) aims to reduce the approximation error in DDIM inversion by jointly minimizing a perceptual loss and a directional CLIP loss, which is prone to content leakage (Wang et al., 2024a). **Swapping Self-Attention (SSA)** (Jeong et al., 2024) addresses these limitations by replacing the target keys and values in self-attention layers with those from a reference style image. It still relies on DDIM inversion to cache keys and values of the reference style, which tends to compromise fine-grained details (Wang et al., 2024a).
Both StyleAligned (Hertz et al., 2023) and SSA (Jeong et al., 2024) require two reverse processes to share their attention layer features and thus demand significant memory. **InstantStyle** (Wang et al., 2024a) injects reference style features into specific cross-attention layers of IP-Adapter (Ye et al., 2023), addressing two key limitations: DDIM inversion and memory-intensive reverse processes. However, pinpointing the exact layers for feature injection is complex and may not generalize to other models. In addition, when composing style and content, InstantStyle (Wang et al., 2024a) relies on ControlNet (Zhang et al., 2023), which can limit the diversity of generated images to fixed layouts and deviate from the prompt.

**Optimal Control:** Stochastic optimal control finds wide applications in diverse fields such as molecular dynamics (Holdijk et al., 2024), economics (Fleming & Rishel, 2012), non-convex optimization (Chaudhari et al., 2018), robotics (Theodorou et al., 2011), and mean-field games (Carmona et al., 2018). Despite its extensive use, and recent works on its connections to diffusion-based generative models (Berner et al., 2024; Tzen & Raginsky, 2019; Chen et al., 2023), it has been less explored in training-free personalization. In this paper, we introduce a novel test-time optimization framework leveraging the main concepts from optimal control to achieve training-free personalization. A key aspect of optimal control is designing a controller to guide a stochastic process towards a desired terminal condition (Fleming & Rishel, 2012). This aligns with our goal of training-free personalization, as we target a specific style or content at the end of the reverse diffusion process, which can be incorporated in the controller's terminal condition. RB-Modulation overcomes several challenges encountered by SoTA methods (Hertz et al., 2023; Jeong et al., 2024; Wang et al., 2024a). Since RB-Modulation does not require DDIM inversion, it retains fine-grained details, unlike StyleAligned (Hertz et al., 2023). Using a stochastic controller to refine the trajectory of a single reverse process, it overcomes the limitation of coupled reverse processes (Hertz et al., 2023). By incorporating a style descriptor in our controller's terminal cost, it eliminates the dependency on Adapters (Ye et al., 2023; Hu et al., 2021) or ControlNets (Zhang et al., 2023) used by InstantStyle (Wang et al., 2024a).

3 PRELIMINARIES

**Diffusion models** consist of two stochastic processes: (a) _noising process_, modeled by a Stochastic Differential Equation (SDE) known as the forward-SDE: $\mathrm{d}X_t = f(X_t, t)\,\mathrm{d}t + g(X_t, t)\,\mathrm{d}W_t$, $X_0 \sim p_0$, and (b) _denoising process_, modeled by the time-reversal of the forward-SDE under mild regularity conditions (Anderson, 1982), also known as the reverse-SDE:

$$\mathrm{d}X_t = \big[f(X_t, t) - g^2(X_t, t)\nabla \log p(X_t, t)\big]\,\mathrm{d}t + g(X_t, t)\,\mathrm{d}W_t, \quad X_1 \sim \mathcal{N}(0, I_d). \tag{1}$$

Here, $W = (W_t)_{t\geq 0}$ is standard Brownian motion in a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq 0}, P)$, $p(\cdot, t)$ denotes the marginal density at time $t$, and $\nabla \log p(\cdot, t)$ the corresponding score function. $f(X_t, t)$ and $g(X_t, t)$ are called the drift and volatility, respectively. A popular choice, $f(X_t, t) = -X_t$ and $g(X_t, t) = \sqrt{2}$, corresponds to the well-known forward Ornstein–Uhlenbeck (OU) process.
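To make the discretization concrete, the following is a minimal sketch (ours, not from the paper) of an Euler–Maruyama step for the reverse-SDE (1) under the OU choice $f(x,t) = -x$, $g = \sqrt{2}$; the `score` callable is a hypothetical stand-in for $\nabla \log p(\cdot, t)$, which in practice is a learned network.

```python
import numpy as np

def reverse_sde_euler(score, d=2, n_steps=1000, seed=0):
    """Euler-Maruyama discretization of the reverse-SDE (1) for the OU
    forward process f(x, t) = -x, g(x, t) = sqrt(2). `score(x, t)` is a
    placeholder for the score function grad log p(x, t)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = rng.standard_normal(d)            # X_1 ~ N(0, I_d)
    for k in range(n_steps):              # integrate from t = 1 down to 0
        t = 1.0 - k * dt
        drift = -x - 2.0 * score(x, t)    # f - g^2 * score, with g^2 = 2
        x = x - drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(d)
    return x

# Toy sanity check: if p(., t) stayed N(0, I), the score is -x and the
# iterates remain approximately standard normal.
sample = reverse_sde_euler(lambda x, t: -x)
```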
For T2I generation, the reverse-SDE (1) is simulated using a neural network $s(\mathbf{x}_t, t; \theta)$ (Hyvärinen & Dayan, 2005; Vincent, 2011) to approximate $\nabla_{\mathbf{x}} \log p(\mathbf{x}_t, t)$. Importantly, to accelerate the sampling process in practice (Song et al., 2021a; Karras et al., 2022; Zhang & Chen, 2022), the reverse-SDE (1) shares the same path measure with a probability flow ODE:

$$\mathrm{d}X_t = \Big[f(X_t, t) - \frac{1}{2}g^2(X_t, t)\nabla \log p(X_t, t)\Big]\,\mathrm{d}t,$$

where $X_1 \sim \mathcal{N}(0, I_d)$.

**Personalized diffusion models** either fully finetune $\theta$ of $s(\mathbf{x}_t, t; \theta)$ (Ruiz et al., 2023; Everaert et al., 2023), or train a parameter-efficient adapter $\Delta\theta$ for $s(\mathbf{x}_t, t; \theta + \Delta\theta)$ on reference style images (Hu et al., 2021; Sohn et al., 2023; Shah et al., 2023). Our method does not finetune $\theta$ or train $\Delta\theta$. Instead, we derive a new drift field through a stochastic control that _modulates_ the reverse-SDE (1).

4 METHOD

**Personalization using optimal control:** Normalize time $t$ by the total number of diffusion steps $T$ such that $0 \leq t \leq 1$. Let us denote by $u : \mathbb{R}^d \times [0,1] \to \mathbb{R}^d$ a controller from the admissible set of controls $\mathcal{U} \subseteq \mathbb{R}^d$, $X_t^u \in \mathbb{R}^d$ a state variable, $\ell : \mathbb{R}^d \times \mathbb{R}^d \times [0,1] \to \mathbb{R}$ the transient cost, and $h : \mathbb{R}^d \to \mathbb{R}$ the terminal cost of the reverse process $(X_t^u)_{t=1}^{0}$. We show in _§_5 that training-free personalization can be formulated as a control problem where the drift of the standard reverse-SDE (1) is modified via RB-modulation:

$$\min_{u \in \mathcal{U}} \; \mathbb{E}\Big[\int_0^1 \ell(X_t^u, u(X_t^u, t), t)\,\mathrm{d}t + \gamma h(X_0^u)\Big], \quad \text{where} \tag{2}$$

$$\mathrm{d}X_t^u = \big[f(X_t^u, t) - g^2(X_t^u, t)\nabla \log p(X_t^u, t) + u(X_t^u, t)\big]\,\mathrm{d}t + g(X_t^u, t)\,\mathrm{d}W_t, \quad X_1^u \sim \mathcal{N}(0, I_d).$$

Importantly, the terminal cost $h(\cdot)$, weighted by $\gamma$, captures the discrepancy in feature space between the styles of the reference image and the generated image. The resulting controller $u(\cdot, t)$ modulates the drift over time to satisfy this terminal cost. We derive the solution to this optimal control problem through the Hamilton-Jacobi-Bellman (HJB) equation (Fleming & Rishel, 2012); refer to Appendix A for details. Our proposed RB-Modulation **Algorithm 1** has two key components: (a) stochastic optimal controller and (b) attention feature aggregation. Below, we discuss each in turn.

**(a) Stochastic Optimal Controller (SOC):** We show that the reverse dynamics in diffusion models can be framed as a stochastic optimal control problem with a quadratic terminal cost (theoretical analysis in _§_5). For personalization using a reference style image $X_0^f = \mathbf{z}_0$, we use a Contrastive Style Descriptor (CSD) (Somepalli et al., 2024) to extract style features $\Psi(X_0^f)$. Since the score functions $s(\mathbf{x}_t, t; \theta) \approx \nabla \log p(X_t, t)$ are available from pre-trained diffusion models (Podell et al., 2023; Pernias et al., 2024), our goal is to add a correction term $u(\cdot, t)$ to modulate the reverse-SDE and minimize the overall cost (2).
We approximate $X_0^u$ with its conditional expectation using Tweedie's formula (Efron, 2011; Rout et al., 2023b; 2024). Finally, we incorporate the style features into our controller's terminal cost as $h(X_0^u) = \|\Psi(X_0^f) - \Psi(\mathbb{E}[X_0^u \mid X_t^u])\|_2^2$. Our theoretical results (_§_5) suggest that the optimal controller can be obtained by solving the HJB equation and letting $\gamma \to \infty$. In practice, this translates to dropping the transient cost $\ell(X_t^u, u(X_t^u, t), t)$ and solving (2) with only the terminal constraint, _i.e_.,

$$\min_{u \in \mathcal{U}} \|\Psi(X_0^f) - \Psi(\mathbb{E}[X_0^u \mid X_t^u])\|_2^2. \tag{3}$$

Thus, we solve (3) to find the optimal control $u$ and use this controller in the reverse dynamics (2) to update the current state from $X_t^u$ to $X_{t-\Delta t}^u$ (recall that time flows backwards in the reverse-SDE (1)). Our implementation of (3) is given in **Algorithm 1**, which follows from our theoretical insights.

**Implementation challenge:** For smaller models (Rombach et al., 2022), we can directly solve our control problem (3). However, for larger models (Podell et al., 2023; Pernias et al., 2024), the control objective (3) requires backpropagation through the score network with potentially billions of parameters. This significantly increases time and memory complexity (Rout et al., 2023b; 2024). We propose a test-time proximal gradient descent approach to address this challenge. The key ingredient of our **Algorithm 1** is to find the previous state $X_{t-\Delta t}$ by modulating the current state $X_t$ based on an optimal controller $u^*$. The optimal controller $u^*$ is obtained by minimizing the discrepancy in style between $\bar{X}_0^u := \mathbb{E}[X_0^u \mid X_t^u = \mathbf{x}_t]$, obtained using our controlled reverse-SDE (3), and the reference style image $\mathbf{z}_0$. Motivated by this interpretation, an alternate **Algorithm 2** avoids backpropagation through $\bar{X}_0^u$ in the terminal cost. Instead of forcing the dummy variable $\mathbf{x}_0$, which serves as a proxy for the denoised estimate produced by $s(\mathbf{x}_t, t; \theta)$, to be decided by the dynamics of the reverse-SDE as in **Algorithm 1**, we allow it to be only approximately faithful to the dynamics. This is implemented by adding a proximal penalty, _i.e_.,

$$\mathbf{x}_0^* = \arg\min_{\mathbf{x}_0 \in \mathbb{R}^d} \|\Psi(X_0^f) - \Psi(\mathbf{x}_0)\|_2^2 + \lambda\|\mathbf{x}_0 - \mathbb{E}[X_0^u \mid X_t^u]\|_2^2,$$

where the hyper-parameter $\lambda$ controls the faithfulness of the reverse dynamics. This penalty assumes that with a small step-size in (3), $\mathbf{x}_0^*$ and $\mathbb{E}[X_0^u \mid X_t^u = \mathbf{x}_t]$ will be close. Thus, **Algorithm 2** enables personalization of large-scale foundation models, _matching the speed of training-free methods and obtaining 5-20X speedup over training-based methods_; see Table 4 in Appendix B.2 for details. While prior works (Chung et al., 2023; Zhu et al., 2023; He et al., 2024) have used a proximal sampler in related settings, their underlying generative model is not personalized. We believe that this is an important reason why our method results in a significant speedup while satisfying the terminal constraints.
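As a rough illustration of the proximal idea behind **Algorithm 2** (a sketch under stated assumptions, not the authors' released code), one gradient-based solve of the penalized objective might look as follows; here `psi` is a placeholder for the CSD feature extractor and `x0_hat` for the Tweedie estimate $\mathbb{E}[X_0^u \mid X_t^u = \mathbf{x}_t]$.

```python
import torch

def proximal_style_step(psi, z0_feat, x0_hat, lam=0.5, lr=0.1, n_iters=20):
    """One proximal update: find x0* that matches the reference style
    features while staying close to the Tweedie estimate x0_hat.
    psi: stand-in style extractor; z0_feat = psi(z0) of the reference."""
    x0 = x0_hat.clone().requires_grad_(True)
    opt = torch.optim.Adam([x0], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        style_loss = (z0_feat - psi(x0)).pow(2).sum()  # ||Psi(z0)-Psi(x0)||^2
        prox = lam * (x0 - x0_hat).pow(2).sum()        # lambda ||x0-x0_hat||^2
        (style_loss + prox).backward()
        opt.step()
    return x0.detach()
```

Because the proximal term keeps `x0` near `x0_hat`, no gradients ever flow through the score network itself, which is the source of the reported speedup.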
Our paper takes the first step in personalizing the underlying generative model via a novel attention processor as discussed below.

**(b) Attention Feature Aggregation (AFA):** Let $d$ denote the dimension of the latent variable $X_t$, $n_q$ the embedding dimension for query $Q$, and $n_h$ the output dimension of the hidden layer. Transformer-based diffusion models (Rombach et al., 2022; Podell et al., 2023; Pernias et al., 2024) consist of self-attention and cross-attention layers operating on a latent embedding $\mathbf{x}_t \in \mathbb{R}^{d \times n_h}$. Within the attention module $\mathrm{Attention}(Q, K, V)$, $\mathbf{x}_t$ is projected into queries $Q \in \mathbb{R}^{d \times n_q}$, keys $K \in \mathbb{R}^{d \times n_q}$, and values $V \in \mathbb{R}^{d \times n_h}$ using linear projections. Through $Q$, $K$, and $V$, attention layers capture global context and improve long-range dependencies within $\mathbf{x}_t$.

To incorporate a reference image (_e.g_., style or content) while retaining alignment with the prompt, we introduce the Attention Feature Aggregation (AFA) module. Given a prompt $\mathbf{p}$, a reference style image $I_s$, and a reference content image $I_c$, we first extract the embeddings using the CLIP text encoder (Radford et al., 2021) and the CSD image encoder (Somepalli et al., 2024). These embeddings are projected into keys and values using linear projection. We denote by $K_p$ and $V_p$ the keys and values from $\mathbf{p}$, $K_s$ and $V_s$ those from $I_s$, and $K_c$ and $V_c$ those from $I_c$ (used only in content-style composition). The query $Q$, derived from a linear projection of $\mathbf{x}_t$, remains consistent in the AFA module. To maintain consistency between text and style, we compose the keys and values of both text and style in our attention mechanism. The final output of the AFA module is given by

$$\mathrm{AFA} = \mathrm{Avg}(A_{text}, A_{style}, A_{text+style}), \quad A_{text} = \mathrm{Attention}(Q, [K; K_p], [V; V_p]),$$

$$A_{style} = \mathrm{Attention}(Q, [K; K_s], [V; V_s]), \quad A_{text+style} = \mathrm{Attention}(Q, [K; K_p; K_s], [V; V_p; V_s]),$$

where $[K; K_p] \in \mathbb{R}^{2d \times n_q}$ indicates concatenation of $K$ with $K_p$ along the number-of-tokens dimension. For style-content composition, we process the content image $I_c$ in the same way as the reference style image $I_s$, and obtain another set of attention outputs:

$$\mathrm{AFA} = \mathrm{Avg}(A_{text}, A_{style}, A_{content}, A_{content+style}),$$

$$A_{content} = \mathrm{Attention}(Q, [K; K_c], [V; V_c]), \quad A_{content+style} = \mathrm{Attention}(Q, [K; K_s; K_c], [V; V_s; V_c]).$$

Importantly, the AFA module is computationally tractable as it only requires the computation of a multi-head attention, which is widely used in practice (Podell et al., 2023).

**Disentangling content and style.** In stylization (content described by text; style illustrated by a reference style image), prior works (Hertz et al., 2023; Wang et al., 2024a) inject the entire reference style image $I_s$, which does not disentangle content and style. However, our AFA module injects
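A minimal sketch of the stylization variant of AFA, assuming single-head attention and pre-projected keys/values (shapes and helper names are illustrative, not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # q: (d, n_q), k: (m, n_q), v: (m, n_h) -> output: (d, n_h)
    w = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

def afa(q, k, v, k_p, v_p, k_s, v_s):
    """Attention Feature Aggregation for stylization: average the text,
    style, and text+style attention outputs, each computed with the
    reference keys/values concatenated along the token dimension."""
    a_text = attention(q, torch.cat([k, k_p]), torch.cat([v, v_p]))
    a_style = attention(q, torch.cat([k, k_s]), torch.cat([v, v_s]))
    a_ts = attention(q, torch.cat([k, k_p, k_s]), torch.cat([v, v_p, v_s]))
    return (a_text + a_style + a_ts) / 3.0

# Usage with random tensors (d latent tokens, m reference tokens):
d, n_q, n_h, m = 16, 8, 8, 4
r = torch.randn
out = afa(r(d, n_q), r(d, n_q), r(d, n_h), r(m, n_q), r(m, n_h), r(m, n_q), r(m, n_h))
```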
Idea Generation Category:
| 0Conceptual Integration
|
bnINPG5A32
|
# PREDICTION RISK AND ESTIMATION RISK OF THE RIDGELESS LEAST SQUARES ESTIMATOR UNDER GENERAL ASSUMPTIONS ON REGRESSION ERRORS

**Sungyoon Lee** Department of Computer Science, Hanyang University, [email protected]
**Sokbae Lee** Department of Economics, Columbia University, [email protected]

ABSTRACT

In recent years, there has been a significant growth in research focusing on minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators. However, the majority of these analyses have been limited to an unrealistic regression error structure, assuming independent and identically distributed errors with zero mean and common variance. In this paper, we explore prediction risk as well as estimation risk under more general regression error assumptions, highlighting the benefits of overparameterization in a more realistic setting that allows for clustered or serial dependence. Notably, we establish that the estimation difficulties associated with the variance components of both risks can be summarized through the trace of the variance-covariance matrix of the regression errors. Our findings suggest that the benefits of overparameterization can be extended to time series, panel and grouped data.

1 INTRODUCTION

Recent years have witnessed a fast growing body of work that analyzes minimum $\ell_2$ norm (ridgeless) interpolation least squares estimators (see, e.g., Bartlett et al., 2020; Hastie et al., 2022; Tsigler & Bartlett, 2023, and references therein). Researchers in this field were inspired by the ability of deep neural networks to accurately predict noisy training data with perfect fits, a phenomenon known as “double descent” or “benign overfitting” (e.g., Belkin et al., 2018; 2019; 2020; Zou et al., 2021; Mei & Montanari, 2022, among many others). They discovered that to achieve this phenomenon, overparameterization is critical. In the setting of linear regression, we have the training data $\{(x_i, y_i) \in \mathbb{R}^p \times \mathbb{R} : i = 1, \cdots, n\}$, where the outcome variable $y_i$ is generated from

$$y_i = x_i^\top \beta + \varepsilon_i, \quad i = 1, \dots, n,$$

$x_i$ is a vector of features (or regressors), $\beta$ is a vector of unknown parameters, and $\varepsilon_i$ is a regression error. Here, $n$ is the sample size of the training data and $p$ is the dimension of the parameter vector $\beta$. In the literature, the main object for the theoretical analyses has mainly been the out-of-sample prediction risk. That is, for the ridge or interpolation estimator $\hat\beta$, the literature has focused on

$$\mathbb{E}\big[(x_0^\top \hat\beta - x_0^\top \beta)^2 \mid x_1, \dots, x_n\big],$$

where $x_0$ is a test observation that is identically distributed as $x_i$ but independent of the training data. For example, Dobriban & Wager (2018); Wu & Xu (2020); Richards et al. (2021); Hastie et al. (2022) analyzed the predictive risk of ridge(less) regression and obtained exact asymptotic expressions under the assumption that $p/n$ converges to some constant as both $p$ and $n$ go to infinity. Overall, they found the double descent behavior of the ridgeless least squares estimator in terms of the prediction risk. Bartlett et al. (2020); Kobak et al. (2020); Tsigler & Bartlett (2023) characterized the phenomenon of benign overfitting in a different setting.
To the best of our knowledge, a vast majority of the theoretical analyses have been confined to a simple data generating process, namely, the observations are independent and identically distributed (i.i.d.), and the regression errors have mean zero, have common variance, and are independent of the feature vectors. That is,

$$(y_i, x_i^\top)^\top \sim \text{i.i.d. with } \mathbb{E}[\varepsilon_i] = 0,\ \mathbb{E}[\varepsilon_i^2] = \sigma^2 < \infty, \text{ and } \varepsilon_i \text{ independent of } x_i. \tag{1}$$

This assumption, although convenient, is likely to be unrealistic in various real-world examples. For instance, Liao et al. (2023) adopted high-dimensional linear models to examine the double descent phenomenon in economic forecasts. In their applications, the outcome variables include S&P firms' earnings, U.S. equity premium, U.S. unemployment rate, and countries' GDP growth rate. As in their applications, economic forecasts are associated with time series or panel data. As a result, it is improbable that (1) holds in these applications. As another example, Spiess et al. (2023) examined the performance of high-dimensional synthetic control estimators with many control units. The outcome variable in their application is the state-level smoking rates in the Abadie et al. (2010) dataset. Considering the geographical aspects of the U.S. states, it is unlikely that the regression errors underlying the synthetic control estimators adhere to (1). In short, it is desirable to go beyond the simple but unrealistic regression error assumption given in (1).

(Figure 1 plot omitted: train and test error curves for $c \in \{0, 1/4, 2/4, 3/4\}$ against the number of regressors, 0 to 2000.)

Figure 1: Comparison of in-sample and out-of-sample mean squared error (MSE) across various degrees of clustered noise. The vertical line indicates $p = n$ (= 1,415).

To further motivate, we start with our own real-data example from the American Community Survey (ACS) 2018, extracted from IPUMS USA (Ruggles et al., 2024). The ACS is an ongoing annual survey by the US Census Bureau that provides key information about the US population. To have a relatively homogeneous population, the sample extract is restricted to white males residing in California with at least a bachelor's degree. We consider a demographic group defined by age, the type of degree, and the field of degree. Then, we compute the average of log hourly wages for each age-degree-field group, treat each group average as the outcome variable, and predict group wages by various group-level regression models where the regressors are constructed using the indicator variables of age, degree, and field as well as their interactions. We consider 7 specifications ranging from 209 to 2,182 regressors. To understand the role of non-i.i.d. regression errors, we add artificial noise to the training sample. See Appendix A for details regarding how to generate the artificial noise. In the experiment, the constant $c$ varies such that $c = 0$ corresponds to no clustered dependence across observations, but as a positive $c$ gets larger, the noise has a larger share of clustered errors, while the variance of the overall regression errors remains the same regardless of the value of $c$. Figure 1 shows the in-sample (train) vs. out-of-sample (test) mean squared error (MSE) for various values of $c \in \{0, 0.25, 0.5, 0.75\}$.
It can be seen that the experimental results are almost identical across different values of $c$, especially when $p > n$, suggesting that the double descent phenomenon might be universal for various degrees of clustered dependence, provided that the overall variance of the regression errors remains the same. It is our main goal to provide a firm foundation for this empirical phenomenon. To do so, we articulate the following research questions:

- How can we analyze the out-of-sample prediction risk of the ridgeless least squares estimator under _general_ assumptions on the regression errors?

- Why does the prediction risk seem _not_ to be affected by the degree of dependence across observations?

To delve into the prediction risk, suppose that $\Sigma := \mathbb{E}[x_0 x_0^\top]$ is finite and positive definite. Then,

$$\mathbb{E}\big[(x_0^\top \hat\beta - x_0^\top \beta)^2 \mid x_1, \dots, x_n\big] = \mathbb{E}\big[(\hat\beta - \beta)^\top \Sigma (\hat\beta - \beta) \mid x_1, \dots, x_n\big].$$

If $\Sigma = I$ (i.e., the case of isotropic features), where $I$ is the identity matrix, the mean squared error of the estimator, defined by $\mathbb{E}[\|\hat\beta - \beta\|^2]$, where $\|\cdot\|$ is the usual Euclidean norm, is the same as the expectation of the prediction risk defined above. However, if $\Sigma \neq I$, the link between the two quantities is less intimate. One may regard the prediction risk as the $\Sigma$-weighted mean squared error of the estimator, whereas $\mathbb{E}[\|\hat\beta - \beta\|^2]$ can be viewed as an "unweighted" version, even if $\Sigma \neq I$. In other words, regardless of the variance-covariance structure of the feature vector, $\mathbb{E}[\|\hat\beta - \beta\|^2]$ treats each component of $\beta$ "equally." The mean squared error of the estimator is arguably one of the most standard criteria to evaluate the quality of the estimator in statistics. For instance, in the celebrated work by James & Stein (1961), the mean squared error criterion is used to show that the sample mean vector is not necessarily optimal even for standard normal vectors (so-called "Stein's paradox"). Many follow-up papers used the same criterion; e.g., Hansen (2016) compared the mean squared error of ordinary least squares, James–Stein, and Lasso estimators in an underparameterized regime. Both $\Sigma$-weighted and unweighted versions of the mean squared error are interesting objects to study. For example, Dobriban & Wager (2018) called the former "predictive risk" and the latter "estimation risk" in high-dimensional linear models; Berthier et al. (2020) called the former "generalization error" and the latter "reconstruction error" in the context of stochastic gradient descent for the least squares problem using the noiseless linear model. In this paper, we analyze both weighted and unweighted mean squared errors of the ridgeless estimator under general assumptions on the data-generating processes, not to mention anisotropic features. Furthermore, our focus is on finite-sample analysis, that is, both $p$ and $n$ are fixed but $p > n$. Although most of the existing papers consider the simple setting as in (1), our work is not the first paper to consider more general regression errors in the overparameterized regime. Chinot et al. (2022); Chinot & Lerasle (2023) analyzed minimum norm interpolation estimators as well as regularized empirical risk minimizers in linear models without any conditions on the regression errors.
Specifically, Chinot & Lerasle (2023) showed that, with high probability and without assumptions on the regression errors, for the minimum norm interpolation estimator, $(\hat\beta - \beta)^\top \Sigma (\hat\beta - \beta)$ is bounded from above by $\big(\|\beta\|^2 \sum_{i \geq c \cdot n} \lambda_i(\Sigma) \,\vee\, \sum_{i=1}^n \varepsilon_i^2\big)/n$, where $c$ is an absolute constant and $\lambda_i(\Sigma)$ are the eigenvalues of $\Sigma$ in descending order. Chinot & Lerasle (2023) also obtained bounds on the estimation error $(\hat\beta - \beta)^\top (\hat\beta - \beta)$. Our work is distinct and complements these papers in the sense that we allow for a general variance-covariance matrix of the regression errors. The main motivation of not making any assumptions on $\varepsilon_i$ in Chinot et al. (2022) and Chinot & Lerasle (2023) is to allow for potentially adversarial errors. We aim to allow for a general variance-covariance matrix of the regression errors to accommodate time series and clustered data, which are common in applications. See, e.g., Hansen (2022) for a textbook treatment (see Chapter 14 for time series and Section 4.21 for clustered data).

The main contribution of this paper is that we provide _exact finite-sample_ characterization of the variance component of the prediction and estimation risks under the assumption that $X = [x_1, x_2, \cdots, x_n]^\top$ is _left-spherical_ (e.g., the $x_i$'s can be i.i.d. normal with mean zero but more general); the $\varepsilon_i$'s _can be correlated and have non-identical variances_; and the $\varepsilon_i$'s are independent of the $x_i$'s. Specifically, the variance term can be factorized into a product of two terms: one term depends only on the _trace_ of the variance-covariance matrix, say $\Omega$, of the $\varepsilon_i$'s; the other term is solely determined by the distribution of the $x_i$'s. Interestingly, we find that although $\Omega$ may contain non-zero off-diagonal elements, only the trace of $\Omega$ matters, as hinted by Figure 1, and we further demonstrate our finding via numerical experiments. In addition, we obtain exact finite-sample expressions for the bias terms when the regression coefficients follow the random-effects hypothesis (Dobriban & Wager, 2018). Our finite-sample findings offer a distinct viewpoint on the prediction and estimation risks, contrasting with the asymptotic inverse relationship (for optimally chosen ridge estimators) between the predictive and estimation risks uncovered by Dobriban & Wager (2018). Finally, we connect our findings to the existing results on the prediction risk (e.g., Hastie et al., 2022) by considering the asymptotic behavior of the estimation risk. Remarkably, our findings stand in sharp contrast to the well-established results in econometrics. In the latter, unlike in our framework, one of the key objectives is to estimate the variance-covariance matrix, denoted by $V_{\mathrm{LS}}$, of the asymptotic distribution of the least squares estimators. In this context, the off-diagonal elements of $\Omega$ _do_ affect $V_{\mathrm{LS}}$, implying that any consistent estimator of $V_{\mathrm{LS}}$ must account for these off-diagonal components.

One of the limitations of our theoretical analysis is that the design matrix $X$ is assumed to be left-spherical, although this is more general than i.i.d. normal with mean zero. We not only view this as a convenient assumption but also expect that our findings will hold at least approximately even if $X$ does not follow a left-spherical distribution. It is a topic for future research to formally investigate this conjecture.
2 THE FRAMEWORK UNDER GENERAL ASSUMPTIONS ON REGRESSION ERRORS

We first describe the minimum $\ell_2$ norm (ridgeless) interpolation least squares estimator in the overparameterized case ($p > n$). Our goal is to understand the generalization ability of overparameterized models trained with gradient-based optimization (e.g., gradient descent) (Gunasekar et al., 2017). Define

$$y := [y_1, y_2, \cdots, y_n]^\top \in \mathbb{R}^n, \quad \varepsilon := [\varepsilon_1, \varepsilon_2, \cdots, \varepsilon_n]^\top \in \mathbb{R}^n, \quad X^\top := [x_1, x_2, \cdots, x_n] \in \mathbb{R}^{p \times n},$$

so that $y = X\beta + \varepsilon$. The estimator we consider is

$$\hat\beta := \arg\min_{b \in \mathbb{R}^p} \{\|b\| : Xb = y\} = (X^\top X)^\dagger X^\top y = X^\dagger y,$$

where $A^\dagger$ denotes the Moore–Penrose inverse of a matrix $A$. The main object of interest in this paper is the prediction and estimation risks of $\hat\beta$ under a data scenario such that the regression errors $\varepsilon_i$ may _not_ be i.i.d. Formally, we make the following assumptions.

**Assumption 2.1.** (i) $y = X\beta + \varepsilon$, where $\varepsilon$ is independent of $X$, and $\mathbb{E}[\varepsilon] = 0$. (ii) $\Omega := \mathbb{E}[\varepsilon\varepsilon^\top]$ is finite and positive definite (but not necessarily spherical).

We emphasize that Assumption 2.1 is more general than the standard assumption in the literature on benign overfitting, which typically assumes that $\Omega \equiv \sigma^2 I$. Assumption 2.1 allows for non-identical variances across the elements of $\varepsilon$ because the diagonal elements of $\Omega$ can differ from one another. Furthermore, it allows for non-zero off-diagonal elements in $\Omega$. It is difficult to assume that the regression errors are mutually independent with time series or clustered data; thus, in these settings, it is important to allow for general $\Omega \neq \sigma^2 I$. Below we present a couple of such examples.

**Example 2.1** (Time Series - AR(1) Errors)**.** Suppose that the regression error follows an autoregressive process:

$$\varepsilon_i = \rho\varepsilon_{i-1} + \eta_i, \tag{2}$$

where $\rho \in (-1, 1)$ is an autoregressive parameter, and $\eta_i$ is independent and identically distributed with mean zero and variance $\sigma^2$ ($0 < \sigma^2 < \infty$) and is independent of $X$. Then, the $(i, j)$ element of $\Omega$ is

$$\Omega_{ij} = \frac{\sigma^2}{1 - \rho^2}\,\rho^{|i-j|}.$$

Note that $\Omega_{ij} \neq 0$ as long as $\rho \neq 0$.

**Example 2.2** (Panel and Grouped Data - Clustered Errors)**.** Suppose that regression errors are mutually independent across clusters but can be arbitrarily correlated within the same cluster. For instance, students in the same school may affect each other and also have the same teachers; thus it would be difficult to assume independence across student test scores within the same school. However, it might be reasonable that student test scores are independent across different schools. For example, assume that (i) if the regression error $\varepsilon_i$ belongs to cluster $g$, where $g = 1, \dots, G$ and $G$ is the number of clusters, $\mathbb{E}[\varepsilon_i^2] = \sigma_g^2$ for some constant $\sigma_g^2 > 0$ that can vary over $g$; (ii) if the regression errors $\varepsilon_i$ and $\varepsilon_j$ ($i \neq j$) belong to the same cluster $g$, $\mathbb{E}[\varepsilon_i\varepsilon_j] = \rho_g$ for some constant $\rho_g \neq 0$ that can differ across $g$; and (iii) if the regression errors $\varepsilon_i$ and $\varepsilon_j$ ($i \neq j$) do not belong to the same cluster, $\mathbb{E}[\varepsilon_i\varepsilon_j] = 0$. Then, $\Omega$ is block diagonal with possibly non-identical blocks.
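For concreteness, here is a small NumPy sketch (ours, not from the paper) that builds the AR(1) and clustered $\Omega$ of Examples 2.1-2.2 and computes the ridgeless estimator $\hat\beta = X^\dagger y$:

```python
import numpy as np
from scipy.linalg import block_diag

def ar1_cov(n, rho, sigma2):
    """Omega_ij = sigma^2 / (1 - rho^2) * rho^{|i-j|} (Example 2.1)."""
    idx = np.arange(n)
    return sigma2 / (1 - rho ** 2) * rho ** np.abs(idx[:, None] - idx[None, :])

def clustered_cov(sizes, sigma2s, rhos):
    """Block-diagonal Omega with within-cluster correlation (Example 2.2)."""
    blocks = []
    for m, s2, r in zip(sizes, sigma2s, rhos):
        B = np.full((m, m), r)       # within-cluster covariance rho_g
        np.fill_diagonal(B, s2)      # within-cluster variance sigma_g^2
        blocks.append(B)
    return block_diag(*blocks)

def ridgeless(X, y):
    """Minimum l2-norm interpolator: beta_hat = X^dagger y."""
    return np.linalg.pinv(X) @ y

rng = np.random.default_rng(0)
n, p = 6, 20                                        # overparameterized: p > n
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)
beta_hat = ridgeless(X, y)
assert np.allclose(X @ beta_hat, y)                 # exact interpolation
Omega_ar = ar1_cov(n, rho=0.5, sigma2=1.0)
Omega_cl = clustered_cov([3, 3], [1.0, 2.0], [0.3, 0.5])
```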
For a vector $a$ and square matrix $A$, let $\|a\|_A^2 := a^\top A a$. Conditional on $X$ and given $A$, we define

$$\mathrm{Bias}_A(\hat\beta \mid X) := \|\mathbb{E}[\hat\beta \mid X] - \beta\|_A \quad\text{and}\quad \mathrm{Var}_A(\hat\beta \mid X) := \mathrm{Tr}(\mathrm{Cov}(\hat\beta \mid X)A),$$

and we write $\mathrm{Var} = \mathrm{Var}_I$ and $\mathrm{Bias} = \mathrm{Bias}_I$ for brevity of notation. The mean squared prediction error for an unseen test observation $x_0$ with positive definite covariance matrix $\Sigma := \mathbb{E}[x_0 x_0^\top]$ (assuming that $x_0$ is independent of the training data $X$) and the mean squared estimation error of $\hat\beta$ conditional on $X$ can be written as:

$$R_P(\hat\beta \mid X) := \mathbb{E}\big[(x_0^\top \hat\beta - x_0^\top \beta)^2 \mid X\big] = [\mathrm{Bias}_\Sigma(\hat\beta \mid X)]^2 + \mathrm{Var}_\Sigma(\hat\beta \mid X),$$

$$R_E(\hat\beta \mid X) := \mathbb{E}\big[\|\hat\beta - \beta\|^2 \mid X\big] = [\mathrm{Bias}(\hat\beta \mid X)]^2 + \mathrm{Var}(\hat\beta \mid X).$$

In what follows, we obtain exact finite-sample expressions for the prediction and estimation risks:

$$R_P(\hat\beta) := \mathbb{E}_X[R_P(\hat\beta \mid X)] \quad\text{and}\quad R_E(\hat\beta) := \mathbb{E}_X[R_E(\hat\beta \mid X)].$$

We first analyze the variance terms for both risks and then study the bias terms.

3 THE VARIANCE COMPONENTS OF PREDICTION AND ESTIMATION RISKS

3.1 THE VARIANCE COMPONENT OF PREDICTION RISK

We rewrite the variance component of the prediction risk as follows:

$$\mathrm{Var}_\Sigma(\hat\beta \mid X) = \mathrm{Tr}(\mathrm{Cov}(\hat\beta \mid X)\Sigma) = \mathrm{Tr}(X^\dagger \Omega X^{\dagger\top} \Sigma) = \|S X^\dagger T\|_F^2, \tag{3}$$

where the positive definite symmetric matrices $S := \Sigma^{1/2}$ and $T := \Omega^{1/2}$ are the square root matrices of the positive definite matrices $\Sigma$ and $\Omega$, respectively. To compute the above Frobenius norm of the matrix $S X^\dagger T$, we need to compute the alignment of the right-singular vectors of $B := S X^\dagger \in \mathbb{R}^{p \times n}$ with the left-eigenvectors of $T \in \mathbb{R}^{n \times n}$. Here, $B$ is a random matrix while $T$ is fixed. Therefore, we need the distribution of the right-singular vectors of the random matrix $B$. Perhaps surprisingly, to compute the _expected_ variance $\mathbb{E}_X[\mathrm{Var}_\Sigma(\hat\beta \mid X)]$, it turns out that we do not need the distribution of the singular vectors if we make a minimal assumption (the _left-spherical symmetry_ of $X$), which is weaker than assuming that $\{x_i\}_{i=1}^n$ is i.i.d. normal with $\mathbb{E}[x_1] = 0$.

**Definition 3.1** (Left-Spherical Symmetry (Dawid, 1977; 1978; 1981; Gupta & Nagar, 1999))**.** A random matrix $Z$ or its distribution is said to be _left-spherical_ if $OZ$ and $Z$ have the same distribution ($OZ \overset{d}{=} Z$) for any fixed orthogonal matrix $O \in \mathcal{O}(n) := \{A \in \mathbb{R}^{n \times n} : AA^\top = A^\top A = I\}$.

**Assumption 3.2.** The design matrix $X$ is left-spherical.

For the isotropic error case ($\Omega = I$), we have $\mathbb{E}_X[\mathrm{Var}_\Sigma(\hat\beta \mid X)] = \mathbb{E}_X[\mathrm{Tr}((X^\top X)^\dagger \Sigma)]$ directly from (3) since $X^\dagger X^{\dagger\top} = (X^\top X)^\dagger$. Moreover, for arbitrary errors, the left-spherical symmetry of $X$ plays a critical role in _factoring out_ the same $\mathbb{E}_X[\mathrm{Tr}((X^\top X)^\dagger \Sigma)]$ and the trace of the variance-covariance matrix of the regression errors, $\mathrm{Tr}(\Omega)$, from the variance after taking the expectation over $X$.
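The identity in (3) is easy to verify numerically; the following sketch (illustrative only) checks $\mathrm{Tr}(X^\dagger \Omega X^{\dagger\top} \Sigma) = \|S X^\dagger T\|_F^2$ with random positive definite $\Sigma$ and $\Omega$:

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric square root of a positive semi-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(0)
n, p = 20, 50
X = rng.standard_normal((n, p))                       # rows are x_i^T
A = rng.standard_normal((p, p)); Sigma = A @ A.T / p + 0.1 * np.eye(p)
B = rng.standard_normal((n, n)); Omega = B @ B.T / n + 0.1 * np.eye(n)

Xp = np.linalg.pinv(X)                                # X^dagger, p x n
lhs = np.trace(Xp @ Omega @ Xp.T @ Sigma)             # Tr(X^dag Omega X^dag^T Sigma)
rhs = np.linalg.norm(psd_sqrt(Sigma) @ Xp @ psd_sqrt(Omega), 'fro') ** 2
assert np.isclose(lhs, rhs)                           # identity (3) holds
```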
**Lemma 3.3.** _For a subset_ $\mathcal{S} \subset \mathbb{R}^{m \times m}$ _satisfying_ $C^{-1} \in \mathcal{S}$ _for all_ $C \in \mathcal{S}$_, if matrix-valued random variables_ $Z$ _and_ $AZ$ _have the same distribution measure_ $\mu_Z$ _for any_ $A \in \mathcal{S}$_, then we have_

$$\mathbb{E}_Z[f(Z)] = \mathbb{E}_Z[f(AZ)] = \mathbb{E}_Z[\mathbb{E}_{A' \sim \nu}[f(A'Z)]]$$

_for any function_ $f \in L^1(\mu_Z)$ _and any probability density function_ $\nu$ _on_ $\mathcal{S}$_._

**Theorem 3.4.** _Let Assumptions 2.1 and 3.2 hold. Then, we have_

$$\mathbb{E}_X[\mathrm{Var}_\Sigma(\hat\beta \mid X)] = \frac{1}{n}\,\mathrm{Tr}(\Omega)\,\mathbb{E}_X[\mathrm{Tr}((X^\top X)^\dagger \Sigma)].$$
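Theorem 3.4 can likewise be checked by Monte Carlo with i.i.d. Gaussian rows (one example of a left-spherical design); the sketch below (ours) compares the two sides of the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 10, 30, 2000
A = rng.standard_normal((p, p)); Sigma = A @ A.T / p + 0.1 * np.eye(p)
B = rng.standard_normal((n, n)); Omega = B @ B.T / n + 0.1 * np.eye(n)

lhs, rhs = 0.0, 0.0
for _ in range(reps):
    X = rng.standard_normal((n, p))     # i.i.d. N(0, I) rows => left-spherical
    Xp = np.linalg.pinv(X)
    lhs += np.trace(Xp @ Omega @ Xp.T @ Sigma) / reps
    rhs += np.trace(np.linalg.pinv(X.T @ X) @ Sigma) / reps

# E_X[Var_Sigma] vs. (1/n) Tr(Omega) E_X[Tr((X^T X)^dag Sigma)]:
print(lhs, np.trace(Omega) / n * rhs)   # the two sides agree up to MC error
```

Note that only $\mathrm{Tr}(\Omega)$ enters the right-hand side, which is exactly the trace-only dependence the paper highlights.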
Idea Generation Category:
| 3Other
|
AsAy7CROLs
|
# ON QUANTIZING NEURAL REPRESENTATION FOR VARIABLE-RATE VIDEO CODING

**Junqi Shi, Zhujia Chen, Hanfei Li, Qi Zhao, Ming Lu**_∗_**, Tong Chen, Zhan Ma**
School of Electronic Science and Engineering, Nanjing University
_{_junqishi,zhujiachen,hanfeili,qizhao_}_@smail.nju.edu.cn, _{_minglu,chentong,mazhan_}_@nju.edu.cn

ABSTRACT

This work introduces NeuroQuant, a novel post-training quantization (PTQ) approach tailored to non-generalized Implicit Neural Representations for variable-rate Video Coding (INR-VC). Unlike existing methods that require extensive weight retraining for each target bitrate, we hypothesize that variable-rate coding can be achieved by adjusting quantization parameters (QPs) of pre-trained weights. Our study reveals that traditional quantization methods, which assume inter-layer independence, are ineffective for non-generalized INR-VC models due to significant dependencies across layers. To address this, we redefine variable-rate INR-VC as a mixed-precision quantization problem and establish a theoretical framework for sensitivity criteria aimed at simplified, fine-grained rate control. Additionally, we propose network-wise calibration and channel-wise quantization strategies to minimize quantization-induced errors, arriving at a unified formula for representation-oriented PTQ calibration. Our experimental evaluations demonstrate that NeuroQuant significantly outperforms existing techniques in varying bitwidth quantization and compression efficiency, accelerating encoding by up to eight times and enabling quantization down to INT2 with minimal reconstruction loss. This work introduces variable-rate INR-VC for the first time and lays a theoretical foundation for future research in rate-distortion optimization, advancing the field of video coding technology. The materials will be available at https://github.com/Eric-qi/NeuroQuant.

1 INTRODUCTION

Implicit Neural Representations (INRs) (Sitzmann et al., 2020; Chen et al., 2021a) have recently introduced a new approach to video coding. They focus on learning a mapping from coordinates, like frame indices, to pixel values, such as colors. This represents a significant departure from the widely used variational autoencoder (VAE)-based frameworks (Lu et al., 2019; Li et al., 2021a; Lu et al., 2024), which rely on generalized models trained on large datasets to create compact representations for various input signals. Instead, INR-based video coding (INR-VC) encodes each video as a unique neural network through end-to-end training, removing the need for extensive datasets. By using specific, non-generalized network weights for each video, INR-VC provides a tailored video coding method that has shown promising results (Chen et al., 2023; Kwan et al., 2024a).

INR-VC typically focuses on two main objectives: 1) **Representation**, where a neural network models the target video with minimized distortion, and 2) **Compression**, where the network's weights are compressed to lower the bitrate. Many prominent methods adopt a consistent precision (quantization bitwidth) for all weights before lossless entropy coding, meaning the video bitrate depends solely on the number of learnable weights. Consequently, independent weight training is needed for each target bitrate, making the process very time-consuming. For example, encoding a 1080p video with 600 frames at a specific bitrate can take up to 10 hours.
To address this inefficiency, we consider how bitrate is managed in a pretrained INR-VC model, where it is proportional to the sum of the bitwidths of the weights. Inspired by generalized codecs (Sullivan et al., 2012; Li et al., 2023) that adjust quantization parameters (QPs) (Wang & Kwong, 2008) to control bitrate, we pose the hypothesis: _Can variable-rate INR-VC be achieved by modifying the QP of post-training weights_, thus eliminating the need for repeated model training for each target rate?

_∗_ Corresponding Author

Figure 1: **Left**: Typical INR-VCs assume a consistent bitwidth and require separate weight training with varying quantities for each target rate. **Right**: The proposed NeuroQuant achieves variable rate by modifying the corresponding QPs, significantly reducing training costs. (Schematic omitted: fixed-QP training on GPUs versus variable-QP decoding of shared weights.)

In the context of weight quantization, this can be approached by: 1) allocating quantization bitwidth to match the target bitrate, and 2) calibrating QPs to preserve reconstruction fidelity. However, directly adopting a consistent quantization bitwidth cannot support fine-grained rate control; e.g., only seven options from INT2 to INT8 are available. Additionally, existing mixed-precision quantization methods (Nagel et al., 2021; Chen et al., 2021b), primarily designed for general-purpose neural networks, encounter two key problems when applied to non-generalized INR-VCs. First, mixed-precision algorithms (Dong et al., 2019; 2020; Chen et al., 2021b) typically assume inter-layer independence with tolerable approximation errors. This assumption breaks down in non-generalized INR-VCs, where layers exhibit significant dependencies. Second, popular layer-wise calibration methods[1] (Nagel et al., 2020; Li et al., 2021b) also rely on inter-layer independence and aim at generalizing the network, making them unsuitable for INR-VC. Therefore, a dedicated quantization methodology tailored for variable-rate INR-VC is necessary.

In this work, we explore, for the first time, the post-training quantization (PTQ) of weights in non-generalized INR-VCs. Building on both empirical and theoretical insights, we propose NeuroQuant, a state-of-the-art PTQ approach for INR-VC that enables variable-rate coding without complex retraining. Our contributions tackle key challenges through the following research questions:

1. **How to realize variable bitrate** (Sec. 3.1): We redefine variable-rate coding as a mixed-precision quantization problem. By theoretically demonstrating that the assumption of inter-layer independence (Dong et al., 2020; Guan et al., 2024) does not apply to non-generalized models, we highlight the necessity of incorporating weight perturbation directionality and off-diagonal Hessian information for sensitivity assessment in quantizing INR-VC. Additionally, we introduce the Hessian-vector product to simplify computations by eliminating the need for explicit Hessian calculations.

2. **How to ensure reconstruction quality** (Sec. 3.2): We enhance reconstruction quality by calibrating the QPs on the corresponding video-specific weights. Through second-order analysis, we derive a unified formula for MSE-oriented calibration across varying granularities.
By considering significant cross-layer dependencies and the diverse distribution of weights, we conduct network-wise calibration and channel-wise quantization to minimize reconstruction loss.

3. **How NeuroQuant performs** (Sec. 4): We benchmark the proposed NeuroQuant across various architectures against existing quantization techniques, achieving state-of-the-art results. For variable-rate coding, NeuroQuant outperforms competitors while reducing encoding time by 80%. Moreover, NeuroQuant is able to quantize weights down to INT2 without notable performance degradation.

4. **How to advance INR-VC** (Sec. 3.3): We revisit INR-VC through the lens of variational inference, proposing that the success of NeuroQuant stems from resolving the mismatch between representation and compression. We also suggest that rate-distortion (R-D) optimization is applicable to INR-VC and has the potential to achieve improved performance.

[1] To avoid ambiguity, we use the term _calibration_ to describe the process of optimizing QPs, though some literature refers to this as _reconstruction_. In this paper, _reconstruction_ refers to the video decoded by the INR-VC system. For simplicity, layer calibration also stands for block calibration.

2 PRELIMINARIES

**Basic Notations.** We follow notations popular in the neural network literature. Vectors are denoted by lowercase bold letters, while matrices (or tensors) are denoted by uppercase bold letters. For instance, $\mathbf{W}$ refers to the weight tensor, and $\mathbf{w}$ is its flattened version. The superscript of $\mathbf{w}^{(l)}$ indicates the layer index. For a convolutional or fully-connected layer, we mark the input and output vectors by $\mathbf{x}$ and $\mathbf{z}$. Given a feedforward neural network with $n$ layers, the forward process is expressed as

$$\mathbf{x}^{(l+1)} = h(\mathbf{z}^{(l)}) = h(\mathbf{w}^{(l)}\mathbf{x}^{(l)}), \quad 1 \leq l \leq n, \tag{1}$$

where $h(\cdot)$ denotes the activation function. For simplicity, we omit the additive bias, merging it into the activation. In the following, the notation $\|\cdot\|$ represents the Frobenius norm. Suppose $\mathbf{x}$ is sampled from the dataset $\mathcal{X}$; then the overall task loss is expressed as $\mathbb{E}_{\mathbf{x}\sim\mathcal{X}}[\mathcal{L}(\mathbf{w}, \mathbf{x})]$.

**INR-based Video Coding.** INR-VC operates on the principle that a target video can be encoded into learned weights through end-to-end training. For each frame $V_t$ in an RGB video sequence $\mathcal{V} = \{V_t\}_{t=1}^{T} \in \mathbb{R}^{T \times 3 \times H \times W}$, INR-VC assumes the existence of an implicit continuous mapping $\mathcal{F} : [0,1]^{d_{in}} \to \mathbb{R}^{d_{out}}$ in the real-world system such that $V_t = \mathcal{F} \circ t$. According to the Universal Approximation Theorem (Hanin, 2019; Park et al., 2021), the unknown $\mathcal{F}$ can be approximated by a neural network $\mathcal{D}$ of finite length $L_{\mathcal{D}}$. The estimated $\hat{V}_t$ is then expressed as:

$$\hat{V}_t = \mathcal{D} \circ \mathcal{E}(t) = U_L \circ h \circ U_{L-1} \circ \cdots \circ h \circ U_1 \circ \mathcal{E}(t), \tag{2}$$

where $\mathcal{D}$ consists of cascaded upsampling layers $U$, and $\mathcal{E}(\cdot)$ is an embedding of the timestamp $t$. Typically, index-based INR-VCs (Chen et al., 2021a) employ a fixed positional encoding function or a learnable grid (Lee et al., 2023) as $\mathcal{E}(\cdot)$, while content-based INR-VCs (Chen et al., 2023; Zhao et al., 2023) utilize a learnable encoder.
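As a toy illustration of the structure in Eq. 2 (our sketch, with arbitrary sizes; not an actual INR-VC architecture from the literature), an index-based model pairs a fixed positional encoding $\mathcal{E}(t)$ with cascaded upsampling layers:

```python
import torch
import torch.nn as nn

class PosEnc(nn.Module):
    """Fixed positional encoding E(t) of a normalized frame index t in [0, 1]."""
    def __init__(self, n_freqs=8):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(n_freqs) * torch.pi
    def forward(self, t):
        x = self.freqs * t
        return torch.cat([torch.sin(x), torch.cos(x)], dim=-1)

class TinyINRVC(nn.Module):
    """V_hat_t = D(E(t)): an embedding followed by cascaded upsampling layers."""
    def __init__(self, n_freqs=8, ch=16, h0=4, w0=4):
        super().__init__()
        self.enc, self.ch, self.h0, self.w0 = PosEnc(n_freqs), ch, h0, w0
        self.fc = nn.Linear(2 * n_freqs, ch * h0 * w0)
        self.ups = nn.Sequential(                      # U_3 . h . U_2 . h . U_1
            nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.GELU(),
            nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.GELU(),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1), nn.Sigmoid(),
        )
    def forward(self, t):
        z = self.fc(self.enc(t)).view(-1, self.ch, self.h0, self.w0)
        return self.ups(z)                             # (B, 3, 32, 32) frame

frame = TinyINRVC()(torch.tensor([[0.5]]))             # decode the frame at t = 0.5
```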
The encoding of INR-VC involves training the learnable weights $\mathbf{w}$ and subsequently compressing $\mathbf{w}$ into a bitstream using quantization and entropy coding techniques. While existing INR-VC works primarily focus on minimizing distortion during the training stage, video coding is fundamentally an R-D trade-off.

**Post-Training Quantization.** PTQ offers a push-button solution to quantize pretrained models without weight training. It contrasts with Quantization-Aware Training (QAT), which involves both weight optimization and quantization during training, leading to huge training costs. PTQ is generally a two-step process: 1) initializing QPs (e.g., steps) with the allocated bitwidth and weight distribution statistics; 2) calibrating QPs to reduce quantization-induced loss. PTQ typically employs a uniform affine transformation to map continuous $w \in \mathbb{R}$ to fixed-point integers $\hat{w}$. Traditional methods aim to minimize the quantization error $\|\hat{w} - w\|$. However, a growing number of studies (Stock et al., 2020; Nagel et al., 2020; Hubara et al., 2021) suggest that this approach can yield sub-optimal results, as error in parameter space does not equivalently reflect task loss. To analyze quantization-induced loss degradation, AdaRound (Nagel et al., 2020) interprets quantization error as a weight perturbation, i.e., $\hat{\mathbf{w}} = \mathbf{w} + \Delta\mathbf{w}$. The loss degradation can be approximated using a Taylor series:

$$\mathbb{E}[\mathcal{L}(\mathbf{w} + \Delta\mathbf{w}, \mathbf{x}) - \mathcal{L}(\mathbf{w}, \mathbf{x})] \approx \Delta\mathbf{w}^\top \cdot \mathbf{g}^{(\mathbf{w})} + \frac{1}{2}\Delta\mathbf{w}^\top \cdot \mathbf{H}^{(\mathbf{w})} \cdot \Delta\mathbf{w}, \tag{3}$$

where $\mathbf{g}^{(\mathbf{w})} = \mathbb{E}[\nabla_{\mathbf{w}}\mathcal{L}]$ and $\mathbf{H}^{(\mathbf{w})} = \mathbb{E}[\nabla^2_{\mathbf{w}}\mathcal{L}]$ represent the expected gradient and the second-order Hessian matrix, respectively. For well-converged weights, gradients tend to be close to 0. AdaRound further assumes inter-layer independence, leading to a diagonal-Hessian optimization. BRECQ (Li et al., 2021b) extends AdaRound's layer-wise calibration to block granularity based on inter-block independence. However, these methods can significantly degrade the performance of non-generalized INR-VCs, which exhibit significant dependencies among layers.

**Mixed-Precision Quantization.** Mixed-precision quantization facilitates fine-grained rate control in INR-VCs, with bit allocation being crucial due to the varying levels of redundancy across layers and their different contributions to overall performance. However, determining optimal bitwidth assignments presents a significant challenge because of the extensive search space. For a network with $N$ layers and $M$ candidate bitwidths per layer, exhaustive combinatorial search exhibits exponential time complexity of $O(M^N)$. To address this, various strategies have been explored, including search-based reinforcement learning (Wang et al., 2019; Lou et al., 2019), neural architecture search (Wu et al., 2016), and Hessian-based criteria (Dong et al., 2019; 2020). Despite these efforts, they often prove impractical for INR-VCs, as the search costs may surpass those of retraining a model. Furthermore, many existing criteria lack a robust theoretical basis for their optimality, rendering them less reliable in INR-VC systems.
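To ground Eq. 3, the sketch below (illustrative, not AdaRound itself) applies uniform affine quantization to a toy "converged" weight vector and compares the true loss change with the second-order term; for an exactly quadratic loss the two coincide:

```python
import torch

def quantize(w, n_bits=4):
    """Uniform affine quantization: map w to n_bits integers and back."""
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (w.max() - w.min()).clamp(min=1e-8) / (qmax - qmin)
    zero = torch.round(-w.min() / scale)
    q = torch.clamp(torch.round(w / scale) + zero, qmin, qmax)
    return scale * (q - zero)

# At a converged minimum the gradient vanishes, so the loss change from
# quantization is ~ 0.5 * dw^T H dw (Eq. 3 with g = 0).
w_star = torch.tensor([0.31, -0.12, 0.58])                 # "converged" weights
H = torch.tensor([[8., 5., 0.], [5., 4., 0.], [0., 0., 2.]])
loss = lambda w: 0.5 * (w - w_star) @ H @ (w - w_star)     # minimum at w_star
dw = quantize(w_star, n_bits=2) - w_star                   # perturbation dw
print(loss(w_star + dw), 0.5 * dw @ H @ dw)                # equal for quadratics
```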
Figure 2: Examples of quantizing layers in sequence. (a) Layer-wise sensitivity $\Omega$: different layers exhibit varying sensitivities (HNeRV). (b) Loss landscape of the 2nd layer: lower $\Omega$ means a flatter loss landscape. (c) Loss landscape of the 6th layer: higher $\Omega$ is otherwise, and the loss landscape shows pronounced directivity, indicating the necessity of considering the direction of $\Delta\mathbf{w}$. (Plot panels omitted.)

3 METHODOLOGY

We introduce the proposed NeuroQuant for high-performance variable-rate INR-VC as follows:

**Problem 1** (NeuroQuant)**.** _Given learned video-specific weights, the objective of NeuroQuant is to achieve different R-D trade-offs by quantizing post-training weights with variable QPs. This can be formulated as a rate-constrained optimization process:_

$$\arg\min \; \mathbb{E}[\mathcal{L}(Q(\mathbf{w}), Q(\mathbf{e})) - \mathcal{L}(\mathbf{w}, \mathbf{e})] \tag{4}$$

$$s.t. \quad \sum_{l=1}^{L} Param(\mathbf{w}^{(l)}) \cdot b_{\mathbf{w}}^{(l)} + \sum_{t=1}^{T} Param(\mathbf{e}^{(t)}) \cdot b_{\mathbf{e}} = R \pm \epsilon, \tag{5}$$

_where $R$ represents the target bitrate, $\mathbf{e}$ denotes the embedding, $Param(\cdot)$ indicates the number of parameters, and $b$ denotes the bitwidth._

We decouple this problem into three sub-problems: 1) Sec. 3.1: The rate-constrained term in Eq. 5 is treated as a mixed-precision bit assignment problem, accounting for fine-grained rate control and varying layer sensitivity; 2) Sec. 3.2: The objective in Eq. 4 is interpreted as a QP calibration problem, focusing on the calibration and quantization granularity of non-generalized INR-VC; 3) Sec. 3.3: We revisit the entire problem from the perspective of variational inference to provide a broader theoretical grounding.

3.1 HOW TO REALIZE VARIABLE BITRATE

**Sensitivity Criterion.** The core concept of mixed-precision quantization is to allocate higher precision (e.g., greater bitwidth) to sensitive layers while reducing precision in insensitive ones. Sensitivity can be intuitively understood through the flatness of the loss landscape (Li et al., 2018), as illustrated in Fig. 2. A flatter landscape, indicating lower sensitivity, corresponds to smaller loss changes under weight perturbations, whereas a sharper landscape indicates otherwise. Sensitivity essentially captures the curvature of the loss function, often described using second-order information, particularly the Hessian matrix $\mathbf{H}^{(\mathbf{w})}$, which defines how perturbations in weights affect task loss. For instance, HAWQ (Dong et al., 2019) uses the top Hessian eigenvalue as a sensitivity criterion, while HAWQ-V2 (Dong et al., 2020) demonstrates that the trace offers a better measure. However, these criteria rely on two key assumptions: 1) **Layer Independence**: Layers are mutually independent, allowing $\mathbf{H}^{(\mathbf{w})}$ to be treated as diagonal. 2) **Isotropy**: The loss function is directionally uniform under weight perturbations $\Delta\mathbf{w}$, meaning only $\mathbf{H}^{(\mathbf{w})}$ is considered, ignoring $\Delta\mathbf{w}$. While these assumptions may hold for general-purpose networks, they break down in the context of non-generalized INR-VC, where significant inter-layer dependencies (Fig. 3(c)) and anisotropic behavior (Fig. 2(c)) exist. The following toy examples demonstrate why relying solely on diagonal information from $\mathbf{H}$ is suboptimal.
**Example 1** (Inter-Layer Dependencies)**.** _Consider three functions, $F_1 = 4x^2 + y^2$, $F_2 = 4x^2 + 2y^2$, and $F_3 = 4x^2 + 2y^2 + 5xy$. Their corresponding Hessians are given as:_

$$\mathbf{H}^{(F_1)} = \begin{pmatrix} 8 & 0 \\ 0 & 2 \end{pmatrix}, \quad \mathbf{H}^{(F_2)} = \begin{pmatrix} 8 & 0 \\ 0 & 4 \end{pmatrix}, \quad \mathbf{H}^{(F_3)} = \begin{pmatrix} 8 & 5 \\ 5 & 4 \end{pmatrix}. \tag{6}$$

_All three functions share the same top eigenvalue (8), yet $F_2$ and $F_3$ are clearly more sensitive than $F_1$. Although $F_2$ and $F_3$ have the same trace (12), $F_3$ exhibits greater sensitivity due to the presence of off-diagonal terms (i.e., $5xy$)._

This demonstrates that inter-layer dependencies are overlooked when relying solely on diagonal information (e.g., eigenvalues or traces). Off-diagonal terms are essential to accurately capture sensitivity, highlighting the need to consider the full Hessian matrix. The story does not end there.

**Example 2** (Weight Perturbation Directions)**.** _Assuming a perturbation $[\Delta x, \Delta y]$ applied to $\mathbf{H}^{(F_3)}$ from above, the increase in loss is approximately proportional to_

$$F_3(x + \Delta x, y + \Delta y) - F_3(x, y) \approx [\Delta x, \Delta y]\,\mathbf{H}\,[\Delta x, \Delta y]^\top = 8\Delta x^2 + 4\Delta y^2 + 10\Delta x\Delta y. \tag{7}$$

_Now, consider two cases: 1) Lower perturbation: $[\Delta x, \Delta y] = [0.1, 0.1]$; 2) Higher perturbation: $[\Delta x, \Delta y] = [0.2, -0.2]$. The increases in task loss are $0.22$ and $0.08$, respectively. Surprisingly, the higher perturbation results in a smaller task loss._

This counterintuitive behavior is also observed in practice, where quantizing layers with higher $\mathbf{H}$ sensitivity to a lower bitwidth does not necessarily lead to significant performance degradation. We argue that allocating higher bitwidth to layers primarily reduces $\|\Delta\mathbf{w}\|$. However, this does not always guarantee a lower task loss, as $\mathcal{L}$ is anisotropic under $\Delta\mathbf{w}$ in INR-VC. The key insight is that task loss also depends on the direction of $\Delta\mathbf{w}$, not just its magnitude $\|\Delta\mathbf{w}\|$. In conclusion, the sensitivity criterion of INR-VC must account for both the full Hessian matrix $\mathbf{H}^{(\mathbf{w})}$ and the direction of weight perturbations $\Delta\mathbf{w}$. This leads to the following theorem:

**Theorem 1.** _Assuming the INR-VC weights are twice differentiable and have converged to a local minimum such that the first- and second-order optimality conditions are satisfied (i.e., the gradients are zero and the Hessian is positive semi-definite), the optimal sensitivity criterion for mixed-precision INR-VC is given by the weighted Hessian information $\Omega = \Delta\mathbf{w}^\top \cdot \mathbf{H}^{(\mathbf{w})} \cdot \Delta\mathbf{w}$._

The criterion $\Omega$, formed by a Hessian-vector product, can essentially be interpreted as a linear transformation of $\mathbf{H}^{(\mathbf{w})}$, accounting for $\mathbf{H}^{(\mathbf{w})}$ along the weight perturbation directions. Existing Hessian-based criteria can be viewed as degraded versions of the proposed $\Omega$ that neglect the off-diagonal terms. For instance, Eq. 7 would degrade to $8\Delta x^2 + 4\Delta y^2$, and thus the loss would appear independent of inter-variable dependencies and perturbation direction.

**Approximating the Hessian-Vector Product.** The Hessian matrix is challenging to explicitly compute and store due to its quadratic complexity in the number of weights.
**Approximating Hessian-Vector Product.** The Hessian matrix is challenging to explicitly compute and store, owing to its quadratic complexity in the number of weights. Instead of forming **H**^(**w**) explicitly, we focus on the sensitivity criterion $\Omega = \Delta\mathbf{w}^{T}\cdot\mathbf{H}^{(\mathbf{w})}\cdot\Delta\mathbf{w}$. Let us construct a function of the form $G = \mathbf{g}\Delta\mathbf{w}$, where $\mathbf{g}$ is the gradient of $\mathcal{L}$ with respect to $\mathbf{w}$. The gradient of $G$ can be expressed as:

$$\nabla_{\mathbf{w}} G = \frac{\partial\,\mathbf{g}\Delta\mathbf{w}}{\partial\mathbf{w}} = \frac{\partial\mathbf{g}}{\partial\mathbf{w}}\Delta\mathbf{w} + \mathbf{g}\frac{\partial\Delta\mathbf{w}}{\partial\mathbf{w}} = \frac{\partial^{2}\mathcal{L}}{\partial\mathbf{w}^{2}}\Delta\mathbf{w} + \mathbf{g}\frac{\partial\Delta\mathbf{w}}{\partial\mathbf{w}} = \mathbf{H}^{(\mathbf{w})}\Delta\mathbf{w} + \mathbf{g}\frac{\partial\Delta\mathbf{w}}{\partial\mathbf{w}}. \tag{8}$$

In a converged model, $\mathbf{g}$ approaches 0. Moreover, the quantization error can be modeled as a random vector whose components are sampled independently from a uniform distribution: $\Delta\mathbf{w} \sim \mathcal{U}(-0.5, 0.5)$ (Ballé et al., 2017). Thus, the second term in Eq. 8 can be ignored. This approximation is also akin to the straight-through estimator (STE) (Liu et al., 2022), where $\frac{\partial\hat{\mathbf{w}}}{\partial\mathbf{w}} = \frac{\partial\mathbf{w}}{\partial\mathbf{w}}$ leads to $\frac{\partial\Delta\mathbf{w}}{\partial\mathbf{w}} = 0$. Consequently, we arrive at the final formulation for Ω:

$$\Omega = \mathbb{E}\big[\Delta\mathbf{w}^{T}\,\nabla_{\mathbf{w}} G\big], \quad \text{where } G = \mathbf{g}\Delta\mathbf{w} = \mathbb{E}\big[\nabla_{\mathbf{w}}\mathcal{L}\,\Delta\mathbf{w}\big],\ G \in \mathbb{R}^{1}. \tag{9}$$

In Eq. 9, Δ**w** is treated as a perturbation around **w**, allowing us to compute **g** centered at **w**. For each potential bitwidth configuration, we only need to compute Δ**w** and the gradient of $G$ in linear time. Notably, unlike using $\mathcal{L}$ directly, such criterion-based methods do not require supervised labels or forward inference over the full dataset for each bitwidth candidate, enabling efficient mixed-precision search using techniques like integer programming, genetic algorithms (Guo et al., 2020), or iterative approaches. So far, we have realized bit allocation for a target bitrate. The next step involves calibrating QPs to minimize the reconstruction distortion.
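As an illustration of Eq. 9, the sketch below (ours; the model and loss are stand-ins, since the paper ships no code here) computes Ω for one layer with a standard autograd Hessian-vector product: build **g** with `create_graph=True`, form the scalar $G = \mathbf{g}^{T}\Delta\mathbf{w}$, and differentiate once more.

```python
import torch

# Stand-in "layer" and task loss; in practice these are the learned
# video-specific INR-VC weights and the reconstruction (R-D) loss.
torch.manual_seed(0)
w = torch.randn(64, requires_grad=True)
x = torch.randn(128, 64)
loss = ((x @ w).sigmoid() - 0.5).pow(2).mean()

# Simulated quantization error for a candidate bitwidth: uniform noise
# around w, following Balle et al. (2017).
step = 2.0 ** -4                      # quantization step for this bitwidth
dw = (torch.rand_like(w) - 0.5) * step

(g,) = torch.autograd.grad(loss, w, create_graph=True)
G = (g * dw).sum()                    # scalar G = g^T dw
(Hdw,) = torch.autograd.grad(G, w)    # nabla_w G ~= H dw (STE drops the rest)

omega = (dw * Hdw).sum().item()       # Omega = dw^T H dw
print(f"layer sensitivity Omega = {omega:.3e}")
```

Repeating this for each layer and each candidate bitwidth yields the per-layer Ω table consumed by the bit-assignment search.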
Idea Generation Category: 0 (Conceptual Integration) | id: 44cMlQSreK
# TIGHT LOWER BOUNDS UNDER ASYMMETRIC HIGH-ORDER HÖLDER SMOOTHNESS AND UNIFORM CONVEXITY

**Cedar Site Bai** Department of Computer Science, Purdue University, West Lafayette, IN, USA, [email protected]
**Brian Bullins** Department of Computer Science, Purdue University, West Lafayette, IN, USA, [email protected]

ABSTRACT

In this paper, we provide tight lower bounds for the oracle complexity of minimizing high-order Hölder smooth and uniformly convex functions. Specifically, for a function whose $p^{th}$-order derivatives are Hölder continuous with degree $\nu$ and parameter $H$, and that is uniformly convex with degree $q$ and parameter $\sigma$, we focus on two asymmetric cases: (1) $q > p+\nu$, and (2) $q < p+\nu$. Given up to $p^{th}$-order oracle access, we establish worst-case oracle complexities of $\Omega\Big(\big(\tfrac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}}\big(\tfrac{\sigma}{\epsilon}\big)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\Big)$ in the first case, with an $\ell_\infty$-ball-truncated-Gaussian smoothed hard function, and $\Omega\Big(\big(\tfrac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}} + \log\log\Big(\big(\tfrac{\sigma^{p+\nu}}{H^{q}\epsilon^{\,p+\nu-q}}\big)^{\frac{1}{p+\nu-q}}\Big)\Big)$ in the second case, for reaching an $\epsilon$-approximate solution in terms of the optimality gap. Our analysis generalizes previous lower bounds for functions under first- and second-order smoothness, as well as those for uniformly convex functions, and furthermore our results match the corresponding upper bounds in this general setting.

1 INTRODUCTION

With the advancement in computational power, high-order optimization methods ($p^{th}$-order with $p \geq 2$) are gaining more attention for their merits of faster convergence and higher precision. Consequently, uniformly convex problems (with degree $q$) have become a recent focus, particularly as the subproblems of some high-order optimization methods. The subproblem of cubic-regularized Newton ($p = 2, q = 3$) (Nesterov & Polyak, 2006) is an example, as are methods of even higher orders ($p \geq 3$, $q \geq 4$) (Zhu & Cartis, 2022). Although these problems are high-order smooth by definition, a lower-order algorithm may be employed to obtain an approximate solution. For instance, solving the subproblem of cubic-regularized (i.e., $q = 3$) Newton with gradient descent (accessing a first-order oracle, i.e., $p = 1$), or, more generally, approximately solving the subproblem of $(q-1)^{th}$-order Taylor descent (Bubeck et al., 2019) (which typically contains a regularization term to the power of $q$) with lower-order oracle access, introduces an asymmetry between the algorithm's oracle access order and the degree of uniform convexity ($q > p + 1$). Conversely, a lower-degree regularization can be paired with a higher-order smooth function. This enables methods that access higher-order oracles, which leads to the opposite asymmetry ($q < p + 1$). Examples include the objective function of logistic regression, which is known to be infinite-order smooth. Coupled with standard $\ell_2$-regularization, the problem can be analyzed as a $p^{th}$-order smooth and strongly convex ($q = 2$) problem, e.g., $p = 2$ with access to the Hessian matrix, or $p = 3$ with access to the third-order derivative tensor. In addressing specific instances of this asymmetry, previous works established some upper bounds (Gasnikov et al., 2019; Song et al., 2021) and lower bounds (Arjevani et al., 2019; Kornowski & Shamir, 2020; Doikov, 2022; Thomsen & Doikov, 2024) for the oracle complexity. Notably, Song et al.
(2021) proposed a unified acceleration framework for functions that are $p^{th}$-order Hölder smooth with degree $\nu$ and uniformly convex with degree $q$, providing upper bounds for any combination of $p$, $q$, and $\nu$. For the case where $q > p+\nu$, they show an oracle complexity of $O\Big(\big(\tfrac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}}\big(\tfrac{\sigma}{\epsilon}\big)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\Big)$, and for the case where $q < p+\nu$, the complexity is $O\Big(\big(\tfrac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}} + \log\log\Big(\big(\tfrac{\sigma^{p+\nu}}{H^{q}\epsilon^{\,p+\nu-q}}\big)^{\frac{1}{p+\nu-q}}\Big)\Big)$. To the best of our knowledge, no lower bounds exist in this general setting, particularly under combined Hölder smoothness and uniform convexity. In this paper, we provide lower bounds matching the upper bounds of (Song et al., 2021) for these asymmetric cases. Specifically, we establish $\Omega\Big(\big(\tfrac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}}\big(\tfrac{\sigma}{\epsilon}\big)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\Big)$ for $q > p+\nu$ and $\Omega\Big(\big(\tfrac{H}{\sigma}\big)^{\frac{2}{3(p+\nu)-2}} + \log\log\Big(\big(\tfrac{\sigma^{p+\nu}}{H^{q}\epsilon^{\,p+\nu-q}}\big)^{\frac{1}{p+\nu-q}}\Big)\Big)$ for $q < p+\nu$. For the $q > p+\nu$ case, we adopt the framework proposed by (Guzmán & Nemirovski, 2015), utilizing a smoothing operator to generate a high-order smooth function. We propose the use of $\ell_\infty$-ball-truncated Gaussian smoothing, which, as we later justify, is novelly designed to achieve the optimal rate and to be compatible with both the high-order smooth and the uniformly convex settings. Both the truncated Gaussian smoothing and the construction over the $\ell_\infty$ ball are crucial for improving upon the sub-optimal derivation using uniform smoothing within an $\ell_2$ ball in (Agarwal & Hazan, 2018). Our results generalize the lower bounds in (Doikov, 2022; Thomsen & Doikov, 2024) to higher-order and Hölder smooth settings. For the $q < p+\nu$ case, we adopt Nesterov's framework (Nesterov et al., 2018) and generalize the lower bounds in (Arjevani et al., 2019; Kornowski & Shamir, 2020) to Hölder smooth and uniformly convex settings.

2 RELATED WORK

**Upper Bounds.** Doikov & Nesterov (2021) showcase the upper bound for uniformly convex functions with Hölder-continuous Hessian via the cubic regularized Newton method, but the rate is not optimal. For higher-order results, Bubeck et al. (2019) and Jiang et al. (2019) established a near-optimal upper bound of $\tilde{O}\big(\epsilon^{-\frac{2}{3p+1}}\big)$ in the simpler case of $\nu = 1$ without uniform convexity. Gasnikov et al. (2019) achieve the same near-optimal rate, but also consider uniform convexity and, via a restarting mechanism, derive the corresponding rate for $q > p+1$ as well, generalizing the upper bounds established in the second-order setting (Monteiro & Svaiter, 2013) and matching the lower bounds later derived in (Kornowski & Shamir, 2020). Kovalev & Gasnikov (2022) closed the $\log\big(\tfrac{1}{\epsilon}\big)$ gap, but do not consider uniform convexity or Hölder smoothness. For minimizing uniformly convex functions, Juditsky & Nesterov (2014) and Roulet & d'Aspremont (2017) study the complexity of first-order methods. Recently, Song et al. (2021) established the most general upper bounds for arbitrary combinations of the order of Hölder smoothness and the degree of uniform convexity, which include the rates for both the $q > p+\nu$ and $q < p+\nu$ cases.
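As a quick sanity check on these exponents (our own illustration, not from the paper), the snippet below evaluates the two rates and confirms known special cases: $p = \nu = 1$, $q = 3$ in the first case recovers the first-order uniformly convex rate $\big(H/(\sigma^{2/q}\epsilon^{(q-2)/q})\big)^{1/2}$, and $p = 2$, $\nu = 1$, $q = 2$ in the second case gives the $(H/\sigma)^{2/7} + \log\log$ rate of Arjevani et al. (2019).

```python
from math import log

def rate_q_greater(H, sigma, eps, p, nu, q):
    """Rate for q > p + nu (up to constants)."""
    e1 = 2.0 / (3 * (p + nu) - 2)
    e2 = 2.0 * (q - p - nu) / (q * (3 * (p + nu) - 2))
    return (H / sigma) ** e1 * (sigma / eps) ** e2

def rate_q_smaller(H, sigma, eps, p, nu, q):
    """Rate for q < p + nu (up to constants)."""
    e1 = 2.0 / (3 * (p + nu) - 2)
    inner = (sigma ** (p + nu) / (H ** q * eps ** (p + nu - q))) ** (1.0 / (p + nu - q))
    return (H / sigma) ** e1 + log(log(inner))

# p = nu = 1, q = 3: exponent 2/(3*2-2) = 1/2, the classical first-order rate.
print(rate_q_greater(H=100.0, sigma=1.0, eps=1e-6, p=1, nu=1, q=3))
# p = 2, nu = 1, q = 2: (H/sigma)^(2/7) + loglog(sigma^3 / (H^2 eps)).
print(rate_q_smaller(H=100.0, sigma=1.0, eps=1e-6, p=2, nu=1, q=2))
```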
**Lower Bounds.** Agarwal & Hazan (2018) proved for $p^{th}$-order smooth convex functions an $\Omega\big(\epsilon^{-\frac{2}{5p+1}}\big)$ lower bound based on constructing the hard function with randomized smoothing uniformly over a unit ball. However, their rate is not optimal due to the extra dimension factor appearing in the smoothness constant caused by the uniform randomized smoothing. Garg et al. (2021) added softmax smoothing prior to randomized smoothing, achieving a near-optimal rate of $\Omega\big(\epsilon^{-\frac{2}{3p+1}}\big)$ for randomized and quantum algorithms. Separately, Arjevani et al. (2019) also established the optimal lower bound of $\Omega\big(\epsilon^{-\frac{2}{3p+1}}\big)$ with Nesterov's hard-function construction approach. Furthermore, for the asymmetric case of $q < p+1$, Arjevani et al. (2019) proved the lower bound of $\Omega\big(\big(\tfrac{H}{\sigma}\big)^{\frac{2}{7}} + \log\log\big(\sigma^{3}H^{-2}\epsilon^{-1}\big)\big)$ for the $p = 2$ and $q = 2$ case, and the result was later generalized to the $p^{th}$ order in (Kornowski & Shamir, 2020). No $q > 2$ uniformly convex settings were considered in these works. For the case of $q > p+\nu$, lower bounds for uniformly convex functions with $q \geq 3$ are limited to the first-order smoothness setting where $p = 1$ (Juditsky & Nesterov, 2014; Doikov, 2022; Thomsen & Doikov, 2024). To our knowledge, no lower bounds for uniformly convex functions were established in the high-order setting.

3 PRELIMINARIES AND SETTINGS

**Notations.** We use $[n]$ to represent the set $\{1, 2, ..., n\}$. We use $\|\cdot\|$ to denote the $\ell_2$ operator norm. We use $\nabla$ for gradients, $\partial$ for subgradients, and $\langle\cdot,\cdot\rangle$ for inner products. In describing the algorithm, we use bold lower-case letters for vectors (e.g., **x**, **y**), with subscripts denoting the vectors at different iterations (e.g., **x**$_T$). We use regular lower-case letters for scalars, with subscripts denoting a coordinate of a vector (e.g., $x_i$). Depending on the context, we use capital letters for a matrix or a random variable. We use $\phi$ for the probability density function of the standard normal or the standard multivariate normal (MVN), and $\Phi$ for the cumulative distribution function of the standard normal or MVN. We further overload the notations $\phi_{[\cdot,\cdot]}$ and $\Phi_{[\cdot,\cdot]}$ for their truncated counterparts of the normal distribution (standard normal if not specified with parameters), and $\phi_{\|\cdot\|_\infty\le\cdot}$ and $\Phi_{\|\cdot\|_\infty\le\cdot}$ for the MVN truncated within an $\ell_\infty$ ball.
3.1 DEFINITIONS

**Definition 1** (High-order Smoothness)**.** *For $p \in \mathbb{Z}^+$, a function $f: \mathbb{R}^d \to \mathbb{R}$ is $p^{th}$-order smooth, or has $L_p$-Lipschitz $p^{th}$-order derivatives, if for $L_p > 0$, $\forall\,\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $\|\nabla^p f(\mathbf{x}) - \nabla^p f(\mathbf{y})\| \le L_p\|\mathbf{x} - \mathbf{y}\|$.*

**Definition 2** (High-order Hölder Smoothness)**.** *For $p \in \mathbb{Z}^+$, a function $f: \mathbb{R}^d \to \mathbb{R}$ is $p^{th}$-order Hölder smooth, or has Hölder continuous $p^{th}$-order derivatives, if for $\nu \in (0, 1]$ and $H > 0$, $\forall\,\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $\|\nabla^p f(\mathbf{x}) - \nabla^p f(\mathbf{y})\| \le H\|\mathbf{x} - \mathbf{y}\|^{\nu}$.*

**Definition 3** (Uniform Convexity (Nesterov et al., 2018, Section 4.2.2))**.** *For integer $q \ge 2$ and $\sigma > 0$, a function $f: \mathbb{R}^d \to \mathbb{R}$ is uniformly convex with degree $q$ and modulus $\sigma$ if $\forall\,\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $f(\mathbf{y}) - f(\mathbf{x}) - \langle\nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x}\rangle \ge \frac{\sigma}{q}\|\mathbf{y} - \mathbf{x}\|^{q}$, or equivalently the function satisfies $\langle\nabla f(\mathbf{y}) - \nabla f(\mathbf{x}), \mathbf{y} - \mathbf{x}\rangle \ge \sigma\|\mathbf{y} - \mathbf{x}\|^{q}$.*

4 LOWER BOUND FOR THE $q > p+\nu$ CASE

The derivation of the lower bound is to construct a function that satisfies the uniformly convex and Hölder smooth conditions and requires at least a certain number of iterations to reach an $\epsilon$-approximate solution. The general steps follow the framework for lower complexity bounds in smooth convex optimization (Guzmán & Nemirovski, 2015), which originates from (Nemirovskii & Nesterov, 1985) and serves as the basis for results in various follow-up settings (Agarwal & Hazan, 2018; Garg et al., 2021; Doikov, 2022). The construction starts from a non-smooth function, then smooths the function with some smoothing operator (e.g., the Moreau envelope in (Guzmán & Nemirovski, 2015; Doikov, 2022), or randomized smoothing uniformly within a ball in (Agarwal & Hazan, 2018; Garg et al., 2021)). We design a truncated Gaussian smoothing operator within the $\ell_\infty$ ball, and we start the derivation by stating its formal definition and key properties.

4.1 TRUNCATED GAUSSIAN SMOOTHING

**Definition 4** (Truncated Gaussian Smoothing)**.** *For $f: \mathbb{R}^d \to \mathbb{R}$ and a parameter $\rho > 0$, define the truncated Gaussian smoothing operator $S_\rho: (\mathbb{R}^d \to \mathbb{R}) \to (\mathbb{R}^d \to \mathbb{R})$ as $S_\rho[f](\mathbf{x}) = \mathbb{E}_V[f(\mathbf{x} + \rho V)]$, where $V$ is a $d$-dimensional random variable that follows the standard multivariate normal (MVN) distribution truncated within the unit $\ell_\infty$ ball. That is, the probability density function (PDF) of $V$ is*

$$\mathbb{P}[V = \mathbf{v}] = \frac{1}{Z(d)\,(2\pi)^{d/2}}\exp\Big(-\frac{\mathbf{v}^{\top}\mathbf{v}}{2}\Big)\,\mathbb{I}\big[\|\mathbf{v}\|_\infty \le 1\big],$$

*in which $\mathbb{I}[\cdot]$ is the indicator function (equal to 1 if the condition holds and 0 otherwise) and $Z(d)$ is the normalizing factor, i.e., the cumulative probability of the standard MVN within the $d$-dimensional unit $\ell_\infty$-ball (Cartinhour, 1990).*

*We denote $f_\rho = S_\rho[f]$, and use the shorthand notation $f_\rho^{\,p} = S_\rho^{\,p}[f] = S_\rho[\cdots[S_\rho[f]]\cdots]$ for the function obtained by applying the smoothing operator $p$ times.*
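Since $\ell_\infty$ truncation factorizes across coordinates, $V$ can be sampled by drawing each coordinate from a standard normal truncated to $[-1, 1]$; this is exactly the marginal-distribution property invoked below. Here is a small Monte-Carlo sketch of $S_\rho[f]$ (our own illustration; the paper's analysis is exact rather than sampled):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def sample_V(d, n_samples):
    """Standard MVN truncated to the unit l_inf ball: i.i.d. coordinates,
    each a standard normal truncated to [-1, 1]."""
    return truncnorm.rvs(-1.0, 1.0, size=(n_samples, d), random_state=rng)

def smooth(f, x, rho, n_samples=20000):
    """Monte-Carlo estimate of S_rho[f](x) = E_V[f(x + rho * V)]."""
    V = sample_V(x.size, n_samples)
    return np.mean([f(x + rho * v) for v in V])

# Example: smoothing the nonsmooth f(x) = max_i |x_i| once.
f = lambda x: np.abs(x).max()
x = np.zeros(5)
print(f(x), smooth(f, x, rho=0.1))  # 0.0 vs. a small positive value
```

Nesting `smooth` $p$ times corresponds to $f_\rho^{\,p} = S_\rho^{\,p}[f]$.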
Now we justify the choice of truncated Gaussian smoothing for the construction of the hard function. Agarwal & Hazan (2018) choose randomized smoothing uniformly over a unit $\ell_2$-ball; by their Lemma 2.3, the smoothed function is $O(d)$-smooth (which in fact can be tightened to $O(\sqrt{d})$ by (Yousefian et al., 2012; Duchi et al., 2012, Lemma 8)), where $d$ is the dimension of the variable. Since the number of iterations $T \in O(d)$, their result $O\big(T^{-\frac{2}{5p+1}}\big)$ is sub-optimal by an extra factor of $T$ compared to the tight lower bound $O\big(T^{-\frac{2}{3p+1}}\big)$ (Arjevani et al., 2019). Therefore we search for a smoothing operator whose Lipschitz constant is *dimension-free*. Gaussian smoothing (Duchi et al., 2012, Lemma 9), softmax smoothing (Bullins, 2020, Lemma 7), and Moreau smoothing (Doikov, 2022, Lemma 1) are such operators. Yet, as the reader will later see in the proof, the converging points are generated through a sequence of functions, rather than from one hard function. For these two sequences of points to be identical, so that the lower bound indeed applies to optimizing the constructed hard function, we need the smoothing operator to be *local*, that is, to access information only within *some neighborhood* of the queried point, e.g., a unit $\ell_2$-ball in (Doikov, 2022). Unfortunately, Gaussian smoothing and softmax smoothing need access to global information. Moreau smoothing does depend only on local information and is successfully applied in proving the lower bound in the first-order setting (Doikov, 2022), but it is not suited to the high-order setting. First, one may attempt to extend Moreau smoothing with a $p^{th}$-power regularization, yet it can be shown that the resulting function is not $p^{th}$-order smooth. Next, one may try to apply Moreau smoothing $p$ times; yet, unlike randomized smoothing in (Agarwal & Hazan, 2018), the Lipschitz constant is not raised to the $p^{th}$ power with the number of applications of the smoothing operator, which leads to the same rate as in the first-order setting. Observing the proof of (Agarwal & Hazan, 2018, Corollary 2.4), this is in essence because the minimization in Moreau smoothing does not commute with differentiation, whereas the expectation in randomized smoothing does. We therefore propose a truncated multivariate Gaussian smoothing operator that is (i) local, (ii) smooth with a dimension-free constant, and (iii) $p^{th}$-order smooth with a smoothness constant raised to the $p^{th}$ power as well. Initially, we applied Gaussian smoothing truncated within a unit $\ell_2$ ball by default. We noticed later, however, that the marginal distribution of the unit-$\ell_2$-ball-truncated multivariate Gaussian is not the standard normal truncated to $[-1, 1]$, but carries an extra $d$-dependent normalizing constant, which adds a $d$-dependency to the smoothness constant of the hard function. To ensure a dimension-free smoothness constant, we instead apply multivariate Gaussian smoothing truncated within an $\ell_\infty$ ball, a.k.a. the hypercube with edge length 2, whose marginal distribution is indeed the standard normal truncated to $[-1, 1]$ (Cartinhour, 1990). The following lemma characterizes these desired properties, including convexity, continuity, approximation, and smoothness, with proof deferred to Appendix A.1.
**Lemma 1.** *Given an $L$-Lipschitz function $f$, the function $f_\rho^{\,p} = S_\rho[\cdots[S_\rho[f]]\cdots]$ satisfies:*
*(i) If $f$ is convex, $f_\rho^{\,p}$ is convex and $L$-Lipschitz with respect to the $\ell_2$ norm.*
*(ii) If $f$ is convex, $f(\mathbf{x}) \le f_\rho^{\,p}(\mathbf{x}) \le f(\mathbf{x}) + \frac{5}{4}\,p\,L\rho\sqrt{d}$.*
*(iii) $\forall\,i \in [p]$, $\forall\,\mathbf{x}, \mathbf{x}' \in \mathbb{R}^d$, $\|\nabla^i f_\rho^{\,p}(\mathbf{x}) - \nabla^i f_\rho^{\,p}(\mathbf{x}')\| \le \big(\frac{2}{\rho}\big)^{i} L\,\|\mathbf{x} - \mathbf{x}'\|$.*

4.2 THE LOWER BOUND: FUNCTION CONSTRUCTION AND TRAJECTORY GENERATION

**Theorem 1.** *For any $T$-step ($\sqrt{d} - 1 \le T \le d$) deterministic algorithm $\mathcal{A}$ with oracle access up to the $p^{th}$ order, there exists a convex function $f(\mathbf{x})$ whose $p^{th}$-order derivative is Hölder continuous of degree $\nu$ with modulus $H$, and a corresponding $F(\mathbf{x}) = f(\mathbf{x}) + \frac{\sigma}{q}\|\mathbf{x}\|^{q}$ with regularization that is uniformly convex of degree $q$ with modulus $\sigma$, such that for $q > p+\nu$, it takes*

$$T \in \Omega\left(\left(\frac{H}{\sigma}\right)^{\frac{2}{3(p+\nu)-2}}\left(\frac{\sigma}{\epsilon}\right)^{\frac{2(q-p-\nu)}{q(3(p+\nu)-2)}}\right)$$

*steps to reach an $\epsilon$-approximate solution $\mathbf{x}_T$ satisfying $F(\mathbf{x}_T) - F(\mathbf{x}^*) \le \epsilon$.*

*Proof.* We begin the proof by constructing the hard function.

4.2.1 FUNCTION CONSTRUCTION WITH TRUNCATED GAUSSIAN SMOOTHING

*1. Non-smooth Function Construction.* We first construct the function

$$g_t(\mathbf{x}) = \max_{1\le k\le t} r_k(\mathbf{x}), \quad \text{where } \forall\,k \in [T],\ r_k(\mathbf{x}) = \xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}\big\rangle - (k-1)\delta.$$

Here $\xi_k \in \{-1, 1\}$, $\mathbf{e}$ is the standard basis, $\alpha$ is a permutation of $[T]$, and $\delta > 0$ is some parameter that we will choose later. Lemma 2 characterizes the properties of $g_t$, with proof in Appendix A.2.

**Lemma 2.** *$\forall\,t \in [T]$, $g_t$ is convex and 1-Lipschitz with respect to the $\ell_\infty$-norm, and also the $\ell_2$-norm.*

*2. Truncated Gaussian Smoothing.* Next, we smooth the function $g_t(\mathbf{x})$ with truncated Gaussian smoothing as in Definition 4. Given a parameter $\rho > 0$ and $p \in \mathbb{Z}^+$, let $G_t(\mathbf{x}) = S_\rho^{\,p}[g_t](\mathbf{x})$. Based on Lemma 1, we show that $G_t(\mathbf{x})$ satisfies the following lemma, with proof in Appendix A.2.

**Lemma 3.** *$\forall\,t \in [T]$, $\forall\,\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$,*
*(i) $G_t(\mathbf{x})$ is convex and 1-Lipschitz, i.e., $G_t(\mathbf{x}) - G_t(\mathbf{y}) \le \|\mathbf{x} - \mathbf{y}\|$.*
*(ii) $g_t(\mathbf{x}) \le G_t(\mathbf{x}) \le g_t(\mathbf{x}) + \frac{5}{4}\,p\rho\sqrt{d}$.*
*(iii) For some fixed $p \in \mathbb{Z}^+$, $\forall\,i \in [p]$, $\|\nabla^i G_t(\mathbf{x}) - \nabla^i G_t(\mathbf{y})\| \le \big(\frac{2}{\rho}\big)^{i}\|\mathbf{x} - \mathbf{y}\|$.*
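For intuition, the hard function's building blocks are easy to code. The sketch below (ours, for illustration only; it is not part of the proof) implements $g_t$ and a Monte-Carlo version of $G_t = S_\rho^{\,p}[g_t]$ reusing the truncated sampler from the previous snippet:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
d, T, delta = 8, 5, 0.1
alpha = rng.permutation(T)              # permutation of [T] (0-indexed here)
xi = rng.choice([-1.0, 1.0], size=T)    # signs xi_k

def g(t, x):
    """g_t(x) = max_{1<=k<=t} xi_k <e_{alpha(k)}, x> - (k-1) delta
    (with 0-indexed k, the shift is k * delta)."""
    return max(xi[k] * x[alpha[k]] - k * delta for k in range(t))

def G(t, x, rho, p=1, n=4000):
    """Monte-Carlo sketch of G_t = S_rho^p[g_t] (nested expectations)."""
    if p == 0:
        return g(t, x)
    V = truncnorm.rvs(-1, 1, size=(n, d), random_state=rng)
    return np.mean([G(t, x + rho * v, rho, p - 1, n=max(n // 8, 32)) for v in V])

x = rng.standard_normal(d)
print(g(T, x), G(T, x, rho=0.05))  # smoothing changes the value only slightly
```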
*3. Adding Uniform Convexity.* Now that the constructed function $G_t(\mathbf{x})$ is smooth to all orders, we add to it the uniformly convex regularization. We define

$$f_t(\mathbf{x}) = \beta G_t(\mathbf{x}), \quad f(\mathbf{x}) = f_T(\mathbf{x}), \quad F_t(\mathbf{x}) = f_t(\mathbf{x}) + d_q(\mathbf{x}) \ \text{ for } d_q(\mathbf{x}) = \frac{\sigma}{q}\|\mathbf{x}\|^{q},\ \mathbf{x} \in \mathcal{Q}, \quad F(\mathbf{x}) = F_T(\mathbf{x}),$$

where $\beta > 0$ is a parameter that we will choose later, $\mathcal{Q} = \{\mathbf{x} : \|\mathbf{x}\|_2 \le D\}$¹ for $D \le \big(\frac{H}{2^{1-\nu}C}\big)^{\frac{1}{q-p-\nu}}$ and $C = \sigma(q-1)\times\cdots\times(q-p)$.

¹ We would note that for the $q > p+\nu$ case, $F$ is guaranteed to be $p^{th}$-order smooth only on the bounded domain as constructed, since the regularization term $d_q(\mathbf{x})$ may not be $p^{th}$-order smooth on $\mathbb{R}^d$. The construction is inspired by that in (Juditsky & Nesterov, 2014). This is not explicitly discussed in (Song et al., 2021; Doikov, 2022; Thomsen & Doikov, 2024).

**Lemma 4.** *For $F(\mathbf{x}) = f_T(\mathbf{x}) + d_q(\mathbf{x})$ where $d_q(\mathbf{x}) = \frac{\sigma}{q}\|\mathbf{x}\|^{q}$ and $\mathbf{x} \in \mathcal{Q}$,*
*(i) $F$ is uniformly convex with degree $q$ and modulus $\sigma > 0$.*
*(ii) $F(\mathbf{x})$ is $p^{th}$-order Hölder smooth with parameter $H = \frac{2^{p+1}}{\rho^{\,p+\nu-1}}\beta$, $\forall\,p \in \mathbb{Z}^+$.*

Therefore, by Lemma 4, the constructed function satisfies the desired uniform convexity and high-order smoothness conditions. Next, we characterize with Lemma 5 the upper and lower bounds of the constructed function, which will be used later in the proof.

**Lemma 5.** *For $R(\mathbf{x}) = \beta\max_{k\in[T]}\xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}\big\rangle + \frac{\sigma}{q}\|\mathbf{x}\|^{q}$, we have*

$$R(\mathbf{x}) - \beta(T-1)\delta \ \le\ F(\mathbf{x}) \ \le\ R(\mathbf{x}) + \frac{5}{4}\,p\beta\rho\sqrt{d}.$$

4.2.2 CONVERGENCE TRAJECTORY GENERATION

*4. Trajectory Generation Procedure.* The trajectory is generated following a standard $T$-step iterative procedure, the same as outlined in (Guzmán & Nemirovski, 2015; Doikov, 2022):

- For $t = 1$, $\mathbf{x}_1$ is the first point of the trajectory and is chosen by the initialization of some algorithm $\mathcal{A}$, independently of $F$. Subsequently, choose $\alpha(1) \in \arg\max_{k\in[T]}\big|\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_1\big\rangle\big|$ and $\xi_1 = \mathrm{sign}\big(\big\langle\mathbf{e}_{\alpha(1)}, \mathbf{x}_1\big\rangle\big)$, after which a fixed $F_1(\mathbf{x})$ is generated.
- For $2 \le t \le T$, at the beginning of each such iteration, we have access to $\mathbf{x}_1, \cdots, \mathbf{x}_{t-1}$, the function $F_{t-1}$, and its derivative information, which we denote as $\mathcal{I}_{t-1}(\mathbf{x}) = \{F_{t-1}, \nabla F_{t-1}, \cdots, \nabla^p F_{t-1}\}$. The algorithm $\mathcal{A}$ generates the next point with this information: $\mathbf{x}_t = \mathcal{A}(\mathcal{I}_{t-1}(\mathbf{x}_1), \cdots, \mathcal{I}_{t-1}(\mathbf{x}_{t-1}))$. Then choose $\alpha(t) \in \arg\max_{k\in[T]\setminus\{\alpha(i):\,i<t\}}\big|\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_t\big\rangle\big|$ and $\xi_t = \mathrm{sign}\big(\big\langle\mathbf{e}_{\alpha(t)}, \mathbf{x}_t\big\rangle\big)$, after which a fixed $F_t(\mathbf{x})$ is generated for the next iteration.

*5. Indistinguishability of $F_t$ and $F$ for Trajectory Generation.* It is important to note that the trajectory $\mathbf{x}_1, \cdots, \mathbf{x}_T$ is generated based on *a sequence of functions* $F_1, \cdots, F_T$, whereas our object of analysis should be just *one hard function* $F = F_T$.
Here we show:

**Lemma 6.** *The trajectory $\mathbf{x}_1, \cdots, \mathbf{x}_T$ generated by applying an algorithm $\mathcal{A}$ iteratively on the sequence of functions $F_1, \cdots, F_T$, with up to $p^{th}$-order oracle access, is the same as the trajectory generated by applying $\mathcal{A}$ directly on $F$, when the oracle access pertains only to local information within an $\ell_\infty$-ball of radius $\delta/2$.*

*Proof.* The idea is to show that $\forall\,2 \le t \le T$, the function $g_t$ coincides with $g_T$ (so that $F_t$ coincides with $F_T$ in terms of generating $\mathbf{x}_{t+1}$, i.e., $\mathcal{I}_t = \mathcal{I}_T$) under some mild conditions. A similar proof can be found in (Guzmán & Nemirovski, 2015; Doikov, 2022, Section 3). By construction, $\forall\,t \in [T]$ and $s < t$,

$$g_t(\mathbf{x}) = \max_{1\le k\le t} r_k(\mathbf{x}) = \max\Big\{\max_{1\le k\le s} r_k(\mathbf{x}),\ \max_{s<k\le t} r_k(\mathbf{x})\Big\} = \max\Big\{g_s(\mathbf{x}),\ \max_{s<k\le t} r_k(\mathbf{x})\Big\}.$$

Furthermore, $\alpha(s) \in \arg\max_{k\in[T]\setminus\{\alpha(i):\,i<s\}}\big|\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle\big|$ and $\xi_s = \mathrm{sign}\big(\big\langle\mathbf{e}_{\alpha(s)}, \mathbf{x}_s\big\rangle\big)$; therefore

$$g_s(\mathbf{x}_s) = \max_{1\le k\le s}\xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle - (k-1)\delta \ \ge\ \max_{1\le k\le s}\xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle - (s-1)\delta$$
$$\ge\ \big|\big\langle\mathbf{e}_{\alpha(s)}, \mathbf{x}_s\big\rangle\big| - (s-1)\delta \ \ge\ \max_{s<k\le t}\xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle - (s-1)\delta$$
$$\ge\ \max_{s<k\le t}\xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle - (k-1)\delta + \delta \qquad (k, s \in \mathbb{Z}^+,\ k > s \implies k \ge s+1).$$

If we limit the information access to an $\ell_\infty$-ball of radius $\delta/2$ when searching for the next point $\mathbf{x}_{s+1}$ from $\mathbf{x}_s$, we establish a local region: $\forall\,\mathbf{x}$ with $\|\mathbf{x} - \mathbf{x}_s\|_\infty \le \frac{\delta}{2}$. Further, by Lemma 2, $g_s$ (and also $\xi_k\langle\mathbf{e}_{\alpha(k)}, \cdot\rangle$) is 1-Lipschitz with respect to the $\ell_\infty$ norm, so for all $k$ with $s < k \le t$,

$$g_s(\mathbf{x}_s) \ \ge\ \xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle - (k-1)\delta + 2\|\mathbf{x} - \mathbf{x}_s\|_\infty$$
$$\ge\ \xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle - (k-1)\delta + \big[g_s(\mathbf{x}_s) - g_s(\mathbf{x})\big] + \Big(\xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}\big\rangle - \xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}_s\big\rangle\Big),$$

which implies that $g_s(\mathbf{x}) \ge \max_{s<k\le t}\xi_k\big\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}\big\rangle - (k-1)\delta = \max_{s<k\le t} r_k(\mathbf{x})$. This concludes that $\forall\,\mathbf{x}$ with $\|\mathbf{x} - \mathbf{x}_s\|_\infty \le \frac{\delta}{2}$, $g_t(\mathbf{x}) = \max\{g_s(\mathbf{x}), \max_{s<k\le t} r_k(\mathbf{x})\} = g_s(\mathbf{x})$, which further implies $F_t(\mathbf{x}) = F_s(\mathbf{x})$. Letting $t = T$, we have $\forall\,t \in [T]$, $F_t(\mathbf{x}) = F_T(\mathbf{x})$ for $\|\mathbf{x} - \mathbf{x}_t\|_\infty \le \frac{\delta}{2}$.

4.2.3 LOWER BOUND DERIVATION

*6. Bounding the Optimality Gap.* The following lemma bounds the optimality gap; its proof is based on Lemma 5 and is presented in Appendix A.2.

**Lemma 7.** $F(\mathbf{x}_T) - F(\mathbf{x}^*) \ \ge\ -\beta(T-1)\delta - \frac{5}{4}\,p\beta\rho\sqrt{d} + \frac{q-1}{q}\left(\frac{\beta^{q}}{\sigma T^{q/2}}\right)^{\frac{1}{q-1}}.$
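For the reader's convenience, here is the one-line calculation behind the last term of Lemma 7 (our own filled-in step; the full proof is in Appendix A.2). Taking $\mathbf{x} = -\frac{r}{\sqrt{T}}\sum_{k\in[T]}\xi_k\mathbf{e}_{\alpha(k)}$ gives $\max_k \xi_k\langle\mathbf{e}_{\alpha(k)}, \mathbf{x}\rangle = -r/\sqrt{T}$ and $\|\mathbf{x}\| = r$, so the minimum of $R$ reduces to a one-dimensional problem:

$$\min_{r\ge 0}\Big\{-\frac{\beta}{\sqrt{T}}\,r + \frac{\sigma}{q}\,r^{q}\Big\} = -\frac{q-1}{q}\left(\frac{\beta^{q}}{\sigma\,T^{q/2}}\right)^{\frac{1}{q-1}}, \qquad r^{*} = \Big(\frac{\beta}{\sigma\sqrt{T}}\Big)^{\frac{1}{q-1}},$$

which, combined with the two-sided bounds of Lemma 5, yields the three terms of Lemma 7.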
Idea Generation Category: 3 (Other) | id: fMTPkDEhLQ
# TOWARDS ROBUST ALIGNMENT OF LANGUAGE MODELS: DISTRIBUTIONALLY ROBUSTIFYING DIRECT PREFERENCE OPTIMIZATION

**Junkang Wu**¹* **Yuexiang Xie**² **Zhengyi Yang**¹ **Jiancan Wu**¹† **Jiawei Chen**³ **Jinyang Gao**² **Bolin Ding**² **Xiang Wang**¹ **Xiangnan He**⁴†
¹ University of Science and Technology of China ² Alibaba Group ³ Zhejiang University ⁴ MoE Key Lab of BIPC, University of Science and Technology of China
{jkwu0909, wujcan, xiangnanhe}@gmail.com
\* Work done at Alibaba Group. † Jiancan Wu and Xiangnan He are the corresponding authors.

ABSTRACT

This study addresses the challenge of noise in training datasets for Direct Preference Optimization (DPO), a method for aligning Large Language Models (LLMs) with human preferences. We categorize noise into pointwise noise, which includes low-quality data points, and pairwise noise, which encompasses erroneous data-pair associations that affect preference rankings. Utilizing Distributionally Robust Optimization (DRO), we enhance DPO's resilience to these types of noise. Our theoretical insights reveal that DPO inherently embeds DRO principles, conferring robustness to pointwise noise, with the regularization coefficient β playing a critical role in its noise resistance. Extending this framework, we introduce Distributionally Robustifying DPO (Dr. DPO), which integrates pairwise robustness by optimizing against worst-case pairwise scenarios. The novel hyperparameter β′ in Dr. DPO allows for fine-tuned control over data-pair reliability, providing a strategic balance between exploration and exploitation in noisy training environments. Empirical evaluations demonstrate that Dr. DPO substantially improves the quality of generated text and response accuracy in preference datasets, showcasing enhanced performance in both noisy and noise-free settings. The code is available at https://github.com/junkangwu/Dr_DPO.

1 INTRODUCTION

Aligning Large Language Models (LLMs) (OpenAI, 2023; Touvron et al., 2023; Anil et al., 2023; Bubeck et al., 2023) with human preferences is critical for their implementation in real-world scenarios. Central to the alignment is the fine-tuning of LLMs using human feedback (Ouyang et al., 2022), ensuring they adhere to human values and mitigate safety risks. Among the alignment methods, Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022) is becoming a widely adopted technology. It initially learns a reward model on pairwise preference data, and then optimizes LLMs using the Proximal Policy Optimization (PPO) (Schulman et al., 2017) method. However, its inherent reinforcement-learning nature poses significant challenges to computational efficiency and training stability (Rafailov et al., 2023a; Zhao et al., 2023). Addressing these, Direct Preference Optimization (DPO) (Rafailov et al., 2023a) eschews explicit reward-model learning, using human preferences to train the LLMs directly. It achieves the same objectives (Azar et al., 2023) as RLHF by learning an optimal proxy for each pointwise instance and simultaneously ranking preferences in a pairwise manner, offering greater simplicity and training stability (Ivison et al., 2023). While offering an effective solution by directly learning a policy from collected data, DPO inevitably heightens the dependency on data quality (Liu et al., 2023). However, training data is frequently marred by noise, potentially posing a significant challenge to DPO.
Figure 1: **Left**: An example illustrating pointwise and pairwise noise. **Right**: Comparison of gradients between DPO and Dr. DPO under varying levels of pairwise noise. [Plots omitted; only the caption is recoverable.]

Here we delineate two primary noise categories based on their origins:
- *Pointwise noise* (Gunasekar et al., 2023) refers to low-quality data points containing irrelevant or incoherent information. Taking the movie reviews in Figure 1 (Left) as an example, it might manifest as reviews filled with meaningless chatter, thus rendering them uninformative.
- *Pairwise noise* (Sharma et al., 2023; Cui et al., 2023), on the other hand, arises from erroneous associations between data pairs, leading to misjudged preference rankings. Revisiting the movie reviews in Figure 1 (Left), it is evident in misranked reviews where an inferior review ($y_l$) is incorrectly rated higher than a superior one ($y_w$).

The presence of noisy preferences naturally raises a critical question: *How robust is DPO against pointwise and pairwise noise?* To answer this, we examine DPO through the lens of Distributionally Robust Optimization (DRO) (Namkoong & Duchi, 2017; Duchi & Namkoong, 2018). At the core of DRO is training a model across a distributional family, which is determined by an empirical distribution within a robust radius η. As a result, DRO endows the model with enhanced robustness w.r.t. distributional uncertainty, usually caused by data noise. By incorporating DRO principles, we can assess the resilience of DPO to pointwise and pairwise noise. Specifically, our DRO lens on DPO offers the following insightful findings:
- **DPO is equivalent to applying DRO on the reward function.** The principal contribution of DPO is deriving the optimal policy for PPO in a closed-form expression. This achievement facilitates the implicit determination of a worst-case distribution for optimization, guided by the Kullback-Leibler (KL) divergence criterion. Such an approach endows DPO with intrinsic pointwise robustness, enabling it to explore a better policy model rather than relying solely on the reference model.
- **DPO's β and DRO's η share an inverse relationship, reflecting the noise level in the reference model.** Through DRO theory, we establish that higher noise in the reference model necessitates a larger search radius, corresponding to a larger η (or equivalently, a smaller β). This inverse relationship provides a clear measure of the noise level in the reference model.

These findings elucidate the strengths of DPO in ensuring pointwise robustness.
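Before turning to pairwise robustness, the sketch below makes pairwise noise concrete in code (our own illustration; function and variable names are ours): it corrupts a preference dataset by flipping a fraction of $(y_w, y_l)$ pairs, which is how such noise is typically simulated.

```python
import random

def inject_pairwise_noise(pairs, flip_ratio, seed=0):
    """pairs: list of (prompt, chosen, rejected) preference triples.
    Flips (chosen, rejected) for a random flip_ratio fraction of pairs,
    simulating misjudged preference rankings."""
    rng = random.Random(seed)
    noisy = []
    for x, y_w, y_l in pairs:
        if rng.random() < flip_ratio:
            y_w, y_l = y_l, y_w   # pairwise noise: ranking is reversed
        noisy.append((x, y_w, y_l))
    return noisy

data = [("review of film A", "insightful critique", "meaningless chatter")] * 1000
noisy_data = inject_pairwise_noise(data, flip_ratio=0.2)
flipped = sum(d[1] != n[1] for d, n in zip(data, noisy_data))
print(f"{flipped / len(data):.1%} of pairs flipped")  # ~20%
```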
Recent effort (Chowdhury et al., 2024) has started addressing pairwise noise in DPO frameworks; however, this method relies on explicit noise estimation, a process that is computationally intensive and may not fully capture noise complexities. Building on these insights, we introduce the *Distributionally Robustifying DPO* (Dr. DPO)¹ framework, aiming to incorporate pairwise robustness within the DPO paradigm. The core idea is to optimize against the worst-case pairwise scenarios, enabling the model to implicitly adjust the importance of data pairs in the gradient space and eliminating the need for explicit noise estimation. Towards this adjustment, Dr. DPO introduces a simple hyperparameter β′ ∈ (0, +∞) to modulate the loss function, balancing between exploration and exploitation of pairwise preferences. β′ serves as a pivotal "knob", allowing navigation from a conservative strategy that diminishes the influence of potentially noisy pairs (e.g., β′ = 0.5) to a risk-tolerant stance that leverages such pairs (e.g., β′ = 2). Consequently, Dr. DPO fosters a more resilient optimization process that effectively mitigates the influence of both pointwise and pairwise noise.

¹ The abbreviation "Dr. DPO" not only encapsulates "Distributionally Robustifying DPO" but is playfully intended to echo the abbreviation for "Doctor", adding a quirky element to the naming.

In a nutshell, our contribution is the development of Dr. DPO, which robustifies DPO with just a single additional line of code. Empirical evaluations reveal that Dr. DPO significantly enhances performance across diverse settings, such as controlling the sentiment of generated text and improving response quality in single-turn dialogues, under both noisy and noise-free conditions.

2 PRELIMINARIES

**Bradley-Terry Model.** Given a context $x$ within a finite space of contexts $\mathcal{X}$, we employ the policy $\pi(y|x)$ to independently generate a pair of actions $(y_1, y_2)$. These actions are presented to human raters, who then indicate their preference, with the preferred action labeled $y_w$ and the less preferred $y_l$, satisfying $y_w \succeq y_l$. Although we cannot directly observe the latent reward model $r^*(x, y)$ that underlies these preferences, the Bradley-Terry (BT) model (Bradley & Terry, 1952) offers a well-established approach for modeling pairwise comparisons, given as:

$$p^{*}(y_1 \succeq y_2 \mid x) = \frac{\exp(r^{*}(x, y_1))}{\exp(r^{*}(x, y_1)) + \exp(r^{*}(x, y_2))}. \tag{1}$$

Given the dataset $\mathcal{O} = \{(x^{(i)}, y_w^{(i)}, y_l^{(i)})\}_{i=1}^{N}$ sampled from $p^*$, we can parametrize a reward model $r_\phi(x, y)$ and estimate its parameters by optimizing the following logistic regression loss:

$$\mathcal{L}_R(r_\phi, \mathcal{O}) = -\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{O}}\big[\log\sigma\big(r_\phi(x, y_w) - r_\phi(x, y_l)\big)\big], \tag{2}$$

where $\sigma(\cdot)$ is the sigmoid function. As the size of the dataset $\mathcal{O}$ grows, the empirical distribution of $\mathcal{O}$ converges to the underlying distribution $p^*$, and the reward model $r_\phi$ converges to the true reward model $r^*$.

**Reinforcement Learning from Human Feedback (RLHF)** (Ouyang et al., 2022). The standard RLHF paradigm is composed of three phases: i) supervised fine-tuning, ii) reward modeling, and iii) RL fine-tuning. Using the reward model $r_\phi$ learned in the reward modeling phase, we can then fine-tune the policy $\pi_\theta$ by optimizing the following objective:

$$\max_{\pi_\theta}\ \mathbb{E}_{x\sim\mathcal{O},\,y\sim\pi_\theta(y|x)}\big[r_\phi(x, y)\big] - \beta\,\mathbb{D}_{\mathrm{KL}}\big[\pi_\theta(y|x)\,\|\,\pi_{\mathrm{ref}}(y|x)\big]. \tag{3}$$

In practice, both the language model policy $\pi_\theta$ and the reference policy $\pi_{\mathrm{ref}}$ are typically initialized to the same supervised fine-tuning (SFT) model $\pi_{\mathrm{SFT}}$. Here, β is a parameter that controls the strength of the regularization term, and $\mathbb{D}_{\mathrm{KL}}$ represents the KL divergence penalty used to keep the policy $\pi_\theta$ close to $\pi_{\mathrm{ref}}$.
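A minimal sketch of the reward-modeling loss in Eq. 2 (ours; the scores here are stand-ins for a real reward model's outputs):

```python
import torch
import torch.nn.functional as F

def bt_reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Eq. 2: negative log-likelihood of the Bradley-Terry model,
    given reward-model scores for chosen (y_w) and rejected (y_l) responses."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scores for a batch of 4 preference pairs.
r_w = torch.tensor([2.0, 1.5, 0.3, -0.2])
r_l = torch.tensor([1.0, 1.8, 0.1, -1.0])
print(bt_reward_loss(r_w, r_l))  # small when r_w > r_l on most pairs
```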
**Direct Preference Optimization (DPO)** (Rafailov et al., 2023a). DPO offers an alternative approach to the RL paradigm described above. It establishes a functional mapping between the reward model and the optimal policy under a KL divergence constraint, with the following formulation:

$$r(x, y) = \beta\log\frac{\pi_\theta(y|x)}{\pi_{\mathrm{ref}}(y|x)} + \beta\log Z(x), \tag{4}$$

where $Z(x) = \sum_y \pi_{\mathrm{ref}}(y|x)\exp(r(x, y)/\beta)$ is the partition function. By incorporating this reward into the BT model, the DPO objective enables the comparison of response pairs, facilitating the discrimination between preferred and dispreferred actions:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{O}}\left[\log\sigma\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]. \tag{5}$$

**Distributionally Robust Optimization (DRO)** (Namkoong & Duchi, 2017; Duchi & Namkoong, 2018). DRO provides a strategic framework to effectively mitigate the uncertainty inherent in training data. It achieves this by optimizing the worst-case expected loss across a set of potential distributions $Q$. These distributions are confined within a robustness radius η anchored around the empirical training distribution $Q_0$, and are bounded by a prescribed divergence metric $\mathbb{D}_\phi$. The formal formulation of DRO can be succinctly expressed as:

$$\mathcal{L}_{\mathrm{DRO}} = \max_{Q}\ \mathbb{E}_{Q}\big[\mathcal{L}(x; \theta)\big] \quad \text{s.t.}\ \ \mathbb{D}_\phi(Q, Q_0) \le \eta, \tag{6}$$

where $\mathcal{L}(x; \theta)$ represents the training loss for an input $x$. Intuitively, models employing DRO exhibit increased robustness due to the presence of $Q$ that acts as an "adversary", optimizing the model under a distribution set with adversarial perturbations instead of a single training distribution.
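A compact sketch of the DPO objective in Eq. 5 (ours; log-probabilities are assumed to be already summed over response tokens):

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Eq. 5: -log sigmoid(beta * log-ratio margin), averaged over the batch.
    logp_*: policy log pi_theta(y|x); ref_logp_*: reference log pi_ref(y|x)."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Toy batch of summed sequence log-probs.
logp_w = torch.tensor([-12.0, -30.5]); ref_logp_w = torch.tensor([-13.0, -30.0])
logp_l = torch.tensor([-15.0, -29.0]); ref_logp_l = torch.tensor([-14.0, -29.5])
print(dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l))
```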
3 ANALYZING DPO'S POINTWISE ROBUSTNESS

In this section, we explore DPO's robustness to pointwise noise, analyzing its response to noise to identify key strengths and vulnerabilities. We assess how noise degrades performance and leverage insights from DRO to understand DPO's underlying resilience mechanisms.

3.1 POINTWISE NOISE IMPAIRS DPO PERFORMANCE

We begin by investigating the impact of pointwise noise on DPO through experiments on the IMDB sentiment dataset (Maas et al., 2011). Following the setup in (Havrilla et al., 2023), we fine-tune the GPT-2-large (Radford et al., 2019) model and use SiEBERT (Hartmann et al., 2023), a specialized variant of RoBERTa-large (Liu et al., 2019), for reward calculation. Pointwise noise is introduced exclusively during the SFT stage by incorporating responses generated by the unrefined GPT-2-large model, resulting in lower-quality data for this stage, while the data used in the DPO stage remains unchanged. To assess DPO's robustness to this pointwise noise, we evaluate each algorithm by examining the trade-off between the achieved reward and the KL divergence from the reference policy.

Figure 2: Impact of pointwise noise on the expected reward frontier and KL divergence in DPO (β = 0.1). [Plot omitted.]

Figure 2 reveals that beyond a KL($\pi_\theta\|\pi_{\mathrm{ref}}$) threshold of 10.0, both models converge in terms of reward. Notably, the DPO model trained with high-quality data (blue points) significantly outperforms its low-quality counterpart (orange points), highlighting the critical impact of data quality on model performance.

3.2 POINTWISE ROBUSTNESS IN REWARD MODELING

In Section 3.1, we explored how pointwise noise negatively affects individual instance rewards. To address this issue and enhance the robustness of LLMs, we propose integrating DRO during the reward modeling stage. We define the Reward Modeling DRO (RM-DRO) objective, which optimizes the expected reward under the worst-case noise distribution within a specified ambiguity set:

$$\max_{\pi_\theta}\ \mathbb{E}_{x\sim\mathcal{O},\,y\sim\pi_\theta(y|x)}\big[r_\phi(x, y)\big] \quad \text{s.t.}\ \ \mathbb{D}_\phi\big(\pi_\theta(y|x),\,\pi_{\mathrm{ref}}(y|x)\big) \le \eta. \tag{7}$$

The direct consequence of pointwise noise is the resulting unreliability of the reference model (SFT). By adopting RM-DRO, we aim to maximize a surrogate objective that accounts for all potential distributions within a robustness radius η around the reference distribution $\pi_{\mathrm{ref}}(y|x)$, measured by the distance metric $\mathbb{D}_\phi$. With this formulation, we provide a fresh perspective on DPO.

**A. DPO is Implicitly a Pointwise DRO.**

**Theorem 3.1** (Optimal Reward Function under KL Divergence)**.** *Let the Kullback-Leibler (KL) divergence between policy $\pi_\theta$ and reference policy $\pi_{\mathrm{ref}}$ be defined as $\mathbb{D}_{\mathrm{KL}}(\pi_\theta\,|\,\pi_{\mathrm{ref}}) = \int \pi_\theta(x)\log\big(\frac{\pi_\theta(x)}{\pi_{\mathrm{ref}}(x)}\big)\,dx$. Optimizing the RM-DRO objective as defined in Equation (7) yields an optimal reward $r_{\mathrm{KL}}(x, y)$ given by:*

$$r_{\mathrm{KL}}(x, y) = \beta^{*}(\eta)\log\frac{\pi_\theta(y|x)}{\pi_{\mathrm{ref}}(y|x)} - \alpha. \tag{8}$$

*Here, α and β are Lagrange multipliers, and $\beta^{*}(\eta)$ denotes the optimal value of β that minimizes Equation (7), acting as the regularization coefficient in DPO. By deriving the optimal value of α, given by*

$$\alpha^{*} = -\beta\log\mathbb{E}_{x\sim\mathcal{O},\,y\sim\pi_{\mathrm{ref}}}\Big[\exp\Big(\frac{r_\theta(y|x)}{\beta}\Big)\Big], \tag{9}$$

*Equation (8) can be re-expressed to match the ultimate form of the reward function in Equation (4).*

Please refer to Appendix B.1 for detailed proofs and Appendix B.2 for the formal proof. For a broader discussion of optimal reward functions under general φ-divergences, see Appendix C.1. Consistent with the reward-function formulation in Rafailov et al. (2023a), Theorem 3.1 not only reaffirms established results but also introduces several novel insights, as outlined below.

Figure 3: (a) Comparative analysis of the effect of pointwise noise on the expected reward frontier for different β values on the IMDB dataset. (b) Comparative analysis of the effect of pointwise noise on the win rate for different β values on the HH dataset. The star (⋆) indicates the optimal β selection for the corresponding pointwise noise ratio. [Plots omitted; legend noise ratios: 20%, 30%, 40%.]
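Theorem 3.1 is the standard KL-DRO duality: the worst-case (adversarial) adjustment is an exponential tilting of $\pi_{\mathrm{ref}}$ by the reward, with $\alpha^*$ the log-partition correction, cf. $Z(x)$ in Eq. 4. A tiny discrete illustration (our own sketch):

```python
import numpy as np

# Discrete toy: 5 candidate responses with rewards r under reference probs.
pi_ref = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
r = np.array([1.0, 0.2, -0.5, 2.0, 0.0])
beta = 0.5

# KL duality: tilted distribution pi*(y) proportional to pi_ref(y) exp(r(y)/beta),
# and alpha* = -beta * log E_{pi_ref}[exp(r/beta)]  (Eq. 9).
w = pi_ref * np.exp(r / beta)
pi_star = w / w.sum()
alpha_star = -beta * np.log(np.sum(pi_ref * np.exp(r / beta)))

print(np.round(pi_star, 3))   # mass shifts toward high-reward responses
print(alpha_star)             # log-partition correction, cf. Z(x) in Eq. 4
```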
**Why DPO is Robust to Pointwise Noise.** We posit that the reference distribution closely mirrors the empirical training distribution, given the pre-training step (SFT) common to both RLHF and DPO methods. This ensures that the reference distribution in the DPO phase accurately reflects the training-data noise. In DRO terms, while the reference model $\pi_{\mathrm{ref}}$ may not be entirely *reliable*, the implicit robust framework of DPO counters data perturbations effectively. Specifically, the "worst-case distribution" is defined as the distribution that maximizes risk within the established divergence constraint, analogous to an adversarial noise model in DRO. Varying β gives DPO a varying search space for a better $\pi_\theta$, leading to improved performance. For more discussion of the connection between DPO and DRO, please refer to Appendix C.2.

Moreover, the incorporation of DRO provides a new interpretation of the coefficient β in DPO, transforming it from a mere heuristic design into a "noise reflector". We provide Lemma 3.2 to disclose the relationship between β and η.

**B. The Optimal Value of β Reflects the Noise within the SFT Model.**

**Lemma 3.2.** *(Faury et al., 2020, Lemma 5) The optimal $\beta^{*}(\eta)$ in DPO is monotonically decreasing with respect to η and obeys the following relationship:*

$$\beta^{*}(\eta) = \sqrt{\mathbb{V}_{\pi_{\mathrm{ref}}}[r(x, y)]\,/\,2\eta}, \tag{10}$$

*where $\mathbb{V}_{\pi_{\mathrm{ref}}}[r(x, y)] = \sum_y \pi_{\mathrm{ref}}(y|x)\big(r(x, y) - \sum_y \pi_{\mathrm{ref}}(y|x)\,r(x, y)\big)^2$ denotes the variance of the reward model $r(x, y)$ under the reference distribution $\pi_{\mathrm{ref}}$.*

Lemma 3.2 elucidates the inverse correlation between the parameter β and the robustness radius η. Specifically, as noise within the model increases, the required search space expands, necessitating a larger η and consequently a smaller optimal β. To empirically validate this relationship, we conducted experiments on the IMDB dataset, as outlined in Section 3.1. In these experiments, the noise ratio is controlled by the proportion of low-quality pairs $(y_w, y_l)$ introduced into the training data, generated by the unrefined GPT-2 model. Figure 3a shows that models trained with lower β values (e.g., 0.01) outperform those with higher β values (e.g., 0.1) when trained on 100% low-quality data; a lower β allows a larger search space to counteract significant pointwise noise in the SFT model. We also conducted experiments on the HH dataset, injecting pointwise noise during the SFT phase by incorporating rejected responses into the training samples. Importantly, during the DPO phase, the positive and negative samples remained unchanged, ensuring that noise was introduced only during SFT. The noise ratio is determined by the proportion of rejected responses used as training samples during SFT. As shown in Figure 3b, the optimal value of β decreases as the noise ratio increases, indicating that higher noise levels in SFT require a smaller β for optimal performance. For detailed experimental settings and procedures for both datasets, please refer to Appendix C.3, where more comprehensive explanations are provided.
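A quick numeric illustration of Eq. 10 (ours), reusing the discrete toy distribution from the previous snippet:

```python
import numpy as np

pi_ref = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
r = np.array([1.0, 0.2, -0.5, 2.0, 0.0])

# Reward variance under the reference policy.
mean_r = np.sum(pi_ref * r)
var_r = np.sum(pi_ref * (r - mean_r) ** 2)

# Eq. 10: beta*(eta) = sqrt(Var / (2 eta)) -- larger radius, smaller beta.
for eta in (0.1, 0.5, 2.0):
    print(eta, np.sqrt(var_r / (2 * eta)))
```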
Idea Generation Category: 0 (Conceptual Integration) | id: CbfsKHiWEn
# ON THE EXPRESSIVENESS OF RATIONAL RELU NEURAL NETWORKS WITH BOUNDED DEPTH

**Gennadiy Averkov** BTU Cottbus-Senftenberg, [email protected]
**Christopher Hojny** TU Eindhoven, [email protected]
**Maximilian Merkert** TU Braunschweig, [email protected]

ABSTRACT

To confirm that the expressive power of ReLU neural networks grows with their depth, the function $F_n = \max\{0, x_1, \ldots, x_n\}$ has been considered in the literature. A conjecture by Hertrich, Basu, Di Summa, and Skutella [NeurIPS 2021] states that any ReLU network that exactly represents $F_n$ has at least $\lceil\log_2(n+1)\rceil$ hidden layers. The conjecture has recently been confirmed for networks with integer weights by Haase, Hertrich, and Loho [ICLR 2023]. We follow up on this line of research and show that, within ReLU networks whose weights are decimal fractions, $F_n$ can only be represented by networks with at least $\lceil\log_3(n+1)\rceil$ hidden layers. Moreover, if all weights are $N$-ary fractions, then $F_n$ can only be represented by networks with at least $\Omega\big(\frac{\ln n}{\ln\ln N}\big)$ layers. These results are a partial confirmation of the above conjecture for rational ReLU networks, and provide the first non-constant lower bound on the depth of practically relevant ReLU networks.

1 INTRODUCTION

An important aspect of designing neural network architectures is to understand which functions can be exactly represented by a specific architecture. Here, we say that a neural network, transforming $n$ input values into a single output value, *(exactly) represents* a function $f: \mathbb{R}^n \to \mathbb{R}$ if, for every input $x \in \mathbb{R}^n$, the neural network reports output $f(x)$. Understanding the expressiveness of neural network architectures can help to, among others, derive algorithms (Arora et al., 2018; Khalife et al., 2024; Hertrich & Sering, 2024) and complexity results (Goel et al., 2021; Froese et al., 2022; Bertschinger et al., 2023; Froese & Hertrich, 2023) for training networks.

One of the most popular classes of neural networks are feedforward neural networks with ReLU activation (Goodfellow et al., 2016). Their capability to *approximate* functions is well-studied and has led to several so-called universal approximation theorems, e.g., see (Cybenko, 1989; Hornik, 1991). For example, from a result by Leshno et al. (1993) it follows that any continuous function can be approximated arbitrarily well by ReLU networks with a single hidden layer. In contrast to approximating functions, the understanding of which functions can be *exactly* represented by a neural network is much less mature. A central result by Arora et al. (2018) states that the class of functions that are exactly representable by ReLU networks is the class of continuous piecewise linear (CPWL) functions. In particular, they show that every CPWL function with $n$ inputs can be represented by a ReLU network with $\lceil\log_2(n+1)\rceil$ hidden layers. It is an open question, though, for which functions this number of hidden layers is also necessary. An active research field is therefore to derive lower bounds on the number of required hidden layers. Arora et al. (2018) show that two hidden layers are necessary and sufficient to represent $\max\{0, x_1, x_2\}$ by a ReLU network. However, there is no single function which is known to require more than two hidden layers in an exact representation. In fact, Hertrich et al. (2021) formulate the following conjecture.
**Conjecture 1.** *For every integer $k$ with $1 \le k \le \lceil\log_2(n+1)\rceil$, there exists a function $f: \mathbb{R}^n \to \mathbb{R}$ that can be represented by a ReLU network with $k$ hidden layers, but not with $k-1$ hidden layers.*

Hertrich et al. (2021) also show that this conjecture is equivalent to the statement that any ReLU network representing $\max\{0, x_1, \ldots, x_{2^k}\}$ requires $k+1$ hidden layers. That is, if the conjecture holds true, the lower bound of $\lceil\log_2(n+1)\rceil$ by Arora et al. (2018) is tight. While Conjecture 1 is open in general, it has been confirmed for two subclasses of ReLU networks, namely networks all of whose weights only take integer values (Haase et al., 2023) and, for $n = 4$, so-called $H$-conforming neural networks (Hertrich et al., 2021).

In this article, we follow this line of research by deriving a non-constant lower bound on the number of hidden layers in ReLU networks all of whose weights are $N$-ary fractions. Recall that a rational number is an $N$-ary fraction if it can be written as $\frac{z}{N^t}$ for some integer $z$ and non-negative integer $t$.

**Theorem 2.** *Let $n$ and $N$ be positive integers, and let $p$ be a prime number that does not divide $N$. Every ReLU network with weights being $N$-ary fractions requires at least $\lceil\log_p(n+1)\rceil$ hidden layers to exactly represent the function $\max\{0, x_1, \ldots, x_n\}$.*

**Corollary 3.** *Every ReLU network all of whose weights are decimal fractions requires at least $\lceil\log_3(n+1)\rceil$ hidden layers to exactly represent $\max\{0, x_1, \ldots, x_n\}$.*

While Theorem 2 does not resolve Conjecture 1 because it makes no statement about general real weights, note that in most applications floating-point arithmetic is used (IEEE, 2019). That is, in neural network architectures used in practice, one is actually restricted to weights being $N$-ary fractions. Moreover, when quantization, see, e.g., (Gholami et al., 2022), is used to make neural networks more efficient in terms of memory and speed, weights can become low-precision decimal numbers, cf., e.g., (Nagel et al., 2020). Consequently, Theorem 2 provides, to the best of our knowledge, the first non-constant lower bound on the depth of practically relevant ReLU networks. Relying on Theorem 2, we also derive the following lower bound.

**Theorem 4.** *There is a constant $C > 0$ such that, for all integers $n, N \ge 3$, every ReLU network with weights being $N$-ary fractions that represents $\max\{0, x_1, \ldots, x_n\}$ has depth at least $C \cdot \frac{\ln n}{\ln\ln N}$.*

Theorem 4, in particular, shows that there is no constant-depth ReLU network that exactly represents $\max\{0, x_1, \ldots, x_n\}$ if all weights are rational numbers all having a common denominator $N$. In view of the integral networks considered by Haase et al. (2023), we stress that our results do not simply follow by scaling integer weights to rationals, which has already been discussed in Haase et al. (2023, Sec. 1.3). We therefore extend the techniques by Haase et al. (2023) to make use of number theory and polyhedral combinatorics to prove our results, which cover standard number representations of rationals on a computer.
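For contrast with these lower bounds, the $\lceil\log_2(n+1)\rceil$-hidden-layer upper bound of Arora et al. (2018) has a simple constructive flavor: compute maxima pairwise via $\max(a, b) = \frac{a+b}{2} + \frac{|a-b|}{2}$ and $|z| = \sigma(z) + \sigma(-z)$, which uses only dyadic (2-ary) weights. The sketch below (our own illustration, not the construction from any of the cited papers) evaluates $F_n$ this way and counts the rounds of ReLUs, i.e., hidden layers:

```python
import math
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def max2(a, b):
    # max(a,b) = (a+b)/2 + |a-b|/2, with |z| = relu(z) + relu(-z).
    # All weights involved are +-1 or 1/2: dyadic (2-ary) fractions.
    return 0.5 * (a + b) + 0.5 * (relu(a - b) + relu(b - a))

def F(x):
    """F_n(x) = max{0, x_1, ..., x_n} via a balanced tree of max2 gates.
    Each tree level is one hidden layer of ReLUs."""
    vals = [0.0] + list(x)
    layers = 0
    while len(vals) > 1:
        vals = [max2(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        layers += 1
    return vals[0], layers

x = np.array([-1.0, 3.5, 2.0, -0.5, 7.25, 1.0])
value, depth = F(x)
print(value, depth, math.ceil(math.log2(len(x) + 1)))  # 7.25, 3, 3
```

Theorem 2 with $N = 2$ (the smallest prime not dividing 2 is 3) says that any such dyadic-weight network needs at least $\lceil\log_3(n+1)\rceil$ hidden layers, so this pairwise-max construction is within a constant factor of optimal for dyadic weights.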
Section 2 then provides a short summary of our overall strategy to prove Theorems 2 and 4 as well as some basic notation. The different concepts of polyhedral theory and volumes needed in our proof strategy are detailed in Section 2.1, whereas Section 2.2 recalls a characterization of functions representable by a ReLU neural network from the literature, which forms the basis of our proofs. In Section 3, we derive various properties of polytopes associated with functions representable by a ReLU neural network, which ultimately allows us to prove our main results in Section 3.3. The paper is concluded in Section 4. **Basic Notation for ReLU Networks** To describe the neural networks considered in this article, we introduce some notation. We denote by Z, N, and R the sets of integer, positive integer, and real numbers, respectively. Moreover, Z + and R + denote the sets of non-negative integers and reals, respectively. Let _k ∈_ Z + . A _feedforward neural network with rectified linear units (ReLU)_ (or simply _ReLU_ _network_ in the following) with _k_ + 1 layers can be described by _k_ + 1 affine transformations _t_ [(1)] : R _[n]_ [0] _→_ R _[n]_ [1] _, . . ., t_ [(] _[k]_ [+1)] : R _[n]_ _[k]_ _→_ R _[n]_ _[k]_ [+1] . It _exactly represents_ a function _f_ : R _[n]_ _→_ R if and only if _n_ 0 = _n_, _n_ _k_ +1 = 1, and the alternating composition _t_ [(] _[k]_ [+1)] _◦_ _σ ◦_ _t_ [(] _[k]_ [)] _◦_ _σ ◦· · · ◦_ _t_ [(2)] _◦_ _σ ◦_ _t_ [(1)] coincides with _f_, where, by slightly overloading notation, _σ_ denotes the component-wise application of the _ReLU activation function σ_ : R _→_ R, _σ_ ( _x_ ) = max _{_ 0 _, x}_ to vectors in any dimension. For each _i ∈{_ 1 _, . . ., k_ + 1 _}_ and _x ∈_ R _[n]_ _[i][−]_ [1], let _t_ [(] _[i]_ [)] ( _x_ ) = _A_ [(] _[i]_ [)] _x_ + _b_ [(] _[i]_ [)] for some _A_ [(] _[i]_ [)] _∈_ R _[n]_ _[i]_ _[×][n]_ _[i][−]_ [1] and _b_ [(] _[i]_ [)] _∈_ R _[n]_ _[i]_ . The entries of _A_ [(] _[i]_ [)] are called _weights_ and those of _b_ [(] _[i]_ [)] are called _biases_ of the network. The network’s _depth_ is _k_ + 1, and the _number of hidden layers_ is _k_ . 2 The set of all functions R _[n]_ _→_ R that can be represented exactly by a ReLU network of depth _k_ + 1 is denoted by ReLU _n_ ( _k_ ). Moreover, if _R ⊆_ R is a ring, we denote by ReLU _[R]_ _n_ [(] _[k]_ [)][ the set of all] functions R _[n]_ _→_ R that can be represented exactly by a ReLU network of depth _k_ + 1 all of whose weights are contained in _R_ . Throughout this paper, we will mainly work with the rings Z, R, or the ring of _N_ -ary fractions. The set ReLU _[R]_ _n_ [(0)][ is the set of affine functions] _[ f]_ [(] _[x]_ [1] _[, . . ., x]_ _[n]_ [) =] _[ b]_ [+] _[a]_ [1] _[x]_ [1] [+] _[· · ·]_ [+] _[a]_ _[n]_ _[x]_ _[n]_ [with] _[ b][ ∈]_ [R][,] and _a_ 1 _, . . ., a_ _n_ _∈_ _R_ . It can be directly seen from the definition of ReLU networks that, for _k ∈_ N, one has _f ∈_ ReLU _[R]_ _n_ [(] _[k]_ [)][ if and only if] _[ f]_ [(] _[x]_ [) =] _[ u]_ [0] [+] _[ u]_ [1] [max] _[{]_ [0] _[, g]_ [1] [(] _[x]_ [)] _[}]_ [ +] _[ · · ·]_ [ +] _[ u]_ _[m]_ [max] _[{]_ [0] _[, g]_ _[m]_ [(] _[x]_ [)] _[}]_ holds for some _m ∈_ N, _u_ 0 _∈_ R _, u_ 1 _, . . ., u_ _m_ _∈_ _R_, and functions _g_ 1 _, . . ., g_ _m_ _∈_ ReLU _[R]_ _n_ [(] _[k][ −]_ [1)][.] **Related Literature** Regarding the expressiveness of ReLU networks, Hertrich et al. (2021) show that four layers are needed to exactly represent max _{_ 0 _, x_ 1 _, . . 
**Related Literature** Regarding the expressiveness of ReLU networks, Hertrich et al. (2021) show that four layers are needed to exactly represent $\max\{0, x_1, x_2, x_3, x_4\}$ if the network satisfies the technical condition of being $H$-conforming. By restricting the weights of a ReLU network to be integer, Haase et al. (2023) prove that $\mathrm{ReLU}^{\mathbb{Z}}_n(k-1) \subsetneq \mathrm{ReLU}^{\mathbb{Z}}_n(k)$ for every $k \le \lceil \log_2(n+1) \rceil$. In particular, $\max\{0, x_1, \dots, x_{2^k}\} \notin \mathrm{ReLU}^{\mathbb{Z}}_{2^k}(k)$. If the activation function is changed from ReLU to $x \mapsto \mathbf{1}_{\{x > 0\}}$, Khalife et al. (2024) show that two hidden layers are both necessary and sufficient for all functions representable by such a network. If one is only interested in approximating a function, Safran et al. (2024) show that $\max\{0, x_1, \dots, x_n\}$ can be approximated arbitrarily well by $\mathrm{ReLU}^{\mathbb{Z}}_n(2)$-networks of width $n(n+1)$ with respect to the $L_2$ norm for continuous distributions. By increasing the depth of these networks, they also derive upper bounds on the required width in such an approximation. The results by Safran et al. (2024) belong to the class of so-called universal approximation theorems, which describe the ability to approximate classes of functions by specific types of neural networks, see, e.g., (Cybenko, 1989; Hornik, 1991; Barron, 1993; Pinkus, 1999; Kidger & Lyons, 2020). However, Vardi & Shamir (2020) show that there are significant theoretical barriers for depth-separation results for polynomially-sized $\mathrm{ReLU}_n(k)$-networks for $k \ge 3$, by establishing links to the separation of threshold circuits as well as to so-called natural-proof barriers. When taking specific data into account, Lee et al. (2024) derive lower and upper bounds on both the depth and width of a neural network that correctly classifies a given data set. More general investigations of the relation between the width and depth of a neural network are discussed, among others, by Arora et al. (2018); Eldan & Shamir (2016); Hanin (2019); Raghu et al. (2017); Safran & Shamir (2017); Telgarsky (2016).

2 PROOF STRATEGY AND THEORETICAL CONCEPTS

To prove Theorems 2 and 4, we extend the ideas of Haase et al. (2023). We therefore provide a very concise summary of the arguments of Haase et al. (2023). Afterwards, we briefly mention the main ingredients needed in our proofs, which are detailed in the following subsections.

A central ingredient for the results by Haase et al. (2023) is a polyhedral characterization of all functions in $\mathrm{ReLU}_n(k)$, which has been derived by Hertrich (2022). This characterization links functions representable by a ReLU network to so-called support functions of polytopes $P \subseteq \mathbb{R}^n$ all of whose vertices belong to $\mathbb{Z}^n$, so-called *lattice polytopes*. It turns out that the function $\max\{0, x_1, \dots, x_n\}$ in Theorems 2 and 4 can be expressed as the support function of a particular lattice polytope $P_n \subseteq \mathbb{R}^n$. By using a suitably scaled version $\mathrm{Vol}_n$ of the classical Euclidean volume in $\mathbb{R}^n$, one can achieve $\mathrm{Vol}_n(P) \in \mathbb{Z}$ for all lattice polytopes $P \subseteq \mathbb{R}^n$. Haase et al. (2023) then show that, if the support function $h_P$ of a lattice polytope $P \subseteq \mathbb{R}^n$ can be exactly represented by a ReLU network with integer weights and $k$ hidden layers, all faces of $P$ of dimension at least $2^k$ have an even normalized volume. For $n = 2^k$, however, $\mathrm{Vol}_n(P_n)$ is odd. Hence, its support function cannot be represented by such a network with $k$ hidden layers.
We show that the arguments of Haase et al. (2023) can be adapted by replacing the divisor 2 with an arbitrary prime number $p$. Another crucial insight is that the theory of mixed volumes can be used to analyze the behavior of $\mathrm{Vol}_n(A + B)$ for the Minkowski sum $A + B := \{a + b : a \in A,\, b \in B\}$ of lattice polytopes $A, B \subset \mathbb{R}^n$. The Minkowski-sum operation is also involved in the polyhedral characterization of Hertrich (2022), and so it is also used by Haase et al. (2023), who provide a version of Theorem 2 for integer weights. They, however, do not directly use mixed volumes. A key observation used in our proofs, obtained by a direct application of mixed volumes, is that the map associating to a lattice polytope $P$ the coset of $\mathrm{Vol}_n(P)$ modulo a prime number $p$ is additive when $n$ is a power of $p$. Combining these ingredients yields Theorems 2 and 4.

**Some Basic Notation** The standard basis vectors in $\mathbb{R}^n$ are denoted by $e_1, \dots, e_n$, whereas $0$ denotes the null vector in $\mathbb{R}^n$. Throughout the article, all vectors $x \in \mathbb{R}^n$ are column vectors, and we denote the transposed vector by $x^\top$. If $x$ is contained in the integer lattice $\mathbb{Z}^n$, we call it a *lattice point*. For vectors $x, y \in \mathbb{R}^n$, their scalar product is given by $x^\top y$. For $m \in \mathbb{N}$, we will write $[m]$ for the set $\{1, \dots, m\}$. The convex-hull operator is denoted by $\mathrm{conv}$, and the base-$b$ logarithm by $\log_b$, while the natural logarithm is denoted $\ln$. The central function of this article is $\max\{0, x_1, \dots, x_n\}$, which we abbreviate by $F_n$.

2.1 BASIC PROPERTIES OF POLYTOPES AND LATTICE POLYTOPES

As outlined above, the main tools needed to prove Theorems 2 and 4 are polyhedral theory and different concepts of volumes. This section summarizes the main concepts and properties that we need in our argumentation in Section 3. For more background, we refer the reader to the monographs (Beck & Robins, 2020; Hug & Weil, 2020; Schneider, 2014).

**Polyhedra, Lattice Polyhedra, and Their Normalized Volume** A *polytope* $P \subseteq \mathbb{R}^n$ is the convex hull $\mathrm{conv}(p_1, \dots, p_m)$ of finitely many points $p_1, \dots, p_m \in \mathbb{R}^n$. We introduce the family $\mathcal{P}(S) := \{\mathrm{conv}(p_1, \dots, p_m) : m \in \mathbb{N},\, p_1, \dots, p_m \in S\}$ of all non-empty polytopes with vertices in $S \subseteq \mathbb{R}^n$. Thus, $\mathcal{P}(\mathbb{R}^n)$ is the family of all polytopes in $\mathbb{R}^n$ and $\mathcal{P}(\mathbb{Z}^n)$ is the family of all *lattice polytopes* in $\mathbb{R}^n$. For $d \in \{0, \dots, n\}$, we also introduce the family $\mathcal{P}_d(S) := \{P \in \mathcal{P}(S) : \dim(P) \le d\}$ of polytopes of dimension at most $d$, where the dimension of a polytope $P$ is defined as the dimension of its affine hull, i.e., of the smallest affine subspace of $\mathbb{R}^n$ containing $P$.

The *Euclidean volume* $\mathrm{vol}_n$ on $\mathbb{R}^n$ is the $n$-dimensional Lebesgue measure, scaled so that $\mathrm{vol}_n$ equals 1 on the unit cube $[0,1]^n$. Note that measure-theoretic subtleties play no role in our context since we restrict the use of $\mathrm{vol}_n$ to $\mathcal{P}(\mathbb{R}^n)$. The *normalized volume* $\mathrm{Vol}_n$ on $\mathbb{R}^n$ is the $n$-dimensional Lebesgue measure normalized so that $\mathrm{Vol}_n$ equals 1 on the *standard simplex* $\Delta_n := \mathrm{conv}(0, e_1, \dots, e_n)$. Clearly, $\mathrm{Vol}_n = n! \cdot \mathrm{vol}_n$, and $\mathrm{Vol}_n$ takes non-negative integer values on lattice polytopes.
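For explicit lattice polytopes, both volumes are easy to evaluate numerically. The following sketch (our own illustration; it assumes SciPy's `ConvexHull`, which is not used in the paper) confirms that $\mathrm{Vol}_n(\Delta_n) = 1$ and that the normalized volume of a lattice polytope is a non-negative integer:

```python
import numpy as np
from math import factorial
from scipy.spatial import ConvexHull

def normalized_volume(vertices):
    """Vol_n of a full-dimensional polytope given by its vertices,
    computed as n! times the Euclidean volume of the convex hull."""
    pts = np.asarray(vertices, dtype=float)
    n = pts.shape[1]
    return factorial(n) * ConvexHull(pts).volume

# Standard simplex Delta_n = conv(0, e_1, ..., e_n): Vol_n = 1 for every n.
for n in (2, 3, 4):
    simplex = np.vstack([np.zeros(n), np.eye(n)])
    assert np.isclose(normalized_volume(simplex), 1.0)

# Unit square [0,1]^2: vol_2 = 1, hence Vol_2 = 2! * 1 = 2 (an integer).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(normalized_volume(square))  # -> 2.0
```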
**Support Functions** For a polytope $P = \mathrm{conv}(p_1, \dots, p_m) \subseteq \mathbb{R}^n$, its *support function* is $h_P(x) := \max\{x^\top y : y \in P\}$, and it is well-known that $h_P(x) = \max\{p_1^\top x, \dots, p_m^\top x\}$. Consequently, $\max\{0, x_1, \dots, x_n\}$ from Theorems 2 and 4 is the support function of $\Delta_n$.

**Mixed Volumes** For sets $A, B \subseteq \mathbb{R}^n$, we introduce the *Minkowski sum* $A + B := \{a + b : a \in A,\, b \in B\}$ and the multiplication $\lambda A := \{\lambda a : a \in A\}$ of $A$ by a non-negative factor $\lambda \in \mathbb{R}_+$. For an illustration of the Minkowski sum, we refer to Figure 2. Note that if $S \in \{\mathbb{R}^n, \mathbb{Z}^n\}$ and $A, B \in \mathcal{P}(S)$, then $A + B \in \mathcal{P}(S)$, too; that is, if $A$ and $B$ are (lattice) polytopes, then so is $A + B$. Moreover, the support functions of $A$, $B$, and $A + B$ are related by $h_{A+B} = h_A + h_B$. If $(G, +)$ is an Abelian semi-group (i.e., a set with an associative and commutative binary operation), we call a map $\varphi\colon \mathcal{P}(\mathbb{R}^n) \to G$ *Minkowski additive* if Minkowski addition on $\mathcal{P}(\mathbb{R}^n)$ is preserved by $\varphi$ in the sense that $\varphi(A + B) = \varphi(A) + \varphi(B)$ holds for all $A, B \in \mathcal{P}(\mathbb{R}^n)$. The following is a classical result of Minkowski.

**Theorem 5** (see, e.g., (Schneider, 2014, Ch. 5))**.** *There exists a unique functional, called the* mixed volume*, $\mathrm{V}\colon \mathcal{P}(\mathbb{R}^n)^n \to \mathbb{R}$, with the following properties, valid for all $P_1, \dots, P_n, A, B \in \mathcal{P}(\mathbb{R}^n)$ and $\alpha, \beta \in \mathbb{R}_+$:*

*(a) $\mathrm{V}$ is invariant under permutations, i.e., $\mathrm{V}(P_1, \dots, P_n) = \mathrm{V}(P_{\sigma(1)}, \dots, P_{\sigma(n)})$ for every permutation $\sigma$ on $[n]$.*

*(b) $\mathrm{V}$ is Minkowski linear in all input parameters, i.e., for all $i \in [n]$, it holds that $\mathrm{V}(P_1, \dots, P_{i-1}, \alpha A + \beta B, P_{i+1}, \dots, P_n) = \alpha\, \mathrm{V}(P_1, \dots, P_{i-1}, A, P_{i+1}, \dots, P_n) + \beta\, \mathrm{V}(P_1, \dots, P_{i-1}, B, P_{i+1}, \dots, P_n)$.*

*(c) $\mathrm{V}$ is equal to $\mathrm{Vol}_n$ on the diagonal, i.e., $\mathrm{V}(A, \dots, A) = \mathrm{Vol}_n(A)$.*

We refer to Chapter 5 of the monograph by Schneider (2014) on the Brunn-Minkowski theory for more information on mixed volumes, where also an explicit formula for the mixed volume is presented. Our definition of the mixed volume differs by a factor of $n!$ from the definition in Schneider (2014) since we use the normalized volume $\mathrm{Vol}_n$ rather than the Euclidean volume $\mathrm{vol}_n$ to fix $\mathrm{V}(P_1, \dots, P_n)$ in the case $P_1 = \dots = P_n$. Our way of introducing mixed volumes is customary in the context of algebraic geometry. It is known that, for this normalization, $\mathrm{V}(P_1, \dots, P_n) \in \mathbb{Z}_+$ when $P_1, \dots, P_n$ are lattice polytopes; see, for example, (Maclagan & Sturmfels, 2015, Ch. 4.6). From the defining properties one can immediately see that, for $A, B \in \mathcal{P}(\mathbb{R}^n)$, one has the analogue of the binomial formula, which we will prove in Appendix A.2 for the sake of completeness:

$$\mathrm{Vol}_n(A + B) \;=\; \sum_{i=0}^{n} \binom{n}{i}\, \mathrm{V}(\underbrace{A, \dots, A}_{i}, \underbrace{B, \dots, B}_{n-i}). \tag{1}$$
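As a concrete instance of equation (1): for $n = 2$ it specializes to $\mathrm{Vol}_2(A + B) = \mathrm{Vol}_2(A) + 2\,\mathrm{V}(A, B) + \mathrm{Vol}_2(B)$, so the mixed volume of two lattice polygons can be recovered from three hull computations. A minimal sketch (our own illustration; the helper names are ours):

```python
import numpy as np
from math import factorial
from scipy.spatial import ConvexHull

def normalized_volume(vertices):
    pts = np.asarray(vertices, dtype=float)
    return factorial(pts.shape[1]) * ConvexHull(pts).volume

def minkowski_sum(A, B):
    """Vertex candidates of A + B: all pairwise sums of vertices
    (the convex hull discards the non-extreme ones)."""
    return [tuple(np.add(a, b)) for a in A for b in B]

A = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square,      Vol_2 = 2
B = [(0, 0), (1, 0), (0, 1)]           # standard simplex, Vol_2 = 1

# Equation (1) for n = 2: Vol_2(A+B) = Vol_2(A) + 2 V(A,B) + Vol_2(B).
mixed = (normalized_volume(minkowski_sum(A, B))
         - normalized_volume(A) - normalized_volume(B)) / 2
print(mixed)  # -> 2.0
```

Consistent with the integrality statement above, the mixed volume of the unit square and the standard simplex extracted this way is the integer 2.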
**Normalized Volume of Non-Full-Dimensional Polytopes** So far, we have introduced the normalized volume $\mathrm{Vol}_n\colon \mathcal{P}(\mathbb{R}^n) \to \mathbb{R}_+$; in particular, if $P \in \mathcal{P}(\mathbb{R}^n)$ is not full-dimensional, then $\mathrm{Vol}_n(P) = 0$. We also associate with a polytope $P \in \mathcal{P}_d(\mathbb{Z}^n)$ of dimension at most $d$ an appropriately normalized $d$-dimensional volume by extending the use of $\mathrm{Vol}_d\colon \mathcal{P}(\mathbb{Z}^d) \to \mathbb{Z}_+$ to $\mathrm{Vol}_d\colon \mathcal{P}_d(\mathbb{Z}^n) \to \mathbb{Z}_+$. In the case $\dim(P) < d$, we define $\mathrm{Vol}_d(P) = 0$. If $d = 0$, let $\mathrm{Vol}_d(P) = 1$. In the non-degenerate case $d = \dim(P) \in \mathbb{N}$, we fix $Y$ to be the affine hull of $P$ and consider a bijective affine map $T\colon \mathbb{R}^d \to Y$ satisfying $T(\mathbb{Z}^d) = Y \cap \mathbb{Z}^n$. For such a choice of $T$, we have $T^{-1}(P) \in \mathcal{P}(\mathbb{Z}^d)$. It turns out that the $d$-dimensional volume of $T^{-1}(P)$ depends only on $P$ and not on $T$, so that we may define $\mathrm{Vol}_d(P) := \mathrm{Vol}_d(T^{-1}(P))$.

Based on (Beck & Robins, 2020, Corollary 3.17 and §5.4), there is the following intrinsic way of introducing $\mathrm{Vol}_d(P)$. Let $G(P)$ denote the number of lattice points in $P$. Then, with $t$ ranging over $\mathbb{Z}_+$, one has

$$\mathrm{Vol}_d(P) \;=\; d! \cdot \lim_{t \to \infty} \frac{G(tP)}{t^d}.$$

**Remark 6.** *For every $d$-dimensional affine subspace $Y \subseteq \mathbb{R}^n$ which is affinely spanned by $d+1$ lattice points, we can define $\mathrm{Vol}_d$ for every polytope $P \in \mathcal{P}(Y)$, which is not necessarily a lattice polytope, by the same formula $\mathrm{Vol}_d(P) := \mathrm{Vol}_d(T^{-1}(P))$, using an auxiliary map $T\colon \mathbb{R}^d \to Y$ as described above. Consequently, by replacing the dimension $n$ with $d$ and the family of polytopes $\mathcal{P}(\mathbb{R}^n)$ with the family $\mathcal{P}(Y)$ in Minkowski's Theorem 5, we can introduce the notion of mixed volumes for polytopes in $\mathcal{P}(Y)$. More specifically, we will make use of the mixed volumes of lattice polytopes in $\mathcal{P}(Y \cap \mathbb{Z}^n)$.*

**Normalized Volume of the Affine Join** The following proposition, borrowed from Haase et al. (2023), addresses the divisibility properties of the convex hull of the union of lattice polytopes that lie in skew affine subspaces.

**Proposition 7** (Haase et al. 2023, Lemma 6)**.** *Let $A, B \in \mathcal{P}(\mathbb{Z}^n)$ be polytopes of dimensions $i \in \mathbb{Z}_+$ and $j \in \mathbb{Z}_+$, respectively, such that $P := \mathrm{conv}(A \cup B)$ is of dimension $i + j + 1$. Then $\mathrm{Vol}_{i+j+1}(P)$ is divisible by $\mathrm{Vol}_i(A)\, \mathrm{Vol}_j(B)$. In particular, if $i = 0$, then $P$ is a pyramid over $B$, and its normalized volume $\mathrm{Vol}_{1+j}(P)$ is divisible by the normalized volume $\mathrm{Vol}_j(B)$ of its base $B$.*

For an example illustration, see Figure 1. Since $P_1$ and $P_2$ lie in skew affine subspaces, Proposition 7 applies. Indeed, $\mathrm{Vol}_3(\mathrm{conv}(P_1 \cup P_2)) = 12$ is divisible by $\mathrm{Vol}_2(P_1) = 6$ (and $\mathrm{Vol}_0(P_2) = 1$).

2.2 A POLYHEDRAL CRITERION FOR FUNCTIONS REPRESENTABLE WITH $k$ HIDDEN LAYERS

Next to the geometric concepts that we discussed before, the second main building block of our proofs is the polyhedral characterization of $\mathrm{ReLU}_n(k)$ by Hertrich (2022). In the following, we introduce the necessary concepts and present Hertrich's characterization.
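A useful anchor for this characterization is the observation from Section 2.1 that the central function $F_n$ is precisely the support function $h_{\Delta_n}$ of the standard simplex. The following quick numerical check of this identity is our own sketch, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Vertices of the standard simplex Delta_n: the origin and e_1, ..., e_n.
V = np.vstack([np.zeros(n), np.eye(n)])

for _ in range(1000):
    x = rng.normal(size=n)
    h = (V @ x).max()        # h_{Delta_n}(x) = max over vertices of x^T v
    F = max(0.0, x.max())    # F_n(x) = max{0, x_1, ..., x_n}
    assert np.isclose(h, F)
```

Any statement about ReLU networks representing $F_n$ can therefore be read as a statement about $h_{\Delta_n}$, which is exactly the kind of bridge the polyhedral characterization builds on.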