Graduate Student Publications

This page houses all publications submitted to MGSA by NCSU math grad students. Submit your publications here.

We consider fourth order accurate compact schemes, in both space and time, for the second order wave equation with a variable speed of sound. For unbounded domains, we add a fourth order accurate sponge layer to damp the outgoing waves. We demonstrate that this approach is usually more efficient than lower order schemes, despite being implicit and conditionally stable. Fast time marching of the implicit scheme is accomplished by iterative methods such as multigrid. Computations confirm the design convergence rate for the inhomogeneous, variable wave speed equation.
Link to paper.

In this paper a compact Riemannian manifold with strictly convex boundary is reconstructed from its partial travel time data. This data assumes that an open measurement region on the boundary is given, and that for every point in the manifold the distance function to the points of the measurement region is known. This geometric inverse problem has many connections to seismology, in particular to microseismicity. The reconstruction is based on embedding the manifold in a function space. This requires the differentiation of the distance functions, so the paper also studies some global regularity properties of the distance function on a compact Riemannian manifold with strictly convex boundary.
Link to paper.

In this paper we address optimal control problems subject to fluid flows through deformable, porous media. In particular, we focus on linear quadratic elliptic-parabolic control problems, with both distributed and boundary controls, and prove existence and uniqueness of the optimal control. Furthermore, we derive the first order necessary optimality conditions. These problems have important biological and biomechanical applications. For example, optimizing the pressure of the flow and investigating the influence and control of pertinent biological parameters are relevant for the lamina cribrosa, a porous tissue at the base of the optic nerve head inside the eye that is modeled by poroelasticity; these factors are believed to be related to the development of ocular neurodegenerative diseases such as glaucoma. Moreover, the study and its results apply to other settings, such as the poroelastic modeling of cartilage, bone, and engineered tissues.
Link to paper.

We consider the identifiability problem for the parameters of series-parallel LCR circuit networks. We prove that for networks with only two classes of components (inductor-capacitor (LC), inductor-resistor (LR), and capacitor-resistor (RC)), the parameters are identifiable if and only if the number of non-monic coefficients of the constitutive equations equals the number of parameters. The notion of the “type” of the constitutive equations plays a key role in the identifiability of LC, LR, and RC networks. We also investigate general series-parallel LCR circuits (with all three classes of components) and classify the types of constitutive equations that can arise, showing that there are 22 different types. However, we produce an example showing that the basic notion of type that classifies identifiability for two-class networks is not sufficient to classify the identifiability of general series-parallel LCR circuits.
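As a toy illustration of the counting criterion (our example, not taken from the paper): for a single series LC branch with terminal current $i$ and voltage $v$, the constitutive relation and its monic normal form are
\[
L\,\frac{di}{dt} + \frac{1}{C}\int i\,dt = v
\quad\Longrightarrow\quad
\frac{d^2 i}{dt^2} + \frac{1}{LC}\, i = \frac{1}{L}\,\frac{dv}{dt},
\]
so there are two non-monic coefficients, $1/(LC)$ and $1/L$, matching the two parameters $L$ and $C$; indeed $L$ and $C$ can be recovered from these coefficients, consistent with identifiability.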
Link to paper.

We introduce a class of linear compartmental models called identifiable path/cycle models, which have the property that all of the monomial functions of parameters associated to the directed cycles and paths from input compartments to output compartments are identifiable, and we give sufficient conditions for obtaining such a model. Removing leaks, we then show how one can obtain a locally identifiable model from an identifiable path/cycle model. These identifiable path/cycle models yield the only identifiable models with certain conditions on their graph structure, and thus we provide necessary and sufficient conditions for identifiable models with those graph properties. A sufficient condition based on the graph structure of the model is also provided, so that one can test whether a model is an identifiable path/cycle model by examining the graph itself. We also provide some necessary conditions for identifiability based on graph structure. Our proofs use algebraic and combinatorial techniques.
Link to paper.

A foundational question in the theory of linear compartmental models is how to assess whether a model is identifiable — that is, whether parameter values can be inferred from noiseless data — directly from the combinatorics of the model. We completely answer this question for those models (with one input and one output) in which the underlying graph is a bidirectional tree. Such models include two families of models appearing often in biological applications: catenary and mammillary models. Our proofs are enabled by two supporting results, which are interesting in their own right. First, we give the first general formula for the coefficients of input-output equations (certain equations that can be used to determine identifiability). Second, we prove that identifiability is preserved when a model is enlarged in specific ways involving adding a new compartment with a bidirected edge to an existing compartment.
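As a small worked example in the spirit of the abstract (our computation, not taken from the paper): for the two-compartment catenary model with input $u$ into compartment 1 and output $y = x_1$,
\[
\begin{aligned}
\dot x_1 &= -(a_{01}+a_{21})\,x_1 + a_{12}\,x_2 + u,\\
\dot x_2 &= a_{21}\,x_1 - (a_{02}+a_{12})\,x_2,
\end{aligned}
\]
eliminating $x_2$ gives the input-output equation
\[
\ddot y + (k_1+k_2)\,\dot y + (k_1 k_2 - a_{12}a_{21})\,y = \dot u + k_2\,u,
\qquad k_1 = a_{01}+a_{21},\quad k_2 = a_{02}+a_{12},
\]
and identifiability amounts to asking whether the parameters can be recovered from these coefficients.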
Link to paper.

Many algorithms for determining properties of real algebraic or semi-algebraic sets rely upon the ability to compute smooth points. Existing methods to compute smooth points on semi-algebraic sets use symbolic quantifier elimination tools. In this paper, we present a simple algorithm based on computing the critical points of some well-chosen function that guarantees the computation of smooth points in each connected compact component of a real (semi-)algebraic set. Our technique is intuitive in principle, performs well on previously difficult examples, and is straightforward to implement using existing numerical algebraic geometry software. The practical efficiency of our approach is demonstrated by solving a conjecture on the number of equilibria of the Kuramoto model for the n=4 case. We also apply our method to design an efficient algorithm to compute the real dimension of (semi-)algebraic sets, the original motivation for this research.
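A toy version of the basic idea (not the paper's algorithm or software): smooth points on a compact component of $V(f)$ can be found among the critical points of a suitable function restricted to $V(f)$. Here we take the squared distance to a fixed "random" point as that function and solve the Lagrange system symbolically for the circle.

```python
# Toy illustration: critical points of a well-chosen function on V(f).
# The choice of function (squared distance to a fixed point) is our assumption.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**2 + y**2 - 1                             # the variety V(f): the unit circle
a, b = sp.Rational(3, 7), sp.Rational(2, 5)     # a fixed "random" point
g = (x - a)**2 + (y - b)**2                     # well-chosen function: squared distance to (a, b)

# critical points of g on V(f): grad g = lam * grad f, together with f = 0
eqs = [sp.diff(g, x) - lam * sp.diff(f, x),
       sp.diff(g, y) - lam * sp.diff(f, y),
       f]
for s in sp.solve(eqs, [x, y, lam], dict=True):
    print(s[x], s[y])                           # real solutions are smooth points on the circle
```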
Link to paper.

We study the univariate moment problem of piecewise-constant density functions on the interval [0,1] and its consequences for an inference problem in population genetics. We show that, up to closure, any collection of n moments is achieved by a step function with at most n−1 breakpoints and that this bound is tight. We use this to show that any point in the nth coalescence manifold in population genetics can be attained by a piecewise constant population history with at most n−2 changes. Both the moment cones and the coalescence manifold are projected spectrahedra and we describe the problem of finding a nearest point on them as a semidefinite program.
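For concreteness (our notation, not taken from the paper), the moments of a step function are elementary to write down: if $f$ takes the value $c_i$ on $[t_{i-1}, t_i)$ with breakpoints $0 = t_0 < t_1 < \dots < t_p = 1$, then
\[
m_k(f) = \int_0^1 x^k f(x)\,dx = \sum_{i=1}^{p} c_i\,\frac{t_i^{k+1} - t_{i-1}^{k+1}}{k+1},
\qquad k = 0,1,\dots,n-1,
\]
so a vector of $n$ moments is a polynomial image of the breakpoints and heights.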
Link to paper.

We study probability density functions that are log-concave. Despite the space of all such densities being infinite-dimensional, the maximum likelihood estimate is the exponential of a piecewise linear function determined by finitely many quantities, namely the function values, or heights, at the data points. We explore in what sense exact solutions to this problem are possible. First, we show that the heights given by the maximum likelihood estimate are generically transcendental. For a cell in one dimension, the maximum likelihood estimator is expressed in closed form using the generalized Lambert W function. Even more, we show that finding the log-concave maximum likelihood estimate is equivalent to solving a collection of polynomial-exponential systems of a special form. Even in the case of two equations, very little is known about solutions to these systems. As an alternative, we use Smale’s alpha-theory to refine approximate numerical solutions and to certify solutions to log-concave density estimation.
Link to paper.

The image of a linear space under inversion of some coordinates is an affine variety whose structure is governed by an underlying hyperplane arrangement. In this paper, we generalize work by Proudfoot and Speyer to show that circuit polynomials form a universal Gröbner basis for the ideal of polynomials vanishing on this variety. The proof relies on degenerations to the Stanley–Reisner ideal of a simplicial complex determined by the underlying matroid, which is closely related to the external activity complex defined by Ardila and Boocher. If the linear space is real, then the semi-inverted linear space is also an example of a hyperbolic variety, meaning that all of its intersection points with a large family of linear spaces are real.
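A toy computation (ours, not from the paper) illustrating the first sentence: take the line $x + y = 1$ in the plane and invert only the first coordinate, $(x,y)\mapsto(1/x,\,y)$. Writing $u = 1/x$, the image satisfies
\[
\frac{1}{u} + y = 1 \quad\Longrightarrow\quad u\,y - u + 1 = 0,
\]
so the closure of this semi-inverted line is an affine plane curve cut out by a single polynomial; the general theory describes the ideal of such varieties by circuit polynomials of the underlying matroid.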
Link to paper.

We use the Method of Difference Potentials (MDP) to solve a non-overlapping domain decomposition formulation of the Helmholtz equation. The MDP reduces the Helmholtz equation on each subdomain to a Calderon’s boundary equation with projection on its boundary. The unknowns for the Calderon’s equation are the Dirichlet and Neumann data. Coupling between neighboring subdomains is rendered by applying their respective Calderon’s equations to the same data at the common interface. Solutions on individual subdomains are computed concurrently using a straightforward direct solver. We provide numerical examples demonstrating that our method is insensitive to interior cross-points and mixed boundary conditions, as well as large jumps in the wavenumber for transmission problems, which are known to be problematic for many other Domain Decomposition Methods.
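For orientation (notation ours; the paper should be consulted for the precise discrete form), each subdomain problem is a variable-coefficient Helmholtz equation, and the MDP replaces it by a boundary equation with projection acting on the Cauchy data:
\[
\Delta u + k^2(\mathbf{x})\,u = f \ \ \text{in } \Omega_j,
\qquad
\xi_{\Gamma_j} = \big(u|_{\Gamma_j},\, \partial_n u|_{\Gamma_j}\big),
\qquad
(I - P_{\Gamma_j})\,\xi_{\Gamma_j} = \text{(source contribution)},
\]
so that only boundary unknowns remain on each subdomain, and neighboring subdomains are coupled by requiring their equations to act on the same interface data.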
Link to paper.

We construct a family of compact, implicit, conditionally stable fourth order accurate finite difference schemes for the three-dimensional scalar wave equation with variable propagation speed. High order accuracy is of key importance for the numerical simulation of waves, as it reduces the dispersion error. The schemes we propose use at most three nodes in any coordinate direction or in time.
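For reference, the model problem is the variable-coefficient scalar wave equation (written here in a standard form; the paper may use a different normalization),
\[
\frac{\partial^2 u}{\partial t^2} = c^2(\mathbf{x})\,\Delta u + f(\mathbf{x},t),
\qquad \mathbf{x}\in\mathbb{R}^3,
\]
which the compact schemes approximate to fourth order using a $3\times3\times3$ spatial stencil and three time levels.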
Link to paper.

While the equality of differential signatures is known to be a necessary condition for congruence, it is not sufficient. Hickman claimed that for non-degenerate planar curves, equality of Euclidean signatures implies congruence. We prove that while Hickman’s claim holds for simple, closed curves with simple signatures, it fails for curves with non-simple signatures. In the latter case, we associate a directed graph with the signature and show how various paths along the graph give rise to a family of non-congruent, non-degenerate curves with identical signatures. Using this additional structure, we formulate congruence criteria for non-degenerate, closed, simple curves and show how the paths reflect the global and local symmetries of the corresponding curve.
Link to paper.

Discrete max-linear Bayesian networks are directed graphical models specified by the same recursive structural equations as max-linear models but with discrete innovations. When all of the random variables in the model are binary, these models are isomorphic to the conjunctive Bayesian network (CBN) models of Beerenwinkel, Eriksson, and Sturmfels. Many of the techniques used to study CBN models can be extended to discrete max-linear models and similar results can be obtained. In particular, we extend the fact that CBN models are toric varieties after linear change of coordinates to all discrete max-linear models.
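Recursive max-linear structural equations typically take the following form (our rendering of the standard definition; in the discrete models the innovations $Z_i$ are discrete random variables):
\[
X_i \;=\; \max\Big(\max_{j \in \mathrm{pa}(i)} c_{ij}\,X_j,\; Z_i\Big),
\qquad c_{ij} > 0,
\]
where the inner maximum runs over the parents $\mathrm{pa}(i)$ of node $i$ in the underlying DAG.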
Link to paper.

Phylogenetic networks can model more complicated evolutionary phenomena that trees fail to capture, such as horizontal gene transfer and hybridization. The same Markov models that are used to model evolution on trees can also be extended to networks, and similar questions, such as the identifiability of the network parameter or the invariants of the model, can be asked. In this paper we focus on finding the invariants of the Cavender-Farris-Neyman (CFN) model on level-1 phylogenetic networks. We do this by reducing the problem to finding invariants of sunlet networks, which are level-1 networks consisting of a single cycle with a leaf attached to each vertex. We then determine all quadratic invariants in the sunlet network ideal, which we conjecture generate the full ideal.
Link to paper.

Identifiability is a crucial property for a statistical model: it means that the distributions in the model uniquely determine the parameters that produce them. In phylogenetics, the identifiability of the tree parameter is of particular interest, since it means that phylogenetic models can be used to infer evolutionary histories from data. In this paper we introduce a new computational strategy for proving the identifiability of discrete parameters in algebraic statistical models that uses algebraic matroids naturally associated to the models. We then use this algorithm to prove that the tree parameters are generically identifiable for 2-tree CFN and K3P mixtures. We also show that the k-cycle phylogenetic network parameter is identifiable under the K2P and K3P models.
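A toy illustration of the algebraic-matroid idea (not the paper's models): for a polynomial parametrization, a subset of coordinates is independent in the algebraic matroid of the image exactly when the corresponding rows of the Jacobian have full rank at a generic parameter value.

```python
# Toy algebraic matroid of the image of a parametrization, via Jacobian ranks.
# The parametrization and the "generic" point below are arbitrary choices.
import itertools
import sympy as sp

a, b = sp.symbols('a b')
phi = [a + b, a * b, a - b]                      # toy parametrization R^2 -> R^3
J = sp.Matrix([[sp.diff(p, v) for v in (a, b)] for p in phi])
generic = {a: sp.Rational(7, 3), b: sp.Rational(5, 11)}   # a "generic" parameter value

for size in (1, 2, 3):
    for rows in itertools.combinations(range(3), size):
        rank = J.extract(list(rows), [0, 1]).subs(generic).rank()
        print(rows, "independent" if rank == size else "dependent")
```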
Link to paper.

We introduce a notion of finite sampling consistency for phylogenetic trees and show that the set of finitely sampling consistent and exchangeable distributions on $n$ leaf phylogenetic trees is a polytope. We use this polytope to show that the set of all exchangeable and infinite sampling consistent distributions on 4 leaf phylogenetic trees is exactly Aldous’ beta-splitting model, and give a description of some of the vertices for the polytope of distributions on 5 leaves. We also introduce a new semialgebraic set of exchangeable and sampling consistent models we call the multinomial model and use it to characterize the set of exchangeable and sampling consistent distributions. Using this new model, we obtain a finite de Finetti-type theorem for rooted binary trees in the style of Diaconis’ theorem on finite exchangeable sequences.
Link to paper.

The potential energy surface (PES) describes the energy of a chemical system as a function of its geometry and is a fundamental concept in computational chemistry. A PES provides much useful information about the system, including the structures and energies of various stationary points, such as local minima, maxima, and transition states. Construction of full-dimensional PESs for molecules with more than 10 atoms is computationally expensive and often not feasible. Previous work in our group used sparse interpolation with polynomial basis functions to construct surrogate reduced-dimensional PESs along chemically significant reaction coordinates, such as bond lengths, bond angles, and torsion angles. However, polynomial interpolation does not preserve the periodicity of the PES gradient with respect to angular components of the geometry, such as torsion angles, which can lead to nonphysical phenomena. In this work, we construct a surrogate PES using trigonometric basis functions for a system whose selected reaction coordinates are all torsion angles, resulting in a periodically repeating PES. We find that a trigonometric interpolation basis not only guarantees periodicity of the gradient but also results in slightly lower approximation error than polynomial interpolation.
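A small illustration (not from the paper) of why a trigonometric basis preserves periodicity: an FFT-based trigonometric interpolant of a $2\pi$-periodic function, and its derivative, are periodic by construction. The test function and node count below are arbitrary choices.

```python
# FFT-based trigonometric interpolation of a 2*pi-periodic function.
import numpy as np

f = lambda t: np.exp(np.sin(t))              # smooth 2*pi-periodic test function
N = 17                                        # odd node count keeps the formula below exact
nodes = 2 * np.pi * np.arange(N) / N
c = np.fft.rfft(f(nodes)) / N                 # one-sided Fourier coefficients
k = np.arange(c.size)

def trig_eval(t, derivative=False):
    coeff = c * (1j * k) if derivative else c
    terms = coeff[None, :] * np.exp(1j * np.outer(np.atleast_1d(t), k))
    return terms[:, 0].real + 2.0 * terms[:, 1:].real.sum(axis=1)

print(np.max(np.abs(trig_eval(nodes) - f(nodes))))    # ~1e-15: interpolates the data
print(trig_eval(0.0, derivative=True),
      trig_eval(2 * np.pi, derivative=True))           # agree up to rounding: periodic gradient
```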
Link to paper (pages 9677–9684).

We present a method for dimensionally adaptive sparse trigonometric interpolation of multidimensional periodic functions belonging to a smoothness class of finite order. This method targets applications where periodicity must be preserved and the precise anisotropy is not known a priori. To the authors’ knowledge, this is the first instance of a dimensionally adaptive sparse interpolation algorithm that uses a trigonometric interpolation basis. The motivating application behind this work is the adaptive approximation of a multi-input model for a molecular potential energy surface (PES) where each input represents an angle of rotation. Our method is based on an anisotropic quasi-optimal estimate for the decay rate of the Fourier coefficients of the model; a least-squares fit to the coefficients of the interpolant is used to estimate the anisotropy. Thus, our adaptive approximation strategy begins with a coarse isotropic interpolant, which is gradually refined using the estimated anisotropic rates. The procedure takes several iterations where ever-more accurate interpolants are used to generate ever-improving anisotropy rates. We present several numerical examples of our algorithm where the adaptive procedure successfully recovers the theoretical “best” convergence rate, including an application to a periodic PES approximation. An open-source implementation of our algorithm resides in the Tasmanian UQ library developed at Oak Ridge National Laboratory.
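A toy version of the least-squares anisotropy estimate mentioned above (the decay model used here is our assumption; the paper's quasi-optimal estimate may take a different form): given Fourier coefficients $c_k$ indexed by a multi-index $k$, fit $\log|c_k| \approx \log C - \rho_1|k_1| - \rho_2|k_2|$ and read off the anisotropic rates $\rho_1, \rho_2$.

```python
# Least-squares estimation of anisotropic Fourier-coefficient decay rates.
import numpy as np

rng = np.random.default_rng(1)

# synthetic anisotropic coefficients: |c_k| ~ exp(-(2|k1| + 0.5|k2|)) times noise
K = 8
ks = [(k1, k2) for k1 in range(-K, K + 1) for k2 in range(-K, K + 1)]
c = np.array([np.exp(-(2.0 * abs(k1) + 0.5 * abs(k2))) * rng.uniform(0.5, 1.5)
              for k1, k2 in ks])

# fit log|c_k| = log C - rho_1 |k1| - rho_2 |k2| by linear least squares
A = np.array([[1.0, -abs(k1), -abs(k2)] for k1, k2 in ks])
(logC, rho1, rho2), *_ = np.linalg.lstsq(A, np.log(np.abs(c)), rcond=None)
print("estimated decay rates:", rho1, rho2)   # approximately recovers (2.0, 0.5)
```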
Link to paper (pages A2436–A2460).

The SBV regularity of weak entropy solutions to the Burgers-Poisson equation is considered. We show that the derivative of a solution consists of only the absolutely continuous part and the jump part.
Link to paper.

The paper studies a system of Hamilton-Jacobi equations, arising from a stochastic optimal debt management problem in an infinite time horizon with exponential discount, modeled as a noncooperative interaction between a borrower and a pool of risk-neutral lenders. We establish an existence result of solutions to this system and in turn recover optimal feedback controls. In addition, the behavior of the controls near the boundaries is studied.
Link to paper.

We establish a sharp estimate for a minimal number of binary digits (bits) needed to represent all bounded total generalized variation functions taking values in a general totally bounded metric space $(E,\rho)$ up to an accuracy of $\varepsilon>0$ with respect to the ${\bf L}^1$-distance. Such an estimate is explicitly computed in terms of doubling and packing dimensions of $(E,\rho)$. The obtained result is applied to provide an upper bound on the metric entropy for a set of entropy admissible weak solutions to scalar conservation laws in one-dimensional space with weakly genuinely nonlinear fluxes.
Link to paper.

We investigate block-activated proximal algorithms for multicomponent minimization problems involving a separable nonsmooth convex function penalizing the components individually, and nonsmooth convex coupling terms penalizing linear mixtures of the components. In the case of smooth coupling functions, several algorithms exist and they are well understood. By contrast, in the fully nonsmooth case, few block-activated methods are available and little effort has been devoted to assessing their merits and numerical performance. The goal of the paper is to address this gap. The numerical experiments concern machine learning and signal recovery problems.
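A generic problem of the kind described (our notation, meant only to fix ideas) is
\[
\underset{x_1,\dots,x_m}{\text{minimize}}\ \ \sum_{i=1}^{m} f_i(x_i) \;+\; \sum_{k=1}^{q} g_k\!\Big(\sum_{i=1}^{m} L_{k,i}\,x_i\Big),
\]
where the $f_i$ and $g_k$ are nonsmooth convex functions and the $L_{k,i}$ are linear operators; block activation means that only some of the components $x_i$, and the proximity operators attached to them, are updated at each iteration.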
Link to paper.

The present paper first aims to study the BV-type regularity of viscosity solutions of the Hamilton-Jacobi equation
\[
u_t(t,x) + H(D_x u(t,x)) = 0, \qquad (t,x) \in \,]0,\infty[\,\times \mathbb{R}^d,
\]
with a coercive and uniformly directionally convex Hamiltonian $H \in C^1(\mathbb{R}^d)$. More precisely, we establish a BV bound on the slope of backward characteristics $DH(D_x u(t,\cdot))$ starting at a positive time $t > 0$. Relying on the BV bound, we quantify the metric entropy in $W^{1,1}_{\mathrm{loc}}(\mathbb{R}^d)$ of the map $S_t$ that associates to every initial datum $u_0 \in \mathrm{Lip}(\mathbb{R}^d)$ the corresponding solution $S_t u_0$. Finally, a counterexample is constructed to show that both $D_x u(t,\cdot)$ and $DH(D_x u(t,\cdot))$ fail to be in $BV_{\mathrm{loc}}$ for a general strictly convex and coercive $H \in C^2(\mathbb{R}^d)$.
Link to paper.

In this paper, we provide upper and lower estimates for the minimal number of functions needed to represent a bounded variation function with an accuracy of $\varepsilon$ with respect to the ${\bf L}^1$-distance.
Link to paper.

We propose an asynchronous block-iterative decomposition algorithm to solve Nash equilibrium problems involving a mix of nonsmooth and smooth functions acting on linear mixtures of strategies. The methodology relies heavily on monotone operator theory and in particular on warped resolvents.
Link to paper.

We propose a novel approach to monotone operator splitting based on the notion of a saddle operator. Under investigation is a highly structured multivariate monotone inclusion problem involving a mix of set-valued, cocoercive, and Lipschitzian monotone operators, as well as various monotonicity-preserving operations among them. This model encompasses most formulations found in the literature. A limitation of existing primal-dual algorithms is that they operate in a product space that is too small to achieve full splitting of our problem in the sense that each operator is used individually. To circumvent this difficulty, we recast the problem as that of finding a zero of a saddle operator that acts on a bigger space. This leads to an algorithm of unprecedented flexibility, which achieves full splitting, exploits the specific attributes of each operator, is asynchronous, and requires activating only blocks of operators at each iteration, as opposed to all of them. The latter feature is of critical importance in large-scale problems. Weak convergence of the main algorithm is established, as well as the strong convergence of a variant. Various applications are discussed, and instantiations of the proposed framework in the context of variational inequalities and minimization problems are presented.
Link to paper.

We establish the convergence of the forward-backward splitting algorithm based on Bregman distances for the sum of two monotone operators in reflexive Banach spaces. Even in Euclidean spaces, the convergence of this algorithm has so far been proved only in the case of minimization problems. The proposed framework features Bregman distances that vary over the iterations and a novel assumption on the single-valued operator that captures various properties scattered in the literature. In the minimization setting, we obtain rates that are sharper than existing ones.
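For orientation (a standard form of the iteration, with the varying distances suppressed; see the paper for the precise assumptions): to find a zero of $A + B$, the Bregman forward-backward step with distance generated by a Legendre function $f$ and step size $\gamma_n > 0$ reads
\[
x_{n+1} \;=\; \big(\nabla f + \gamma_n A\big)^{-1}\big(\nabla f(x_n) - \gamma_n B x_n\big),
\]
which reduces to the classical forward-backward iteration when $f = \tfrac{1}{2}\|\cdot\|^2$.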
Link to paper.

Resolvents of set-valued operators play a central role in various branches of mathematics and in particular in the design and the analysis of splitting algorithms for solving monotone inclusions. We propose a generalization of this notion, called warped resolvent, which is constructed with the help of an auxiliary operator. The properties of warped resolvents are investigated and connections are made with existing notions. Abstract weak and strong convergence principles based on warped resolvents are proposed and shown to not only provide a synthetic view of splitting algorithms but to also constitute an effective device to produce new solution methods for challenging inclusion problems.
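One common way to write the construction (an assumption here; the paper gives the precise definition and the hypotheses on the kernel $K$): the warped resolvent of a set-valued operator $A$ with respect to an auxiliary operator $K$ is
\[
J^{A}_{K} \;=\; (K + A)^{-1} \circ K,
\]
which recovers the usual resolvent $J_A = (\mathrm{Id} + A)^{-1}$ when $K = \mathrm{Id}$.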
Link to paper.

We show that the weak convergence of the Douglas–Rachford algorithm for finding a zero of the sum of two maximally monotone operators cannot be improved to strong convergence. Likewise, we show that strong convergence can fail for the method of partial inverses.
Link to paper.

We classify the two-way quasi-independence models (that is, independence models with structural zeros) that have rational maximum likelihood estimators, or MLEs. We give a necessary and sufficient condition on the bipartite graph associated to the model for the MLE to be rational. In this case, we give an explicit formula for the MLE in terms of combinatorial features of this graph. We also use the Horn uniformization to show that for general log-linear models M with rational MLE, any model obtained by restricting to a face of the cone of sufficient statistics of M also has rational MLE.
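As a classical point of reference (not specific to this paper): for the two-way independence model without structural zeros, the MLE is already rational in the data, namely
\[
\hat p_{ij} \;=\; \frac{u_{i+}\,u_{+j}}{u_{++}^2},
\]
where $u_{i+}$, $u_{+j}$, and $u_{++}$ are the row, column, and grand totals of the table $u$; the paper characterizes which patterns of structural zeros preserve this kind of closed rational form.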
Link to paper (pages 917–941).

Using the policy gradient algorithm, we train a single-hidden-layer neural network to balance a physically accurate simulation of a single inverted pendulum. The trained weights and biases can then be transferred to a physical agent, where they are robust enough to balance a real inverted pendulum. This hybrid approach of training in simulation allows thousands of trial runs to be completed orders of magnitude faster than would be possible in the real world, resulting in greatly reduced training time and more iterations, which produces a more robust model. Compared with existing reinforcement learning methods, the resulting control is smoother, is learned faster, and withstands forced disturbances.
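A minimal REINFORCE-style sketch of the approach described above (not the authors' code: the pendulum dynamics, network size, reward, and hyperparameters are all assumptions made for illustration).

```python
# Policy gradient (REINFORCE with a baseline) for a toy inverted-pendulum task.
import numpy as np

rng = np.random.default_rng(0)

def step(state, torque, dt=0.02, g=9.8, l=1.0, m=1.0):
    """One Euler step of a pendulum; theta is measured from the upright position."""
    theta, omega = state
    omega = omega + dt * (g / l * np.sin(theta) + torque / (m * l * l))
    theta = theta + dt * omega
    return np.array([theta, omega])

# single-hidden-layer policy network: state -> probability of pushing "right"
H = 16
W1 = rng.normal(0.0, 0.5, (H, 2)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, H);      b2 = 0.0

def policy(s):
    h = np.tanh(W1 @ s + b1)
    p = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # P(action = +1)
    return p, h

def run_episode(max_steps=200):
    s = np.array([rng.normal(0.0, 0.05), 0.0])
    grads, rewards = [], []
    for _ in range(max_steps):
        p, h = policy(s)
        a = 1 if rng.random() < p else -1
        d = (1.0 if a == 1 else 0.0) - p       # d log pi(a|s) / d logit
        dh = d * W2 * (1.0 - h * h)            # backprop through the tanh layer
        grads.append((np.outer(dh, s), dh, d * h, d))
        s = step(s, torque=2.0 * a)
        rewards.append(1.0)                    # +1 for each step spent near upright
        if abs(s[0]) > 0.5:                    # pendulum fell over
            break
    return grads, rewards

def train(episodes=500, lr=0.01, gamma=0.99):
    global W1, b1, W2, b2
    for ep in range(episodes):
        grads, rewards = run_episode()
        G, returns = 0.0, []
        for r in reversed(rewards):            # discounted returns
            G = r + gamma * G
            returns.append(G)
        returns.reverse()
        baseline = np.mean(returns)
        for (gW1, gb1, gW2, gb2), Gt in zip(grads, returns):
            adv = Gt - baseline                # REINFORCE update with a baseline
            W1 += lr * adv * gW1; b1 += lr * adv * gb1
            W2 += lr * adv * gW2; b2 += lr * adv * gb2
        if (ep + 1) % 100 == 0:
            print(f"episode {ep + 1}: balanced for {len(rewards)} steps")

train()
```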
Link to paper.

We consider the problem of recovering a signal from nonlinear transformations, under convex constraints modeling a priori information. Standard feasibility and optimization methods are ill-suited to tackle this problem due to the nonlinearities. We show that, in many common applications, the transformation model can be associated with fixed point equations involving firmly nonexpansive operators. In turn, the recovery problem is reduced to a tractable common fixed point formulation, which is solved efficiently by a provably convergent, block-iterative algorithm. Applications to signal and image recovery are demonstrated. Inconsistent problems are also addressed.
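For orientation (standard definitions, not the paper's specific model): an operator $F$ on a Hilbert space is firmly nonexpansive if
\[
\|Fx - Fy\|^2 + \|(\mathrm{Id}-F)x - (\mathrm{Id}-F)y\|^2 \;\le\; \|x - y\|^2
\quad \text{for all } x, y,
\]
and the recovery problem is then posed as finding a common fixed point of finitely many such operators that also lies in the intersection of the constraint sets.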

Link to paper.

Under investigation is the problem of finding the best approximation of a function in a Hilbert space subject to convex constraints and prescribed nonlinear transformations. We show that in many instances these prescriptions can be represented using firmly nonexpansive operators, even when the original observation process is discontinuous. The proposed framework thus captures a large body of classical and contemporary best approximation problems arising in areas such as harmonic analysis, statistics, interpolation theory, and signal processing. The resulting problem is recast in terms of a common fixed point problem and solved with a new block-iterative algorithm that features approximate projections onto the individual sets as well as an extrapolated relaxation scheme that exploits the possible presence of affine constraints. A numerical application to signal recovery is demonstrated.

Link to paper.