A large body of work has demonstrated that parameterized artificial neural networks (ANNs) can efficiently describe ground states of numerous interesting quantum many-body Hamiltonians. However, the standard variational algorithms used to update or train the ANN parameters can get trapped in local minima, especially for frustrated systems and even if the representation is sufficiently expressive. We propose a parallel tempering method that facilitates escape from such local minima. This method involves training multiple ANNs independently, with each simulation governed by a Hamiltonian with a different "driver" strength, in analogy to quantum parallel tempering, and it incorporates an update step into the training that allows for the exchange of neighboring ANN configurations. We study instances from two classes of Hamiltonians to demonstrate the utility of our approach using Restricted Boltzmann Machines as our parameterized ANN. The first instance is based on a permutation-invariant Hamiltonian whose landscape stymies the standard training algorithm by drawing it increasingly to a false local minimum. The second instance is four hydrogen atoms arranged in a rectangle, which is an instance of the second quantized electronic structure Hamiltonian discretized using Gaussian basis functions. We study this problem in a minimal basis set, which exhibits false minima that can trap the standard variational algorithm despite the problem's small size. We show that augmenting the training with quantum parallel tempering is useful for finding good approximations to the ground states of these problem instances.
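The exchange step for neighboring replicas can be sketched generically. The following is a minimal illustration, not the paper's implementation: it uses inverse temperatures as stand-ins for the driver strengths and the standard Metropolis exchange criterion; all names here (`maybe_swap`, `tempering_step`) are choices of this sketch.

```python
import math
import random

def maybe_swap(energy_i, energy_j, beta_i, beta_j, rng=random):
    """Metropolis-style exchange criterion for neighboring replicas.

    Accepts a swap of configurations between replicas i and j with
    probability min(1, exp((beta_i - beta_j) * (energy_i - energy_j))).
    """
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or rng.random() < math.exp(delta)

def tempering_step(replicas, rng=random):
    """One exchange sweep over neighboring replicas.

    `replicas` is a list of dicts with keys 'params', 'energy', 'beta';
    the training (energy evaluation) itself happens outside this sketch.
    """
    for i in range(len(replicas) - 1):
        a, b = replicas[i], replicas[i + 1]
        if maybe_swap(a['energy'], b['energy'], a['beta'], b['beta'], rng):
            a['params'], b['params'] = b['params'], a['params']
            a['energy'], b['energy'] = b['energy'], a['energy']
    return replicas
```

In this convention a "colder" replica (larger beta) stuck at a higher energy always trades configurations with a "hotter" neighbor holding a lower-energy one, which is the escape mechanism the tempering provides.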
We propose a quantum annealing protocol that effectively probes the dynamics of a single qubit on D-Wave's quantum annealing hardware. This protocol uses D-Wave's h-gain schedule functionality, which allows the rapid suppression of the longitudinal magnetic field at arbitrary points during the anneal. This feature enables us to distinguish between open and closed-system dynamics as well as the presence and absence of longitudinal magnetic field noise. We show that thermal fluctuations alone are not sufficient to explain the system's dynamics and that a prominent role is played by magnetic field fluctuations, which need to be included in an open quantum system description. Moreover, our protocol only requires single-qubit measurements, which makes it suitable as an exploration and calibration tool for large-scale quantum annealing hardware.
Open quantum systems are a topic of intense theoretical research. The use of master equations to model a system's evolution subject to an interaction with an external environment is one of the most successful theoretical paradigms. General experimental tools to study different open system realizations have been limited, and so it is highly desirable to develop experimental tools which emulate diverse master equation dynamics and give a way to test open systems theories. In this paper we demonstrate a systematic method for engineering specific system-environment interactions and emulating master equations of a particular form using classical stochastic noise in a superconducting transmon qubit. We also demonstrate that non-Markovian noise can be used as a resource to extend the coherence of a quantum system and counteract the adversarial effects of Markovian environments.
We investigate the occurrence of many-body localization (MBL) on a spin-1/2 transverse-field Ising model defined on a Chimera connectivity graph with random exchange interactions and longitudinal fields. We observe a transition from an ergodic phase to a nonthermal phase for individual energy eigenstates induced by a critical disorder strength for the Ising parameters. Our result follows from the analysis of both the mean half-system block entanglement and the energy-level statistics. We identify the critical point associated with this transition using the maximum variance of the block entanglement over the disorder ensemble as a function of the disorder strength. The calculated energy density phase diagram shows the existence of a mobility edge in the energy spectrum. In terms of the energy-level statistics, the system changes from the Gaussian orthogonal ensemble for weak disorder to a Poisson distribution limit for strong randomness, which implies localization behavior. We then realize the time-independent disordered Ising Hamiltonian experimentally using a reverse annealing quench-pause-quench protocol on a D-Wave 2000Q programmable quantum annealer. We characterize the transition from the thermal to the localized phase through magnetization measurements at the end of the annealing dynamics, and the results are compatible with our theoretical prediction for the critical point. However, the same behavior can be reproduced using a classical spin-vector Monte Carlo simulation, which suggests that genuine quantum signatures of the phase transition remain out of reach using this experimental platform and protocol.
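A standard way to quantify the GOE-to-Poisson crossover in level statistics is the mean ratio of adjacent level spacings, for which GOE statistics give roughly 0.53 and Poisson roughly 0.39. A minimal sketch (the function name is a choice of this sketch, not from the paper):

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean ratio of adjacent energy-level spacings,
    r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}).

    GOE statistics give <r> ~ 0.53 (ergodic phase);
    Poisson statistics give <r> = 2 ln 2 - 1 ~ 0.39 (localized phase).
    """
    s = np.diff(np.sort(np.asarray(levels, dtype=float)))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return float(np.mean(r))
```

Applied to a synthetic Poisson spectrum (independent exponential spacings), the estimator lands near the Poisson value, which is the localized-phase signature discussed above.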
In a typical quantum annealing protocol, the system starts with a transverse field Hamiltonian that is gradually turned off and replaced by a longitudinal Ising Hamiltonian. The ground state of the Ising Hamiltonian encodes the solution to the computational problem of interest, and the state overlap with this ground state gives the success probability of the annealing protocol. The form of the annealing schedule can have a significant impact on the ground-state overlap at the end of the anneal, so precise control over these annealing schedules can be a powerful tool for increasing success probabilities of annealing protocols. Here we show how superconducting circuits, in particular capacitively shunted flux qubits, can be used to construct quantum annealing systems by providing tools for mapping circuit flux biases to Pauli coefficients. We use this mapping to find customized annealing schedules: appropriate circuit control biases that yield a desired annealing schedule, while accounting for the physical limitations of the circuitry. We then provide examples and proposals that utilize this capability to improve quantum annealing performance.
With current semiconductor technology reaching its physical limits, special-purpose hardware has emerged as an option to tackle specific computing-intensive challenges. Optimization in the form of solving quadratic unconstrained binary optimization problems, or equivalently Ising spin glasses, has been the focus of several new dedicated hardware platforms. These platforms come in many different flavors, from highly efficient digital-logic implementations of established algorithms to proposals of analog hardware implementing new algorithms. In this work, we use a mapping of a specific class of linear equations whose solutions can be found efficiently, to a hard constraint satisfaction problem (three-regular three-XORSAT, or an Ising spin glass) with a 'golf-course' shaped energy landscape, to benchmark several of these different approaches. We perform a scaling and prefactor analysis of the performance of Fujitsu's digital annealer unit (DAU), the D-Wave Advantage quantum annealer, a virtual MemComputing machine, Toshiba's simulated bifurcation machine (SBM), the SATonGPU algorithm from Bernaschi et al., and our implementation of parallel tempering. We identify the SATonGPU and DAU as currently having the smallest scaling exponent for this benchmark, with SATonGPU having a small scaling advantage and in addition having by far the smallest prefactor thanks to its use of massive parallelism. Our work provides an objective assessment and a snapshot of the promise and limitations of dedicated optimization hardware relative to a particular class of optimization problems.
Motivated by recent experiments in which specific thermal properties of complex many-body systems were successfully reproduced on a commercially available quantum annealer, we examine the extent to which quantum annealing hardware can reliably sample from the thermal state in a specific basis associated with a target quantum Hamiltonian. We address this question by studying the diagonal thermal properties of the canonical one-dimensional transverse-field Ising model on a D-Wave 2000Q quantum annealing processor. We find that the quantum processor fails to produce the correct expectation values predicted by Quantum Monte Carlo. Comparing to master equation simulations, we find that this discrepancy is best explained by how the measurements at finite transverse fields are enacted on the device. Specifically, measurements at finite transverse field require the system to be quenched from the target Hamiltonian to a Hamiltonian with negligible transverse field, and this quench is too slow. The limitations imposed by such hardware make it an unlikely candidate for thermal sampling, and it remains an open question what thermal expectation values can be robustly estimated in general for arbitrary quantum many-body systems.
We propose a protocol for quantum adiabatic optimization whereby an intermediary Hamiltonian that is diagonal in the computational basis is turned on and off during the interpolation. This “diagonal catalyst” serves to bias the energy landscape towards a given spin configuration, and we show how this can remove the first-order phase transition present in the standard protocol for the ferromagnetic p-spin and the weak-strong cluster problems. The success of the protocol also makes clear how it can fail: biasing the energy landscape towards a state only helps in finding the ground state if the Hamming distance from the ground state and the energy of the biased state are correlated. We present examples where biasing towards low-energy states that are nonetheless very far in Hamming distance from the ground state can severely worsen the efficiency of the algorithm compared to the standard protocol. Our results for the diagonal catalyst protocol are analogous to results exhibited by adiabatic reverse annealing, so our conclusions should apply to that protocol as well.
Annealing schedule control provides opportunities to better understand the manner and mechanisms by which putative quantum annealers operate. By appropriately modifying the annealing schedule to include a pause (keeping the Hamiltonian fixed) for a period of time, we show that it is possible to more directly probe the dissipative dynamics of the system at intermediate points along the anneal and examine thermal relaxation rates, for example, by observing the repopulation of the ground state after the minimum spectral gap. We provide a detailed comparison of experiments from a D-Wave device, simulations of the quantum adiabatic master equation, and a classical analogue of quantum annealing, spin-vector Monte Carlo, and we observe qualitative agreement, showing that the characteristic increase in success probability when pausing is not a uniquely quantum phenomenon. We find that the relaxation in our system is dominated by a single timescale, which allows us to give a simple condition for when we can expect pausing to improve the time to solution, the relevant metric for classical optimization. Finally, we also explore in simulation the role of temperature whilst pausing as a means to better distinguish quantum and classical models of quantum annealers.
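A pause of the kind described simply holds the annealing fraction s fixed for some interval. A toy schedule function illustrating this (the linear ramp and all names are assumptions of this sketch, not the hardware's actual schedule):

```python
def schedule_with_pause(t, t_total, s_pause, pause_duration):
    """Anneal fraction s(t) for a linear ramp interrupted by a pause.

    The anneal proceeds linearly from s=0 to s=1 over `t_total`, except
    that it holds at s = s_pause for `pause_duration` once that point is
    reached, so the total protocol lasts t_total + pause_duration.
    """
    t_pause_start = s_pause * t_total  # time at which s reaches s_pause
    if t <= t_pause_start:
        return t / t_total
    if t <= t_pause_start + pause_duration:
        return s_pause                 # Hamiltonian held fixed
    return min(1.0, (t - pause_duration) / t_total)
```

During the flat segment the Hamiltonian is constant, which is what allows the dissipative dynamics (e.g. ground-state repopulation after the minimum gap) to be probed in isolation.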
Quantum fluctuations driven by non-stoquastic Hamiltonians have been conjectured to be an important and perhaps essential missing ingredient for achieving a quantum advantage with adiabatic optimization. We introduce a transformation that maps every non-stoquastic adiabatic path ending in a classical Hamiltonian to a corresponding stoquastic adiabatic path by appropriately adjusting the phase of each matrix entry in the computational basis. We compare the spectral gaps of these adiabatic paths and find both theoretically and numerically that the paths based on non-stoquastic Hamiltonians have generically smaller spectral gaps between the ground and first excited states, suggesting they are less useful than stoquastic Hamiltonians for quantum adiabatic optimization. These results apply to any adiabatic algorithm which interpolates to a final Hamiltonian that is diagonal in the computational basis.
Boltzmann machines, a class of machine learning models, are the basis of several deep learning methods that have been successfully applied to both supervised and unsupervised machine learning tasks. These models assume that some given dataset is generated according to a Boltzmann distribution, and the goal of the training procedure is to learn the set of parameters that most closely match the input data distribution. Training such models is difficult due to the intractability of traditional sampling techniques, and proposals using quantum annealers for sampling hope to mitigate the cost associated with sampling. However, real physical devices will inevitably be coupled to the environment, and the strength of this coupling affects the effective temperature of the distributions from which a quantum annealer samples. To counteract this problem, error correction schemes that can effectively reduce the temperature are needed if there is to be some benefit in using quantum annealing for problems at a larger scale, where we might expect the effective temperature of the device to be too high. To this end, we have applied nested quantum annealing correction (NQAC) to do unsupervised learning with a small bars and stripes dataset, and to do supervised learning with a coarse-grained MNIST dataset, which consists of black-and-white images of hand-written integers. For both datasets we demonstrate improved training and a concomitant effective temperature reduction at higher noise levels relative to the unencoded case. We also find better performance overall with longer anneal times and offer an interpretation of the results based on a comparison to simulated quantum annealing and spin vector Monte Carlo. 
A counterintuitive aspect of our results is that the output distribution generally becomes less Gibbs-like with increasing nesting level and increasing anneal times, which shows that improved training performance can be achieved without equilibration to the target Gibbs distribution.
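A common diagnostic for how Gibbs-like a sampler's output is, described here as a generic technique rather than the paper's exact procedure, is a least-squares fit of an effective inverse temperature to the empirical distribution over energies:

```python
import numpy as np

def effective_beta(energies, weights):
    """Least-squares estimate of an effective inverse temperature.

    Fits log p(E) ~ -beta_eff * E + const to a histogram of sampled
    states, where `weights` are counts (or probabilities) per energy.
    """
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    A = np.column_stack([np.asarray(energies, dtype=float),
                         np.ones(len(p))])
    (slope, _), *_ = np.linalg.lstsq(A, np.log(p), rcond=None)
    return -slope
```

For a distribution that is exactly Boltzmann the fit recovers the true beta; systematic deviations from the fitted line are one way the "less Gibbs-like" behavior mentioned above shows up.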
We present a quantum Monte Carlo algorithm for the simulation of general quantum and classical many-body models within a single unifying framework. The algorithm builds on a power series expansion of the quantum partition function in its off-diagonal terms and is both parameter-free and Trotter error-free. In our approach, the quantum dimension consists of products of elements of a permutation group. As such, it allows for the study of a very wide variety of models on an equal footing. To demonstrate the utility of our technique, we use it to clarify the emergence of the sign problem in the simulations of non-stoquastic physical models. We showcase the flexibility of our algorithm and the advantages it offers over existing state-of-the-art by simulating transverse-field Ising model Hamiltonians and comparing the performance of our technique against that of the stochastic series expansion algorithm. We also study a transverse-field Ising model augmented with randomly chosen two-body transverse-field interactions.
We propose a two-qubit experiment for validating tunable antiferromagnetic XX interactions in quantum annealing. Such interactions allow the time-dependent Hamiltonian to be nonstoquastic, and the instantaneous ground state can have negative amplitudes in the computational basis. Our construction relies on how the degeneracy of the Ising Hamiltonian's ground states is broken away from the end point of the anneal: above a certain value of the antiferromagnetic XX interaction strength, the perturbative ground state at the end of the anneal changes from a symmetric to an antisymmetric state. This change is associated with a suppression of one of the Ising ground states, which can then be detected using solely computational basis measurements. We show that a semiclassical approximation of the annealing protocol fails to reproduce this feature, making it a candidate “quantum signature” of the evolution.
The quantum adiabatic unstructured search algorithm is one of only a handful of quantum adiabatic optimization algorithms to exhibit provable speedups over their classical counterparts. With no fault tolerance theorems to guarantee the resilience of such algorithms against errors, understanding the impact of imperfections on their performance is of both scientific and practical significance. We study the robustness of the algorithm against various types of imperfections: limited control over the interpolating schedule, Hamiltonian misspecification, and interactions with a thermal environment. We find that the unstructured search algorithm's quadratic speedup is generally not robust to the presence of any one of the above non-idealities, and in some cases we find that it imposes unrealistic conditions on how the strength of these noise sources must scale to maintain the quadratic speedup.
Recent technological breakthroughs have precipitated the availability of specialized devices that promise to solve NP-hard problems faster than standard computers. These 'Ising machines', however, are analog in nature and as such inevitably suffer from implementation errors. We find that their success probability decays exponentially with problem size for a fixed error level, and we derive a sufficient scaling law for the error in order to maintain a fixed success probability. We corroborate our results with experiment and numerical simulations and discuss the practical implications of our findings.
The viability of non-stoquastic catalyst Hamiltonians to deliver consistent quantum speedups in quantum adiabatic optimization remains an open question. The infinite-range ferromagnetic p-spin model is a rare example exhibiting an exponential advantage for non-stoquastic catalysts over its stoquastic counterpart. We revisit this model and note how the incremental changes in the ground state wavefunction give an indication of how the non-stoquastic catalyst provides an advantage. We then construct two new examples that exhibit an advantage for non-stoquastic catalysts over stoquastic catalysts. The first is another infinite range model that is only 2-local but also exhibits an exponential advantage, and the second is a geometrically local Ising example that exhibits a growing advantage up to the maximum system size we study.
The glued-trees problem is the only example known to date for which quantum annealing provides an exponential speedup, albeit by partly using excited state evolution, in an oracular setting. How robust is this speedup to noise on the oracle? To answer this, we construct phenomenological short-range and long-range noise models, and noise models that break or preserve the reflection symmetry of the spectrum. We show that under the long-range noise models an exponential quantum speedup is retained. However, we argue that a classical algorithm with an equivalent long-range noise model also exhibits an exponential speedup over the noiseless model. In the quantum setting the long-range noise is able to lift the spectral gap of the problem so that the evolution changes from diabatic to adiabatic. In the classical setting, long-range noise creates a significant probability of the walker landing directly on the EXIT vertex. Under short-range noise the exponential speedup is lost, but a polynomial quantum speedup is retained for sufficiently weak noise. In contrast to noise range, we find that breaking of spectral symmetry by the noise has no significant impact on the performance of the noisy algorithms. Our results about the long-range models highlight that care must be taken in selecting phenomenological noise models so as not to change the nature of the computational problem. We conclude from the short-range noise model results that the exponential speedup in the glued-trees problem is not robust to noise, but a polynomial quantum speedup is still possible.
The observation of an unequivocal quantum speedup remains an elusive objective for quantum computing. A more modest goal is to demonstrate a scaling advantage over a class of classical algorithms for a computational problem running on quantum hardware. The D-Wave quantum annealing processors have been at the forefront of experimental attempts to address this goal, given their relatively large numbers of qubits and programmability. A complete determination of the optimal time-to-solution using these processors has not been possible to date, preventing definitive conclusions about the presence of a scaling advantage. The main technical obstacle has been the inability to verify an optimal annealing time within the available range. Here, we overcome this obstacle using a class of problem instances constructed by systematically combining many-spin frustrated loops with few-qubit gadgets exhibiting a tunneling event--a combination that we find to promote the presence of tunneling energy barriers in the relevant semiclassical energy landscape of the full problem--and we observe an optimal annealing time using a D-Wave 2000Q processor over a range spanning up to more than 2000 qubits. We identify the gadgets as being responsible for the optimal annealing time, whose existence allows us to perform an optimal time-to-solution benchmarking analysis. We perform a comparison to several classical algorithms, including simulated annealing, spin-vector Monte Carlo, and discrete-time simulated quantum annealing (SQA), and establish the first example of a scaling advantage for an experimental quantum annealer over classical simulated annealing. Namely, we find that the D-Wave device exhibits certifiably better scaling than simulated annealing, with 95% confidence, over the range of problem sizes that we can test. However, we do not find evidence for a quantum speedup: SQA exhibits the best scaling for annealing algorithms by a significant margin.
This is a finding of independent interest, since we associate SQA's advantage with its ability to traverse energy barriers in the semiclassical energy landscape by mimicking tunneling. Our construction of instance classes with verifiably optimal annealing times opens up the possibility of generating many new such classes based on a similar principle of promoting the presence of energy barriers that can be overcome more efficiently using quantum rather than thermal fluctuations, paving the way for further definitive assessments of scaling advantages using current and future quantum annealing devices.
We describe a quantum trajectories technique for the unraveling of the quantum adiabatic master equation in Lindblad form. By evolving a complex state vector of dimension N instead of a complex density matrix of dimension N^{2}, simulations of larger system sizes become feasible. The cost of running many trajectories, which is required to recover the master equation evolution, can be minimized by running the trajectories in parallel, making this method suitable for high performance computing clusters. In general, the trajectories method can provide up to a factor N advantage over directly solving the master equation. In special cases where only the expectation values of certain observables are desired, an advantage of up to a factor N^{2} is possible. We test the method by demonstrating agreement with direct solution of the quantum adiabatic master equation for 8-qubit quantum annealing examples. We also apply the quantum trajectories method to a 16-qubit example originally introduced to demonstrate the role of tunneling in quantum annealing, which is significantly more time consuming to solve directly using the master equation. The quantum trajectories method provides insight into individual quantum jump trajectories and their statistics, thus shedding light on open system quantum adiabatic evolution beyond the master equation.
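A single step of the jump unraveling described above can be sketched as follows. This is a first-order-in-dt pedagogical sketch under stated conventions, not the production implementation from the paper:

```python
import numpy as np

def trajectory_step(psi, H_eff, jump_ops, dt, rng):
    """One step of a quantum-jump unraveling of a Lindblad equation.

    H_eff = H - (i/2) * sum_k L_k^dag L_k is the non-Hermitian effective
    Hamiltonian; `jump_ops` are the Lindblad operators L_k.
    """
    # Non-Hermitian evolution (first order in dt); norm loss gives the
    # probability that no jump occurred during this step.
    phi = psi - 1j * dt * (H_eff @ psi)
    p_nojump = np.vdot(phi, phi).real
    if rng.random() < p_nojump:
        return phi / np.sqrt(p_nojump)  # no jump: renormalize
    # A jump occurred: choose which operator fires, weighted by
    # <psi| L_k^dag L_k |psi>, then apply and renormalize.
    weights = np.array([np.vdot(L @ psi, L @ psi).real for L in jump_ops])
    k = rng.choice(len(jump_ops), p=weights / weights.sum())
    out = jump_ops[k] @ psi
    return out / np.linalg.norm(out)
```

Each trajectory evolves only an N-dimensional state vector, and averaging the projectors |psi><psi| over many independent trajectories recovers the density-matrix evolution, which is the source of the memory advantage and the parallelism noted above.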
Adiabatic quantum computing (AQC) started as an approach to solving optimization problems, and has evolved into an important universal alternative to the standard circuit model of quantum computing, with deep connections to both classical and quantum complexity theory and condensed matter physics. In this review we give an account of most of the major theoretical developments in the field, while focusing on the closed-system setting. The review is organized around a series of topics that are essential to an understanding of the underlying principles of AQC, its algorithmic accomplishments and limitations, and its scope in the more general setting of computational complexity theory. We present several variants of the adiabatic theorem, the cornerstone of AQC, and we give examples of explicit AQC algorithms that exhibit a quantum speedup. We give an overview of several proofs of the universality of AQC and related Hamiltonian quantum complexity theory. We finally devote considerable space to Stoquastic AQC, the setting of most AQC work to date, where we discuss obstructions to success and their possible resolutions.
Closed-system quantum annealing is expected to sometimes fail spectacularly in solving simple problems for which the gap becomes exponentially small in the problem size. Much less is known about whether this gap scaling also impedes open-system quantum annealing. Here, we study the performance of a quantum annealing processor in solving such a problem: a ferromagnetic chain with sectors of alternating coupling strength that is classically trivial but exhibits an exponentially decreasing gap in the sector size. The gap is several orders of magnitude smaller than the device temperature. Contrary to the closed-system expectation, the success probability rises for sufficiently large sector sizes. The success probability is strongly correlated with the number of thermally accessible excited states at the critical point. We demonstrate that this behavior is consistent with a quantum open-system description that is unrelated to thermal relaxation, and is instead dominated by the system's properties at the critical point.
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a novel decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from 'fully-quantum' to 'fully-classical', in contrast to many existing methods. We demonstrate the advantages of the technique by comparing it against existing schemes. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Physical implementations of quantum annealing unavoidably operate at finite temperatures. We point to a fundamental limitation of fixed finite temperature quantum annealers that prevents them from functioning as competitive scalable optimizers and show that to serve as optimizers annealer temperatures must be appropriately scaled down with problem size. We derive a temperature scaling law dictating that temperature must drop at the very least in a logarithmic manner but also possibly as a power law with problem size. We corroborate our results by experiment and simulations and discuss the implications of these results for practical annealers.
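The intuition behind such a scaling law can be illustrated with a two-level caricature (a toy model of this sketch, not the paper's derivation): if `n_excited` low-lying states sit a gap `delta` above a unique ground state, keeping the equilibrium ground-state occupation at a fixed target forces the temperature down as `n_excited` grows with problem size.

```python
import math

def required_temperature(delta, n_excited, p_target):
    """Temperature at which the equilibrium ground-state occupation of a
    two-level caricature (one ground state, `n_excited` excited states
    at gap `delta`) equals `p_target`:

        p_0 = 1 / (1 + n_excited * exp(-delta / T))

    solved for T (requires 0 < p_target < 1 and the bracket below < 1).
    """
    x = (1.0 / p_target - 1.0) / n_excited
    return -delta / math.log(x)
```

Since the solution behaves as T ~ delta / log(n_excited), a number of competing excited states that grows with problem size drives the required temperature down at least logarithmically, matching the flavor of the scaling law above.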
Adiabatic preparation of the ground states of many-body Hamiltonians in the closed-system limit is at the heart of adiabatic quantum computation, but in reality systems are always open. This motivates a natural comparison between, on the one hand, adiabatic preparation of steady states of Lindbladian generators and, on the other hand, relaxation towards the same steady states subject to the final Lindbladian of the adiabatic process. In this work we thus adopt the perspective that the goal is the most efficient possible preparation of such steady states, rather than ground states. Using known rigorous bounds for the open-system adiabatic theorem and for mixing times, we are then led to a disturbing conclusion that at first appears to doom efforts to build physical quantum annealers: relaxation seems to always converge faster than adiabatic preparation. However, by carefully estimating the adiabatic preparation time for Lindbladians describing thermalization in the low-temperature limit, we show that there is, after all, room for an adiabatic speedup over relaxation. To test the analytically derived bounds for the adiabatic preparation time and the relaxation time, we numerically study three models: a dissipative quasifree fermionic chain, a single qubit coupled to a thermal bath, and the “spike” problem of n qubits coupled to a thermal bath. Via these models we find that the answer to the “which wins” question depends for each model on the temperature and the system-bath coupling strength. In the case of the “spike” problem we find that relaxation during the adiabatic evolution plays an important role in ensuring a speedup over the final-time relaxation procedure. Thus, relaxation-assisted adiabatic preparation can be more efficient than both pure adiabatic evolution and pure relaxation.
The performance of open-system quantum annealing is adversely affected by thermal excitations out of the ground state. While the presence of energy gaps between the ground and excited states suppresses such excitations, error correction techniques are required to ensure full scalability of quantum annealing. Quantum annealing correction (QAC) is a method that aims to improve the performance of quantum annealers when control over only the problem (final) Hamiltonian is possible, along with decoding. Building on our earlier work [S. Matsuura et al., Phys. Rev. Lett. 116, 220501 (2016)], we study QAC using analytical tools of statistical physics by considering the effects of temperature and a transverse field on the penalty qubits in the ferromagnetic p-body infinite-range transverse-field Ising model. We analyze the effect of QAC on second (p=2) and first (p >=3) order phase transitions, and construct the phase diagram as a function of temperature and penalty strength. Our analysis reveals that for sufficiently low temperatures and in the absence of a transverse field on the penalty qubit, QAC breaks up a single, large free energy barrier into multiple smaller ones. We find theoretical evidence for an optimal penalty strength in the case of a transverse field on the penalty qubit, a feature observed in QAC experiments. Our results provide further compelling evidence that QAC provides an advantage over unencoded quantum annealing.
We present a general error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. Given any Ising model optimization problem, the encoding replaces each logical qubit by a complete graph of degree C, representing the distance of the error-correcting code. A subsequent minor-embedding step then implements the encoding on the underlying hardware graph of the quantum annealer. We demonstrate experimentally that the performance of a D-Wave Two quantum annealing device improves as C grows. We show that the performance improvement can be interpreted as arising from an effective increase in the energy scale of the problem Hamiltonian, or equivalently, an effective reduction in the temperature at which the device operates. The number C thus allows us to control the amount of protection against thermal and control errors, and in particular, to trade qubits for a lower effective temperature that scales as C^{-η}, with η ≤ 2. This effective temperature reduction is an important step towards scalable quantum annealing.
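The encoding and decoding steps described above can be sketched as follows. This is an illustrative reconstruction, not the device-level implementation: the dictionary representation of the Ising problem, the choice to place the full logical field on every copy, the copy-wise distribution of logical couplings, and the simple majority-vote decoder are all assumptions made for the sketch.

```python
import itertools

def encode_ising(h, J, C, penalty=1.0):
    """Replace each logical qubit i by C physical copies (i, 0)..(i, C-1),
    coupled ferromagnetically in a complete graph (the penalty terms),
    and distribute the logical fields and couplings over the copies."""
    enc_h, enc_J = {}, {}
    for i, hi in h.items():
        for c in range(C):
            enc_h[(i, c)] = hi  # each copy carries the logical field
        for a, b in itertools.combinations(range(C), 2):
            enc_J[((i, a), (i, b))] = -penalty  # ferromagnetic penalty
    for (i, j), Jij in J.items():
        for c in range(C):
            enc_J[((i, c), (j, c))] = Jij  # copy-wise logical coupling
    return enc_h, enc_J

def decode(spins, n, C):
    """Majority vote over the C copies of each logical qubit."""
    return [1 if sum(spins[(i, c)] for c in range(C)) > 0 else -1
            for i in range(n)]
```

The penalty strength sets the energy cost of broken agreement among the copies; in hardware it is bounded by the available coupler range, which is one reason the effective temperature reduction saturates.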
Quantum annealing aims to exploit quantum mechanics to speed up the search for the solution to optimization problems. Most problems exhibit complete connectivity between the logical spin variables after they are mapped to the Ising spin Hamiltonian of quantum annealing. To account for hardware constraints of current and future physical quantum annealers, methods enabling the embedding of fully connected graphs of logical spins into a constant-degree graph of physical spins are therefore essential. Here, we compare the recently proposed embedding scheme for quantum annealing with all-to-all connectivity due to Lechner, Hauke and Zoller (LHZ) [Science Advances 1 (2015)] to the commonly used minor embedding (ME) scheme. Using both simulated quantum annealing and parallel tempering simulations, we find that for a set of instances randomly chosen from a class of fully connected, random Ising problems, the ME scheme outperforms the LHZ scheme when using identical simulation parameters, despite the fault tolerance of the latter to weakly correlated spin-flip noise. This result persists even after we introduce several decoding strategies for the LHZ scheme, including a minimum-weight decoding algorithm that results in substantially improved performance over the original LHZ scheme. We explain the better performance of the ME scheme in terms of more efficient spin updates, which allows it to better tolerate the correlated spin-flip errors that arise in our model of quantum annealing. Our results leave open the question of whether the performance of the two embedding schemes can be improved using scheme-specific parameters and new error correction approaches.
Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming-weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semi-classical potential arising from the spin-coherent path integral formalism. We then provide an example where the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided level-crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it provides an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin vector dynamics, is at least as efficient as diabatic QA. Finally, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin vector dynamics.
Quantum annealing correction (QAC) is a method that combines encoding with energy penalties and decoding to suppress and correct errors that degrade the performance of quantum annealers in solving optimization problems. While QAC has been experimentally demonstrated to successfully error-correct a range of optimization problems, a clear understanding of its operating mechanism has been lacking. Here we bridge this gap using tools from quantum statistical mechanics. We study analytically tractable models using a mean-field analysis, specifically the p-body ferromagnetic infinite-range transverse-field Ising model as well as the quantum Hopfield model. We demonstrate that for p = 2, where the phase transition is of second order, QAC pushes the transition to increasingly larger transverse field strengths. For p ≥ 3, where the phase transition is of first order, QAC softens the closing of the gap for small energy penalty values and prevents its closure for sufficiently large energy penalty values. Thus QAC provides protection from excitations that occur near the quantum critical point. We find similar results for the Hopfield model, thus demonstrating that our conclusions hold in the presence of disorder.
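For orientation, the unencoded mean-field starting point of such an analysis is standard. In the spin-coherent (semiclassical) limit, the p-body infinite-range transverse-field Ising model has energy per spin (our notation; Γ is the transverse field and θ the polar angle of the coherent state; the penalty and encoding terms of the QAC analysis are not shown):

```latex
\varepsilon(\theta) \;=\; -\cos^{p}\theta \;-\; \Gamma \sin\theta .
```

For p = 2 the minimum of this potential moves continuously as Γ is varied (a second-order transition), while for p ≥ 3 two competing minima separated by a barrier exchange stability (a first-order transition), which is the setting in which QAC's softening of the gap closure matters most.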
We provide a rigorous generalization of the quantum adiabatic theorem for open systems described by a Markovian master equation with time-dependent Liouvillian L(t). We focus on the finite system case relevant for adiabatic quantum computing and quantum annealing. Adiabaticity is defined in terms of closeness to the instantaneous steady state. While the general result is conceptually similar to the closed system case, there are important differences. Namely, a system initialized in the zero-eigenvalue eigenspace of L(t) will remain in this eigenspace with a deviation that is inversely proportional to the total evolution time T. In the case of a finite number of level crossings the scaling becomes T^{-η} with an exponent η that we relate to the rate of the gap closing. For master equations that describe relaxation to thermal equilibrium, we show that the evolution time T should be long compared to the corresponding minimum inverse gap squared of L(t). Our results are illustrated with several examples.
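Schematically, and in our own notation rather than the paper's, the statement takes the following form, with ρ_ss(s) the instantaneous steady state at rescaled time s = t/T:

```latex
\left\lVert \rho(s) - \rho_{\mathrm{ss}}(s) \right\rVert \;=\; O\!\left(T^{-1}\right),
\qquad \text{degrading to}\quad O\!\left(T^{-\eta}\right)
\quad \text{at isolated Liouvillian level crossings,}
```

with η determined by how fast the Liouvillian gap closes at the crossing.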
Quantum annealing is a promising approach for solving optimization problems, but like all other quantum information processing methods, it requires error correction to ensure scalability. In this work, we experimentally compare two quantum annealing correction (QAC) codes in the setting of antiferromagnetic chains, using two different quantum annealing processors. The lower-temperature processor gives rise to higher success probabilities. The two codes differ in a number of interesting and important ways, but both require four physical qubits per encoded qubit. We find significant performance differences, which we explain in terms of the effective energy boost provided by the respective redundantly encoded logical operators of the two codes. The code with the higher energy boost results in improved performance, at the expense of a lower-degree encoded graph. Therefore, we find that there exists an important trade-off between encoded connectivity and performance for quantum annealing correction codes.
A recent experiment [T. Lanting et al., Phys. Rev. X 4, 021041 (2014)] claimed to provide evidence of up to eight-qubit entanglement in a D-Wave quantum annealing device. However, entanglement was measured using qubit tunneling spectroscopy, a technique that provides indirect access to the state of the system at intermediate times during the anneal by performing measurements at the end of the anneal with a probe qubit. In addition, an underlying assumption was that the quantum transverse-field Ising Hamiltonian, whose ground states are already highly entangled, is an appropriate model of the device and not some other (possibly classical) model. This begs the question of whether alternative classical or semiclassical models would be equally effective at predicting the observed spectrum and thermal state populations. To check this, we consider a recently proposed classical rotor model with classical Monte Carlo updates, which has been successfully employed in describing features of earlier experiments involving the device. We also consider simulated quantum annealing with quantum Monte Carlo updates, an algorithm that samples from the instantaneous Gibbs state of the device Hamiltonian. Finally, we use the quantum adiabatic master equation, which cannot be efficiently simulated classically and which has previously been used to successfully capture the open-system quantum dynamics of the device. We find that only the master equation is able to reproduce the features of the tunneling spectroscopy experiment, while both the classical rotor model and simulated quantum annealing fail to reproduce the experimental results. We argue that this bolsters the evidence for the reported entanglement.
The availability of quantum annealing devices with hundreds of qubits has made the experimental demonstration of a quantum speedup for optimization problems a coveted, albeit elusive goal. Going beyond earlier studies of random Ising problems, here we introduce a method to construct a set of frustrated Ising-model optimization problems with tunable hardness. We study the performance of a D-Wave Two device (DW2) with up to 503 qubits on these problems and compare it to a suite of classical algorithms, including a highly optimized algorithm designed to compete directly with the DW2. The problems are generated around predetermined ground-state configurations, called planted solutions, which makes them particularly suitable for benchmarking purposes. The problem set exhibits properties familiar from constraint satisfaction (SAT) problems, such as a peak in the typical hardness of the problems, determined by a tunable clause density parameter. We bound the hardness regime where the DW2 device either does not or might exhibit a quantum speedup for our problem set. While we do not find evidence for a speedup for the hardest and most frustrated problems in our problem set, we cannot rule out that a speedup might exist for some of the easier, less frustrated problems. Our empirical findings pertain to the specific D-Wave processor and problem set we studied and leave open the possibility that future processors might exhibit a quantum speedup on the same problem set.
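One generic way to plant a known ground state, sketched below for illustration only (the paper's specific frustrated-clause construction and its tunable clause density are not reproduced here), is to sum small clause terms each of which is separately minimized by the planted configuration:

```python
import random

def plant_instance(n, num_clauses, clause_size=3, seed=0):
    """Generate Ising couplings J such that a chosen 'planted' spin
    configuration minimizes every clause term, and hence the total
    energy. All contributions to a given coupling share the same sign,
    so the planted state attains the per-term minimum everywhere."""
    rng = random.Random(seed)
    planted = [rng.choice([-1, 1]) for _ in range(n)]
    J = {}
    for _ in range(num_clauses):
        qubits = rng.sample(range(n), clause_size)
        for a, b in zip(qubits, qubits[1:]):
            # Sign chosen so the planted spins satisfy this coupling:
            # the term J_ab * s_a * s_b evaluates to -1 on the planted state.
            J[(a, b)] = J.get((a, b), 0.0) - planted[a] * planted[b]
    return planted, J

def energy(spins, J):
    """Ising energy of a configuration under couplings J."""
    return sum(Jab * spins[a] * spins[b] for (a, b), Jab in J.items())
```

Because the planted state minimizes every clause simultaneously, its energy is a certified global minimum, which is what makes such instances convenient for benchmarking heuristic solvers.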
We study the behavior of the entanglement entropy in (2 + 1)-dimensional strongly coupled theories via the AdS/CFT correspondence. We consider theories at a finite charge density with a magnetic field, with their holographic dual being Einstein-Maxwell-Dilaton theory in four dimensional anti-de Sitter gravity. Restricting to black hole and electron star solutions at zero temperature in the presence of a background magnetic field, we compute their holographic entanglement entropy using the Ryu-Takayanagi prescription for both strip and disk geometries. In the case of the electric or magnetic zero temperature black holes, we are able to confirm that the entanglement entropy is invariant under electric-magnetic duality. In the case of the electron star with a finite magnetic field, for the strip geometry, we find a discontinuity in the first derivative of the entanglement entropy as the strip width is increased.
Recent experiments with increasingly larger numbers of qubits have sparked renewed interest in adiabatic quantum computation, and in particular quantum annealing. A central question that is repeatedly asked is whether quantum features of the evolution can survive over the long time-scales used for quantum annealing relative to standard measures of the decoherence time. We reconsider the role of decoherence in adiabatic quantum computation and quantum annealing using the adiabatic quantum master equation formalism. We restrict ourselves to the weak-coupling and singular-coupling limits, which correspond to decoherence in the energy eigenbasis and in the computational basis, respectively. We demonstrate that decoherence in the instantaneous energy eigenbasis does not necessarily detrimentally affect adiabatic quantum computation, and in particular that a short single-qubit T_{2} time need not imply adverse consequences for the success of the quantum adiabatic algorithm. We further demonstrate that boundary cancellation methods, designed to improve the fidelity of adiabatic quantum computing in the closed system setting, remain beneficial in the open system setting. To address the high computational cost of master equation simulations, we also demonstrate that a quantum Monte Carlo algorithm that explicitly accounts for a thermal bosonic bath can be used to interpolate between classical and quantum annealing. Our study highlights and clarifies the significantly different role played by decoherence in the adiabatic and circuit models of quantum computing.
Recently the question of whether the D-Wave processors exhibit large-scale quantum behavior or can be described by a classical model has attracted significant interest. In this work we address this question by studying a 503 qubit D-Wave Two device in the "black box" model, i.e., by studying its input-output behavior. Our work generalizes an approach introduced in Boixo et al. [Nat. Commun. 4, 2067 (2013)], and uses groups of up to 20 qubits to realize a transverse Ising model evolution with a ground state degeneracy whose distribution acts as a sensitive probe that distinguishes classical and quantum models for the D-Wave device. Our findings rule out all classical models proposed to date for the device and provide evidence that an open system quantum dynamical description of the device that starts from a quantized energy level structure is well justified, even in the presence of relevant thermal excitations and a small value of the ratio of the single-qubit decoherence time to the annealing time.
We demonstrate that the performance of a quantum annealer on hard random Ising optimization problems can be substantially improved using quantum annealing correction (QAC). Our error correction strategy is tailored to the D-Wave Two device. We find that QAC provides a statistically significant enhancement in the performance of the device over a classical repetition code, improving as a function of problem size as well as hardness. Moreover, QAC provides a mechanism for overcoming the precision limit of the device, in addition to correcting calibration errors. Performance is robust even to missing qubits. We present evidence for a constructive role played by quantum effects in our experiments by contrasting the experimental results with the predictions of a classical model of the device. Our work demonstrates the importance of error correction in appropriately determining the performance of quantum annealers.
We revisit the evidence for quantum annealing in the D-Wave One device (DW1) based on the study of random Ising instances. Using the probability distributions of finding the ground states of such instances, previous work found agreement with both simulated quantum annealing (SQA) and a classical rotor model. Thus the DW1 ground state success probabilities are consistent with both models, and a different measure is needed to distinguish the data and the models. Here we consider measures that account for ground state degeneracy and the distributions of excited states, and present evidence that for these new measures neither SQA nor the classical rotor model correlate perfectly with the DW1 experiments. We thus provide evidence that SQA and the classical rotor model, both of which are classically efficient algorithms, do not satisfactorily explain all the DW1 data. A complete model for the DW1 remains an open problem. Using the same criteria we find that, on the other hand, SQA and the classical rotor model correlate closely with each other. To explain this we show that the rotor model can be derived as the semiclassical limit of the spin-coherent states path integral. We also find differences in which set of ground states is found by each method, though this feature is sensitive to calibration errors of the DW1 device and to simulation parameters.
We study the unitary time evolution of photons interacting with a dielectric resonator using coherent control pulses. We show that non-Markovianity of transient photon dynamics in the resonator subsystem may be controlled to within a photon-resonator transit time. In general, appropriate use of coherent pulses and choice of spatial subregion may be used to create and control a wide range of non-Markovian transient dynamics in photon-resonator systems.
Quantum information processing offers dramatic speedups, yet is famously susceptible to decoherence, the process whereby quantum superpositions decay into mutually exclusive classical alternatives, thus robbing quantum computers of their power. This has made the development of quantum error correction an essential and inescapable aspect of both theoretical and experimental quantum computing. So far little is known about protection against decoherence in the context of quantum annealing, a computational paradigm which aims to exploit ground state quantum dynamics to solve optimization problems more rapidly than is possible classically. Here we develop error correction for quantum annealing and provide an experimental demonstration using up to 344 superconducting flux qubits in processors which have recently been shown to physically implement programmable quantum annealing. We demonstrate a substantial improvement over the performance of the processors in the absence of error correction. These results pave a path toward large scale noise-protected adiabatic quantum optimization devices.
We present fluctuation theorems and moment generating function equalities for generalized thermodynamic observables and quantum dynamics described by completely positive trace preserving (CPTP) maps, with and without feedback control. Our results include the quantum Jarzynski equality and Crooks fluctuation theorem, and clarify the special role played by the thermodynamic work and thermal equilibrium states in previous studies. We show that for a specific class of generalized measurements, which include projective measurements, unitality replaces microreversibility as the condition for the physicality of the reverse process in our fluctuation theorems. We present an experimental application of our theory to the problem of extracting the system-bath coupling magnitude, which we do for a system of pairs of coupled superconducting flux qubits undergoing quantum annealing.
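For reference, the two fluctuation relations recovered as special cases read, in their standard forms (β the inverse temperature, W the work, ΔF the equilibrium free-energy difference, and P_F, P_R the forward and reverse work distributions):

```latex
\langle e^{-\beta W} \rangle \;=\; e^{-\beta \Delta F}
\qquad \text{(Jarzynski)},
\qquad\qquad
\frac{P_{F}(W)}{P_{R}(-W)} \;=\; e^{\beta (W - \Delta F)}
\qquad \text{(Crooks)}.
```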
We present a first-principles derivation of the Markovian semi-group master equation without invoking the rotating wave approximation (RWA). Instead we use a time coarse-graining approach which leaves us with a free timescale parameter, which we can optimize. Comparing this approach to the standard RWA-based Markovian master equation, we find that significantly better agreement is possible using the coarse-graining approach, for a three-level model coupled to a bath of oscillators, whose exact dynamics we can solve for at zero temperature. The model has the important feature that the RWA has a non-trivial effect on the dynamics of the populations. We show that the two different master equations can exhibit strong qualitative differences for the population of the energy eigenstates even for such a simple model. The RWA-based master equation misses an important feature which the coarse-graining based scheme does not. By optimizing the coarse-graining timescale the latter scheme can be made to approach the exact solution much more closely than the RWA-based master equation.
Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution. Both analytical and numerical evidence suggests that under idealized, closed system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing. Current engineered quantum annealing devices have a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. Do they effectively perform classical thermalization when coupled to a decohering thermal environment? Here we present an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization. Our experiment uses groups of eight superconducting flux qubits with programmable spin--spin couplings, embedded on a commercially available chip with >100 functional qubits. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.
Four dimensional gravity with a U(1) gauge field, coupled to various fields in asymptotically anti-de Sitter spacetime, provides a rich arena for the holographic study of the strongly coupled (2+1)-dimensional dynamics of finite density matter charged under a global U(1). As a first step in furthering the study of the properties of fractionalized and partially fractionalized degrees of freedom in the strongly coupled theory, we construct electron star solutions at zero temperature in the presence of a background magnetic field. We work in Einstein--Maxwell-dilaton theory. In all cases we construct, the magnetic source is cloaked by an event horizon. A key ingredient of our solutions is our observation that starting with the standard Landau level structure for the density of states, the electron star limits reduce the charge density and energy density to that of the free fermion result. Using this result we construct three types of solution: one has a star in the infrared with an electrically neutral horizon, another has a star that begins at an electrically charged event horizon, and another has the star begin a finite distance from an electrically charged horizon.
We discuss how integration of back action into coupled rate equations describing dynamical biophysical processes can lead to the identification of optimized structural features. This approach is applied to analyze neural receptor binding and function. In functional receptor studies, the influence of ligand binding to the receptor on free ligand concentration in the synaptic cleft is rarely considered, especially when the number of ligand molecules vastly exceeds the number of receptors. Here we evaluate the role of ligand binding/unbinding to the receptor on ligand concentration and the resulting change in receptor dynamics using the example of glutamate interaction with the AMPA receptor subtype of glutamate receptors. We find a significant difference for AMPA receptor-mediated current between the free diffusion case, where binding/unbinding is neglected, and the case when glutamate binding to AMPA receptors is taken into account for evaluating free ligand concentration. Furthermore, taking into account receptor binding/unbinding reveals new properties of the receptor/neurotransmitter system, and in particular, indicates the existence of an optimum receptor density profile with an optimal radius where the total charge and peak current are maximal, a property that cannot be captured by the free diffusion case. This may provide an explanation for the disposition of AMPA receptors and the synaptic geometry based on the optimization of the receptor-mediated current.
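A minimal sketch of the back-action idea follows. It is a deliberately simplified, well-mixed (no diffusion, no receptor geometry) two-state binding model with illustrative rate constants, not the paper's spatially resolved model: the free ligand pool is either held fixed (the "free diffusion" limit) or depleted by binding.

```python
def simulate(L0, R_tot, k_on, k_off, deplete, dt=1e-6, steps=20000):
    """Euler integration of the binding kinetics
        dB/dt = k_on * L * (R_tot - B) - k_off * B,
    where B is the bound-receptor concentration. With back action
    (deplete=True) the free ligand is L = L0 - B; in the
    'free diffusion' limit L stays pinned at L0."""
    B = 0.0
    for _ in range(steps):
        L = (L0 - B) if deplete else L0
        B += dt * (k_on * L * (R_tot - B) - k_off * B)
    return B

# When receptors are plentiful relative to ligand, depletion of the
# free pool substantially lowers the steady-state occupancy:
b_free = simulate(L0=1.0, R_tot=5.0, k_on=1e4, k_off=1e2, deplete=False)
b_back = simulate(L0=1.0, R_tot=5.0, k_on=1e4, k_off=1e2, deplete=True)
```

Even in this crude form, the bound fraction with depletion is capped by the available ligand, whereas the free-diffusion limit lets nearly all receptors bind, which illustrates why neglecting back action can qualitatively misstate receptor-mediated currents.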
We develop from first principles Markovian master equations suited for studying the time evolution of a system evolving adiabatically while coupled weakly to a thermal bath. We derive two sets of equations in the adiabatic limit, one using the rotating wave (secular) approximation that results in a master equation in Lindblad form, the other without the rotating wave approximation but not in Lindblad form. The two equations make markedly different predictions depending on whether or not the Lamb shift is included. Our analysis keeps track of the various time and energy scales associated with the various approximations we make, and thus allows for a systematic inclusion of higher order corrections, in particular beyond the adiabatic limit. We use our formalism to study the evolution of an Ising spin chain in a transverse field and coupled to a thermal bosonic bath, for which we identify four distinct evolution phases. While we do not expect this to be a generic feature, in one of these phases dissipation acts to increase the fidelity of the system state relative to the adiabatic ground state.
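A Lindblad-form master equation of the kind referred to above reduces, for a single qubit with a fixed Hamiltonian, to a simple thermalizing master equation. The sketch below integrates that minimal, time-independent case with illustrative rates; it is not the adiabatic, time-dependent construction derived in the text.

```python
import numpy as np

def lindblad_step(rho, H, Ls, dt):
    """One Euler step of drho/dt = -i[H, rho] + sum_k D[L_k](rho),
    with the dissipator D[L](rho) = L rho L^dag - (1/2){L^dag L, rho}."""
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * drho

# Qubit with H = (omega/2) sigma_z (ground state listed first) and
# thermal emission/absorption rates obeying detailed balance,
# gamma_up / gamma_down = exp(-beta * omega). All values illustrative.
omega, beta, gamma = 1.0, 2.0, 0.1
H = 0.5 * omega * np.diag([-1.0, 1.0]).astype(complex)
sigma_minus = np.array([[0, 1], [0, 0]], dtype=complex)  # |g><e|
Ls = [np.sqrt(gamma) * sigma_minus,                      # emission
      np.sqrt(gamma * np.exp(-beta * omega)) * sigma_minus.conj().T]

rho = np.diag([0.0, 1.0]).astype(complex)                # start excited
for _ in range(20000):
    rho = lindblad_step(rho, H, Ls, dt=0.01)
p_exc = rho[1, 1].real  # relaxes toward the Gibbs weight of |e>
```

The detailed-balance ratio between the two jump rates is what steers the fixed point to the thermal state; the adiabatic master equations in the text generalize this by letting H, the jump operators, and the rates all follow the instantaneous energy eigenbasis.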
We examine strain-induced quantized Landau levels in graphene. Specifically, arc-bend strains are found to cause nonuniform pseudomagnetic fields. Using an effective Dirac model which describes the low-energy physics around the nodal points, we show that several of the key qualitative properties of graphene in a strain-induced pseudomagnetic field are different compared to the case of an externally applied physical magnetic field. We discuss how using different strain strengths allows us to spatially separate the two components of the pseudospinor on the different sublattices of graphene. These results are checked against a tight-binding calculation on the graphene honeycomb lattice, which is found to exhibit all the features described. Furthermore, we find that introducing a Hubbard repulsion on the mean-field level induces a measurable polarization difference between the A and the B sublattices, which provides an independent experimental test of the theory presented here.
We present the results of our studies of the entanglement entropy of a superconducting system described holographically as a fully back-reacted gravity system, with a stable ground state. We use the holographic prescription for the entanglement entropy. We uncover the behavior of the entropy across the superconducting phase transition, showing the reorganization of the degrees of freedom of the system. We exhibit the behavior of the entanglement entropy from the superconducting transition all the way down to the ground state at T=0. In some cases, we also observe a novel transition in the entanglement entropy at intermediate temperatures, resulting from the detection of an additional length scale.
Using holography, we study the entanglement entropy of strongly coupled field theories perturbed by operators that trigger an RG flow from a conformal field theory in the ultraviolet (UV) to a new theory in the infrared (IR). The holographic duals of such flows involve a geometry that has the UV and IR regions separated by a transitional structure in the form of a domain wall. We address the question of how the geometric approach to computing the entanglement entropy organizes the field theory data, exposing key features as the change in degrees of freedom across the flow, how the domain wall acts as a UV region for the IR theory, and a new area law controlled by the domain wall. Using a simple but robust model we uncover this organization, and expect much of it to persist in a wide range of holographic RG flow examples. We test our formulae in two known examples of RG flow in 3+1 and 2+1 dimensions that connect non-trivial fixed points.
We study the evolution and scaling of the entanglement entropy after two types of quenches for a 2+1 field theory, using holographic techniques. We study a thermal quench, dual to the addition of a shell of uncharged matter to four dimensional Anti-de Sitter (AdS_{4}) spacetime, and study the subsequent formation of a Schwarzschild black hole. We also study an electromagnetic quench, dual to the addition of a shell of charged sources to AdS_{4}, following the subsequent formation of an extremal dyonic black hole. In these backgrounds we consider the entanglement entropy of two types of geometries, the infinite strip and the round disc, and find distinct behavior for each. Some of our findings naturally supply results analogous to observations made in the literature for lower dimensions, but we also uncover several new phenomena, such as (in some cases) a discontinuity in the time derivative of the entanglement entropy as it nears saturation, and for the electromagnetic quench, a logarithmic growth in the entanglement entropy with time for both the disc and strip, before settling to saturation.
We study the dynamics of quenched fundamental matter in N=2^{*} supersymmetric large N SU(N) Yang-Mills theory at zero temperature. Our tools for this study are probe D7-branes in the holographically dual N=2^{*} Pilch-Warner gravitational background. Previous work using D3-brane probes of this geometry has shown that it captures the physics of a special slice of the Coulomb branch moduli space of the gauge theory, where the N constituent D3-branes form a dense one dimensional locus known as the enhancon, located deep in the infrared. Our present work shows how this physics is supplemented by the physics of dynamical flavours, revealed by the D7-brane embeddings we find. The Pilch-Warner background introduces new divergences into the D7-branes free energy, which we are able to remove with a single counterterm. We find a family of D7-brane embeddings in the geometry and discuss their properties. We study the physics of the quark condensate, constituent quark mass, and part of the meson spectrum. Notably, there is a special zero mass embedding that ends on the enhancon, which shows that while the geometry acts repulsively on the D7-branes, it does not do so in a way that produces spontaneous chiral symmetry breaking.
We study the dynamics of quenched fundamental matter in N=2^{*} supersymmetric large N_{c} SU(N_{c}) Yang-Mills theory, extending our earlier work to finite temperature. We use probe D7-branes in the holographically dual thermalized generalization of the N=2^{*} Pilch-Warner gravitational background found by Buchel and Liu. Such a system provides an opportunity to study how key features of the dynamics are affected by being in a non-conformal setting where there is an intrinsic scale, set here by the mass, m_{H}, of a hypermultiplet. Such studies are motivated by connections to experimental studies of the quark-gluon plasma at RHIC and LHC, where the microscopic theory of the constituents, QCD, has a scale, Λ_{QCD}. We show that the binding energy of mesons in the N=2^{*} theory is increased in the presence of the scale m_{H}, and that subsequently the meson-melting temperature is higher than for the conformal case.
We study the effects of an external magnetic field on the properties of the quasiparticle spectrum of the class of 2+1 dimensional strongly coupled theories holographically dual to charged AdS4 black holes at zero temperature. We uncover several interesting features. At certain values of the magnetic field, there are multiple quasiparticle peaks representing a novel level structure of the associated Fermi surfaces. Furthermore, increasing magnetic field deforms the dispersion characteristics of the quasiparticle peaks from non-Landau toward Landau behaviour. At a certain value of the magnetic field, just at the onset of Landau-like behaviour of the Fermi liquid, the quasiparticles and Fermi surface disappear.
We further consider a probe fermion in a dyonic black hole background in anti-de Sitter spacetime, at zero temperature, comparing and contrasting two distinct classes of solution that have previously appeared in the literature. Each class has members labeled by an integer n, corresponding to the n-th Landau level for the fermion. Our interest is the study of the spectral function of the fermion, interpreting poles in it as indicative of quasiparticles associated with the edge of a Fermi surface in the holographically dual strongly coupled theory in a background magnetic field H at finite chemical potential. Using both analytical and numerical methods, we explicitly show how one class of solutions naturally leads to an infinite family of quasiparticle peaks, signaling the presence of a Fermi surface for each level n. We present some of the properties of these peaks, which fall into a well behaved pattern at large n, extracting the scaling of Fermi energy with n and H, as well as the dispersion of the quasiparticles.
We give a detailed account of the construction of non-trivial localized solutions in a 2+1 dimensional model of superconductors using a 3+1 dimensional gravitational dual theory of a black hole coupled to a scalar field. The solutions are found in the presence of a background magnetic field. We use numerical and analytic techniques to solve the full Maxwell-scalar equations of motion in the background geometry, finding condensate droplet solutions, and vortex solutions possessing a conserved winding number. These solutions and their properties, which we uncover, help shed light on key features of the (B,T) phase diagram.
In studying the dynamics of large N_{c}, SU(N_{c}) gauge theory at finite temperature with fundamental quark flavours in the quenched approximation, we observe a first order phase transition. A quark condensate forms at finite quark mass, and the value of the condensate varies smoothly with the quark mass for generic regions in parameter space. At a particular value of the quark mass, there is a finite discontinuity in the condensate's vacuum expectation value, corresponding to a first order phase transition. We study the gauge theory via its string dual formulation using the AdS/CFT conjecture, the string dual being the near-horizon geometry of N_{c} D3-branes at finite temperature, AdS_{5}-Schwarzschild ×S^{5}, probed by a D7-brane. The D7-brane has topology R^{4} ×S^{3} ×S^{1} and allowed solutions correspond to either the S^{3} or the S^{1} shrinking away in the interior of the geometry. The phase transition represents a jump between branches of solutions having these two distinct D-brane topologies. The transition also appears in the meson spectrum.
We study, using a gravity dual, the finite temperature dynamics of SU(N_{c}) gauge theory for large N_{c}, with fundamental quark flavours in a quenched approximation, in the presence of a fixed R-charge under a global R-current. We observe several notable phenomena. There is a first order phase transition where the quark condensate jumps discontinuously at finite quark mass, generalizing similar transitions seen at zero charge. Our tool in these studies is holography, the string dual of the gauge theory being the geometry of N_{c} spinning D3-branes at finite temperature, probed by a D7-brane.
Using a ten dimensional dual string background, we study aspects of the physics of finite temperature large N four dimensional SU(N) gauge theory, focusing on the dynamics of fundamental quarks in the presence of a background magnetic field. At vanishing temperature and magnetic field, the theory has N=2 supersymmetry, and the quarks are in hypermultiplet representations. In a previous study, similar techniques were used to show that the quark dynamics exhibit spontaneous chiral symmetry breaking. In the present work we begin by establishing the non-trivial phase structure that results from finite temperature. We observe, for example, that above the critical value of the field that generates a chiral condensate spontaneously, the meson melting transition disappears, leaving only a discrete spectrum of mesons at any temperature. We also compute several thermodynamic properties of the plasma.
We use a ten dimensional dual string background to study aspects of the physics of large N four dimensional SU(N) gauge theory, where its fundamental quarks are charged under a background electric field. The theory is N=2 supersymmetric for vanishing temperature and electric field. At zero temperature, we observe that the electric field induces a phase transition associated with the dissociation of the mesons into their constituent quarks. This is an analogue of an insulator-metal transition, since the system goes from being an insulator with zero current (in the applied field) to a conductor with free charge carriers (the quarks). At finite temperature this phenomenon persists, with the dissociation transition becoming subsumed into the more familiar meson melting transition. Here, the dissociation phenomenon reduces the critical melting temperature.
We study a system of a complex charged scalar coupled to a Reissner-Nordstrom black hole in 3+1 dimensional anti-de Sitter spacetime, neglecting back-reaction. With suitable boundary conditions, the cases of a neutral and purely electric black hole have been studied in various limits and were shown to yield key elements of superconductivity in the dual 2+1 dimensional field theory, forming a condensate below a critical temperature. By adding magnetic charge to the black hole, we immerse the superconductor into an external magnetic field. We show that a family of condensates can form and we examine their structure. For finite magnetic field, they are localized in one dimension with a profile that is exactly solvable, since it maps to the quantum harmonic oscillator. As the magnetic field increases, the condensate shrinks in size, which is reminiscent of the Meissner effect.
This file was generated by bibtex2html 1.99.