Quantum information processing with finite resources

© Juan Palomino

The main focus of practical quantum information is the creation, manipulation and detection of large-scale entangled quantum systems. An important prerequisite for applications of quantum technologies is reliable verification, i.e. we must be able to reliably test the functionality of quantum devices. Recent quantum experiments dealing with a large number of particles, such as optical-lattice simulations involving 10^3 - 10^4 particles, experiments with hundreds of trapped ions, or thousands of qubits in D-Wave systems, show the real potential for applications of quantum technologies in the near future. Nevertheless, in order to achieve this goal, we have to benchmark those systems. There are several typical questions one would like to answer:

Q1: Does the prepared quantum state contain entanglement?
Q2: How far is the actual preparation from a desired (target) state?
Q3: Is there an efficient way to do a quantum state tomography?

These three questions are the main objectives of the verification problem. There are several difficulties one faces when dealing with multi-particle entangled states. Take the example of entanglement verification. The typical detection procedure [GT09] involves the extraction of the mean value of a certain witness operator W = ∑_i W_i by measuring the means of the local observables W_i = A_1 ⊗ … ⊗ A_N (N is the size of the system). Therefore, in order to detect entanglement, one has to conduct different experiments (a different measurement setting for each W_i), each of which requires a large number of identically prepared copies (i.i.d. states) such that the sample averages are close to the true mean values. This is extremely hard to achieve when dealing with large entangled systems (when N is large), due to the lack of good control and manipulation and the very low rate at which copies of large correlated quantum systems are produced. In such cases, the conventional verification methods are not reliable, as they require large statistics. On top of this, one has to provide a reliable statistical analysis, which is highly non-trivial for large quantum systems (the post-processing of experimental data and the statistical modeling are very demanding due to the large number of parameters involved). Our main goal is to overcome these difficulties by employing methods and techniques from quantum information.
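To get a feeling for the statistics involved, Hoeffding's inequality gives the standard (i.i.d.) estimate of how many copies each measurement setting consumes. The sketch below is illustrative only; the numbers (10 settings, precision 0.05, 99% confidence per setting) are hypothetical and not taken from any particular experiment:

```python
import math

def copies_per_setting(eps, delta, a=-1.0, b=1.0):
    """Hoeffding bound: number of i.i.d. copies needed so that the sample
    average of a bounded observable (outcomes in [a, b]) deviates from its
    true mean by more than eps with probability at most delta."""
    return math.ceil(((b - a) ** 2 / (2 * eps ** 2)) * math.log(2 / delta))

# Hypothetical witness W = sum of 10 local settings, each local mean
# estimated to precision 0.05 with 99% confidence:
n_settings = 10
per_setting = copies_per_setting(eps=0.05, delta=0.01)  # 4239 copies
total_copies = n_settings * per_setting                 # 42390 copies
```

The quadratic blow-up in 1/eps is what makes witness estimation so costly for states produced at a very low rate.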

Probabilistic framework: verification as an information-processing task

Figure 1: Schematic representation of probabilistic entanglement detection [DD17]. A single copy of an N-partite quantum state is prepared. The sequence of measurement settings m_1, m_2, …, m_N is randomly drawn from the distribution Π(m_1, m_2, …, m_N). Each m_k is locally executed on the k-th subsystem and the set of outcomes (i_1, i_2, …, i_N) is obtained. The value of the binary cost function F[N](i_1, …, i_N) prescribes either "success" (F[N] = 1) or "failure" (F[N] = 0) to the experimental run.

In our recent work [DD17], we introduced a probabilistic framework for the detection of quantum resources (in particular quantum entanglement). There, the presence of a resource is seen as the ability of a quantum system to perform a certain information-processing task (see Fig. 1).

The main difference to conventional methods is that our focus is on a single experimental run, in which the system computes a certain binary cost function F that serves as a figure of merit for verification. This establishes a probabilistic verification through the probability of success Ps[F = 1]. The cost function is chosen with great care, such that Ps clearly separates quantum states having some property A (e.g. being entangled or "non-local") from those that do not.
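Schematically, each experimental run reduces to drawing a random setting and recording a binary success or failure. The toy Monte Carlo below abstracts the quantum device as a black box with a fixed single-run success probability (a hypothetical stand-in for the real measurement statistics, not the actual protocol of [DD17]); entanglement is claimed when the observed success frequency exceeds the best success probability achievable by separable states:

```python
import random

def run_protocol(success_prob, n_runs, separable_bound, seed=0):
    """Toy model of the probabilistic framework: each run yields a binary
    cost F in {0, 1}; here the device is abstracted as a biased coin with
    bias `success_prob` (a hypothetical stand-in for drawing a setting
    from the distribution and evaluating F on the local outcomes).
    Entanglement is claimed if the observed success frequency exceeds
    the separable-state bound."""
    rng = random.Random(seed)
    successes = sum(rng.random() < success_prob for _ in range(n_runs))
    return successes / n_runs > separable_bound
```

With `success_prob` well above `separable_bound`, the claim becomes reliable after modestly many runs; with it at or below the bound, the protocol (correctly) refuses to claim entanglement.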

In our subsequent work [Sag18], we applied these ideas to develop a general procedure for translating the standard detection methods (based on entanglement witnesses) into the probabilistic framework. Our verification protocol has the following key features:

a) detection with an exponentially growing confidence in the number of copies of the quantum state,

b) implementation via local measurements only, and

c) the procedure does not require the assumption of independent and identically distributed (i.i.d.) experimental runs.
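Feature a) can be illustrated with a simple counting argument: if every separable state passes a single run with probability at most p_sep, then it passes N runs with probability at most p_sep^N, so an all-successes record certifies entanglement with confidence at least 1 - p_sep^N. (For illustration the runs are treated as independent here, although, per feature c), the actual protocol does not rely on the i.i.d. assumption; the value p_sep = 0.5 below is hypothetical.)

```python
import math

def confidence(p_sep, n_copies):
    """Confidence after n_copies consecutive successful runs, given that
    any separable state succeeds per run with probability at most p_sep."""
    return 1.0 - p_sep ** n_copies

def copies_for_confidence(p_sep, target):
    """Smallest number of all-successful runs needed to certify
    entanglement with the target confidence level."""
    return math.ceil(math.log(1.0 - target) / math.log(p_sep))

confidence(0.5, 10)               # -> 0.9990234375
copies_for_confidence(0.5, 0.95)  # -> 5
```

The confidence approaches 1 exponentially fast in the number of copies, which is exactly why only a few copies suffice in practice.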

Single-copy entanglement detection

Surprisingly, in [DD17] we have shown that in many situations even a single copy of a quantum state (a single-shot experiment) suffices to reliably (with high probability) verify entanglement in large quantum systems (see phys.org/news/2018-02-fingerprints-quantum-entanglement.html for the media coverage). For example, a single copy of a 24-qubit linear cluster state suffices to verify entanglement with more than 95% confidence. These numbers demonstrate the relevance of our framework for the next generation of quantum experiments, e.g. those dealing with tens of qubits. Apart from the practical benefits, our result sheds new light on the operational meaning of physical quantities in a single-shot scenario.


[GT09] O. Gühne and G. Tóth, Entanglement detection, Physics Reports 474, 1 (2009).
[DD17] A. Dimić and B. Dakić, Single-copy entanglement detection, npj Quantum Information 4, 11 (2018).
[Sag18] V. Saggio, A. Dimić, C. Greganti, P. Walther, and B. Dakić, Experimental few-copy multi-particle entanglement detection, arXiv:1809.05455 (2018).