4  Observing probabilistic and measuring quantum systems

So far we have only talked about how to describe a probabilistic and a quantum system. We now look into observing or measuring those systems.

4.1 Observing a probabilistic system

Observing a probabilistic system is the process of learning the outcome from a probability distribution. If our probability distribution represents, for example, a coin flip, then observing this distribution is equivalent to actually flipping the coin. In the probabilistic case, an observation merely updates our knowledge or beliefs. This will be different in the quantum case.

Definition 4.1 (Observing a probabilistic system) Given a probability distribution \(d \in \mathbb{R}^n\), we will get the outcome \(i\) with a probability \(d_i\). The new distribution is then \[ e_i = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \leftarrow \text{1 at the $i$-th position} \]

The intuition for the new distribution is that, after observing the outcome \(i\), we know for sure that the system is in state \(i\): the distribution has become deterministic.
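As a small illustration, the following Python sketch implements Definition 4.1 with numpy (the function name `observe` and the variable names are ours, not part of the text): it samples an outcome \(i\) with probability \(d_i\) and returns that outcome together with the new deterministic distribution \(e_i\).

```python
import numpy as np

def observe(d, rng=np.random.default_rng()):
    """Observe a probability distribution d: outcome i occurs with probability d[i]."""
    i = rng.choice(len(d), p=d)     # sample the outcome
    e_i = np.zeros(len(d))
    e_i[i] = 1.0                    # new distribution: 1 at the i-th position
    return i, e_i

# Observing a fair coin flip, i.e. the distribution (1/2, 1/2):
outcome, new_d = observe(np.array([0.5, 0.5]))
print(outcome, new_d)
```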

When observing a probabilistic system, the observation is a purely passive process with no impact on the system. This means that the end result is the same whether or not we observe during the process. We take a look at an example to further understand this.

Example: Random 1-bit number

We use a random 1-bit number example similar to the random 2-bit example from Chapter 2. We have a distribution \(d_{\text{1-bit}} = \begin{pmatrix} \frac{1}{2} \\ \frac{1}{2} \end{pmatrix}\) which represents the probability distribution of generating a 1-bit number with equal probability. We also have a process \(A_{\text{flip}} = \begin{pmatrix} \frac{2}{3} & \frac{1}{3} \\[2pt] \frac{1}{3} & \frac{2}{3} \end{pmatrix}\) which flips the bit with a probability of \(\frac{1}{3}\).

We look at two different cases: in the first case, we observe only the final distribution; in the second case, we observe right after the generation of the 1-bit number and then also observe the final distribution.

Observing the final distribution

From Section 2.3 we know that the final distribution \(d\) is \[ d = A_{\text{flip}} \cdot d_{\text{1-bit}} = \begin{pmatrix} \frac{2}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{2}{3} \end{pmatrix} \begin{pmatrix} \frac{1}{2} \\ \frac{1}{2} \end{pmatrix} = \begin{pmatrix} \frac{1}{2} \\ \frac{1}{2} \end{pmatrix} \] Observing this distribution gives the outcome \(0\), with new distribution \(d = e_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\), with probability \(\Pr[0] = d_0 = \frac{1}{2}\), and the outcome \(1\), with new distribution \(d = e_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\), with probability \(\Pr[1] = d_1 = \frac{1}{2}\).
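As a quick numerical check (a sketch only; `d_1bit` and `A_flip` are our own names for the objects above):

```python
import numpy as np

d_1bit = np.array([0.5, 0.5])            # uniform random 1-bit number
A_flip = np.array([[2/3, 1/3],
                   [1/3, 2/3]])          # flips the bit with probability 1/3

d_final = A_flip @ d_1bit                # matrix-vector product as in Section 2.3
print(d_final)                           # [0.5 0.5]
```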

Observing after generation and the final distribution

We now observe the system right after the generation of the 1-bit number and also observe the final distribution. After the generation, we get the outcome \(0\), with new distribution \(d = e_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\), with probability \(\Pr[0] = d_0 = \frac{1}{2}\), and the outcome \(1\), with new distribution \(d = e_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\), with probability \(\Pr[1] = d_1 = \frac{1}{2}\).

We now apply the matrix \(A_\text{flip}\) in each case. For outcome \(0\) this gives the distribution \(A_\text{flip} \cdot \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \frac{2}{3} \\ \frac{1}{3} \end{pmatrix}\), and for outcome \(1\) the distribution \(A_\text{flip} \cdot \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{1}{3} \\ \frac{2}{3} \end{pmatrix}\). If we observe the distribution \(\begin{pmatrix} \frac{2}{3} \\ \frac{1}{3} \end{pmatrix}\), we get the outcome \(0\), with new distribution \(d = e_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\), with probability \(\Pr[0] = \frac{2}{3}\), and the outcome \(1\), with new distribution \(d = e_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\), with probability \(\Pr[1] = \frac{1}{3}\). If we observe the distribution \(\begin{pmatrix} \frac{1}{3} \\ \frac{2}{3} \end{pmatrix}\), we get the outcome \(0\), with new distribution \(d = e_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\), with probability \(\Pr[0] = \frac{1}{3}\), and the outcome \(1\), with new distribution \(d = e_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\), with probability \(\Pr[1] = \frac{2}{3}\).

Combining these probabilities, we get the total probability \(\Pr[0]=\frac{1}{2}\cdot\frac{2}{3} + \frac{1}{2}\cdot\frac{1}{3} = \frac{1}{2}\) for the outcome \(0\) and \(\Pr[1]=\frac{1}{2}\cdot\frac{1}{3} + \frac{1}{2}\cdot\frac{2}{3} = \frac{1}{2}\) for the outcome \(1\). This is the same as observing only the final distribution.
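The same bookkeeping can be done numerically. In the sketch below (reusing our variable names from the previous snippet), the marginal probabilities of the final observation come out identical whether or not we observe after the generation step:

```python
import numpy as np

d_1bit = np.array([0.5, 0.5])
A_flip = np.array([[2/3, 1/3], [1/3, 2/3]])

# Case 1: observe only the final distribution.
p_final_only = A_flip @ d_1bit

# Case 2: observe after generation (outcome i with probability d_1bit[i],
# distribution becomes e_i), apply A_flip, then average over the intermediate outcomes.
p_with_intermediate = sum(d_1bit[i] * (A_flip @ np.eye(2)[i]) for i in range(2))

print(p_final_only, p_with_intermediate)   # both [0.5 0.5]
```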

4.2 Measuring a quantum system

Unlike in the probabilistic case, "observing" a quantum system is called measuring it. The definition is similar to the observation of a probabilistic system, except that we take the absolute square of the amplitude to get the probability, and that the state after measuring is called the post-measurement state (p.m.s.).

Definition 4.2 (Measuring a quantum system) Given a quantum state \(\psi \in \mathbb{C}^n\), we will get the outcome \(i\) with probability \(|\psi_i|^2\). The post-measurement state (p.m.s.) is then \[ e_i = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \leftarrow \text{1 at the $i$-th position} \] This is called a complete measurement in the computational basis.
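A complete measurement in the computational basis can be sketched in code just like the probabilistic `observe` above, except that the probabilities are the absolute squares of the amplitudes (again, the names are illustrative and not from the text):

```python
import numpy as np

def measure(psi, rng=np.random.default_rng()):
    """Complete measurement in the computational basis."""
    probs = np.abs(psi) ** 2               # Born rule: Pr[i] = |psi_i|^2
    probs = probs / probs.sum()            # guard against rounding errors
    i = rng.choice(len(psi), p=probs)
    e_i = np.zeros(len(psi), dtype=complex)
    e_i[i] = 1.0                           # post-measurement state e_i
    return i, e_i
```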

Given this similarity to the probabilistic observation in the definition, one might assume that measuring a quantum state also has no impact on the system. This is not the case: measuring a quantum state changes the system! We can see this effect with an example:

Example: Measuring a quantum system

Let \(\psi = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix}\) be a quantum state and \(H=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\) be a unitary transformation. We look at two different cases: in the first case we apply \(H\) immediately and then measure the system; in the second case we measure before applying the unitary \(H\) and measure again after applying it.

Measure the final state

We first calculate the state after applying \(H\): \[ H\psi = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]

Measuring this state gives the outcome \(0\) with probability \(\Pr[0] = |(H\psi)_0|^2 = 1\), and the post-measurement state is \(\begin{pmatrix} 1 \\ 0 \end{pmatrix}\).
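Numerically (a sketch, with `H` and `psi` as our names for the matrix and the state above):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.array([1, 1]) / np.sqrt(2)

final = H @ psi
print(final)                 # [1. 0.] (up to rounding)
print(np.abs(final) ** 2)    # Pr[0] = 1, Pr[1] = 0
```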

Measure the initial and the final state

Measuring \(\psi\) before any unitary matrix is applied can give the outcome \(0\) or \(1\). We will look at the final measurement for each case:

The first measurement will have outcome \(0\) with probability \(\Pr[0] = |\psi_0|^2 = \frac{1}{2}\), and the post-measurement state will be \(\begin{pmatrix} 1 \\ 0 \end{pmatrix}\). Applying \(H\) to this post-measurement state gives \(H\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{pmatrix}\). When measuring this state, we get the outcome \(0\) with probability \(\Pr[0] = |\frac{1}{\sqrt{2}}|^2 = \frac{1}{2}\) and the outcome \(1\) with probability \(\Pr[1] = |\frac{1}{\sqrt{2}}|^2 = \frac{1}{2}\).

Likewise, the first measurement will have outcome \(1\) with probability \(\Pr[1] = |\psi_1|^2 = \frac{1}{2}\), and the post-measurement state will be \(\begin{pmatrix} 0 \\ 1 \end{pmatrix}\). Applying \(H\) to this post-measurement state gives \(H\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{pmatrix}\). When measuring this state, we get the outcome \(0\) with probability \(\Pr[0] = |\frac{1}{\sqrt{2}}|^2 = \frac{1}{2}\) and the outcome \(1\) with probability \(\Pr[1] = |-\frac{1}{\sqrt{2}}|^2 = \frac{1}{2}\).

So, independent of the outcome of the first measurement, the second measurement gives the outcomes \(0\) and \(1\) each with probability \(\frac{1}{2}\). This shows that measuring before applying \(H\) leads to different probabilities for the second measurement than measuring only at the end. This proves that measurements can change the system.
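The difference between the two procedures is visible directly in the numbers. The sketch below (same assumed names as above) computes the outcome probabilities of the last measurement for both cases: measuring only at the end gives outcome \(0\) with certainty, while measuring twice gives each outcome with probability \(\frac{1}{2}\).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.array([1, 1]) / np.sqrt(2)

# Case 1: apply H, then measure once.
p_measure_at_end = np.abs(H @ psi) ** 2

# Case 2: measure first (outcome i with probability |psi_i|^2, state collapses to e_i),
# then apply H and measure again; average over the intermediate outcomes.
p_measure_twice = sum(np.abs(psi[i]) ** 2 * np.abs(H @ np.eye(2)[i]) ** 2
                      for i in range(2))

print(p_measure_at_end, p_measure_twice)   # [1. 0.]  vs  [0.5 0.5]
```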

4.3 Elitzur–Vaidman bomb tester

This section will be updated later on.