## Trends in Quantum Information Processing

This week the scientific advisory board of the CQT is coming to town, so we’re having lots and lots of talks.

Right now Michele Mosca, deputy director of the Institute for Quantum Computing in Waterloo, Canada, is speaking about “Trends in Quantum Information Processing” from a computer science / mathematics perspective.

(Why mathematics? He says he will always consider himself a mathematician…)

#### Alice and Bob

Mosca began by noting some of the cultural / linguistic differences that become important in large interdisciplinary collaborations. For example, computer scientists like to talk about people named Alice and Bob playing bizarre games. These games are usually an attempt to distill some aspect of a hard problem into a simple story.

#### Impossibility

Mosca also made a remark on “impossibility”. Namely: proofs that things are impossible, often called “no-go theorems”, are very important, but we shouldn’t overinterpret them. Just because one approach to doing something doesn’t work, doesn’t mean we can’t do it! From an optimistic viewpoint, no-go theorems simply tell us what assumptions we need to get around. There is, however, a difference between optimism and insanity.

For example: “We can’t build a quantum computer because of decoherence…”

Knill, Laflamme and Milburn initially set out to prove that linear optics alone would not give a universal quantum computer… but then they discovered it could.

• E. Knill, R. Laflamme, and G. J. Milburn, A scheme for efficient quantum computation with linear optics, Nature 409 (2001), 46. Also see their paper Efficient linear optics quantum computation on the arXiv.

Another example: “Quantum black-box algorithms give at most a polynomial speedup for total functions…”

If this result (proved by whom?) had been discovered before Shor’s algorithm, it might have discouraged Shor from finding that algorithm. But his algorithm involves a partial function, so the no-go theorem doesn’t apply.

Yet another: “Nuclear magnetic resonance quantum computing is not scalable…”

So far each argument for this can be gotten around, though nobody knows how to get around all of them simultaneously. Attempts to do this have led to important discoveries.

#### From theory to practice

Next, Mosca showed a kind of flow chart. Abstract algorithms spawn various models of quantum computation: circuit models, measurement-based models, adiabatic models, topological models, continuous-time models, cellular automaton models, and so on. These spawn specific architectures for (so far hypothetical) quantum computers; a lot of work still needs to be done to develop fault-tolerant architectures. Then come specific physical realizations that might implement these architectures: involving trapped ions, photon qubits, superconducting circuit qubits, spin qubits and so on. We’re just at the beginning of a long path.

#### A diversity of quantum algorithms

There are still lots of people who say there are just two interesting quantum algorithms:

• Shor’s algorithm, for factoring integers in a time that’s polynomial in their number of digits, and

• Grover’s algorithm, for searching a list in a time that grows less than linearly with the number of items.
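(An illustrative aside of my own, not part of Mosca’s talk: Grover’s speedup is easy to watch in a toy statevector simulation. Here is a minimal numpy sketch; the list size, marked item, and iteration count are made-up parameters for the example.)

```python
import numpy as np

def grover_search(n_items: int, marked: int, n_iters: int) -> np.ndarray:
    """Toy statevector simulation of Grover's algorithm.

    Returns the probability distribution over items after n_iters
    Grover iterations, starting from the uniform superposition.
    """
    # Start in the uniform superposition over all items.
    psi = np.full(n_items, 1.0 / np.sqrt(n_items))
    s = psi.copy()  # kept for the diffusion (reflection) operator
    for _ in range(n_iters):
        # Oracle: flip the sign of the marked item's amplitude.
        psi[marked] *= -1.0
        # Diffusion: reflect about the uniform superposition,
        # psi -> 2 <s|psi> s - psi.
        psi = 2.0 * (s @ psi) * s - psi
    return psi ** 2

if __name__ == "__main__":
    N, marked = 8, 5
    # The optimal iteration count is about (pi/4) * sqrt(N),
    # which is where the sublinear scaling comes from.
    k = int(round(np.pi / 4 * np.sqrt(N)))
    probs = grover_search(N, marked, k)
    print(k, int(np.argmax(probs)))
```

After roughly √N iterations the marked item carries almost all the probability, whereas classically one expects to examine about N/2 items.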

Mosca said:

The next time you hear someone say this, punch ’em in the head!

For examples of many different quantum algorithms, see Stephen Jordan’s ‘zoo’, and this paper:

• Michele Mosca, Quantum algorithms.

Then Mosca discussed three big trends…

#### 1. Working with untrusted quantum apparatus

This trend started with:

• Artur Ekert, Quantum cryptography based on Bell’s theorem,
Phys. Rev. Lett. 67 (1991 Aug 5), 661-663.

Then:

• Dominic Mayers and Andrew Yao, Quantum cryptography with imperfect apparatus.

And then much more, thanks in large part to people at the CQT! The idea is that if you have a quantum system whose behavior you don’t entirely trust, you can still do useful stuff with it. This idea is related to the subject of multi-prover interactive proofs, though that subject is studied by a different community.

This year some of these ideas got implemented in an actual experiment:

• S. Pironio, A. Acin, S. Massar, A. Boyer de la Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, and C. Monroe, Random numbers certified by Bell’s theorem, Nature 464 (2010), 1021.

(Yay! Someone who had the guts to put a paper for Nature on the arXiv!)

#### 2. Ideas from topological quantum computing

The seeds here were planted in the late 1990s by:

• Michael Freedman, P/NP, and the quantum field computer,
Proc. Natl. Acad. Sci. USA 95 (1998 January 6), 98–101.

This suggested that a topological quantum computer could efficiently compute the Jones polynomial at certain points — a #P-hard problem. That would mean quantum computers can solve NP problems in polynomial time! But it turned out that limitations in precision prevent this. With a great sigh of relief, computer scientists decided they didn’t need to learn topological quantum field theory… but then came:

• A. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. 303 (2003), 2-30.

This sort of computer may or may not ever be built, but it’s very interesting either way. And in 2005, Raussendorf, Harrington and Goyal used ideas from topological quantum computing to build fault-tolerance into another approach to quantum computing, based on “cluster states”:

• Robert Raussendorf, Jim Harrington, and Kovid Goyal, Topological fault-tolerance in cluster state quantum computation, 2007.

Subsequently there’s been a huge amount of work along these lines. (Physics junkies should check out the Majorana fermion codes of Bravyi, Leemhuis and Terhal.)

#### 3. Semidefinite programming

The seeds here were planted about 10 years ago by Kitaev and Watrous. There’s been a wide range of applications since then. If you search quant-ph for abstracts containing the buzzword “semidefinite” you’ll get almost a hundred hits!

So what is semidefinite programming? It’s a relative of linear programming, an optimization method that’s widely used in microeconomics and the management of production, transportation and the like. I don’t understand it, but I guess it’s like a quantum version of linear programming!

I can parrot the definition:

A semidefinite program is a triple $(\Phi, a,b)$. Here $\Phi$ is a linear map that sends linear operators on some Hilbert space $X$ to linear operators on some Hilbert space $Y$. Such a linear map is called a superoperator, since it operates on operators! We assume $\Phi$ takes self-adjoint operators to self-adjoint operators. What about $a$ and $b$? They are self-adjoint operators on $X$ and $Y$, respectively.

This data gives us two problems:

Primal problem: find $x$, a positive semidefinite operator on $X$ that minimizes $\langle a,x \rangle$ subject to the constraint $\Phi(x) \ge b$.

Dual problem: find $y$, a positive semidefinite operator on $Y$ that maximizes $\langle b, y\rangle$ subject to the constraint $\Phi^*(y) \le a$.
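Here is a toy instance to make the primal/dual pair concrete (my own numpy sketch, not from the talk; taking $\Phi$ to be the identity superoperator and $a = I$ is an arbitrary simplifying assumption). In that case the primal problem minimizes $\mathrm{tr}(x)$ subject to $x \ge b$, the dual maximizes $\mathrm{tr}(by)$ subject to $y \le I$, and for positive semidefinite $b$ both optimal values equal $\mathrm{tr}(b)$:

```python
import numpy as np

def is_psd(m, tol=1e-9):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return float(np.min(np.linalg.eigvalsh(m))) >= -tol

def toy_sdp_gap(b, x, y):
    """Primal value tr(x) minus dual value tr(b y) for the toy instance
    with Phi = identity and a = I.  Weak duality says this is >= 0
    whenever x >= b, x >= 0 and 0 <= y <= I."""
    return float(np.trace(x) - np.trace(b @ y))

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 3))
b = g @ g.T  # a random positive semidefinite b

# Optimal primal point x* = b and optimal dual point y* = I:
# both are feasible, and the duality gap vanishes (strong duality).
assert is_psd(b)
assert abs(toy_sdp_gap(b, b, np.eye(3))) < 1e-9

# A generic feasible (non-optimal) pair has a nonnegative gap (weak duality).
p = rng.standard_normal((3, 3))
x_feas = b + p @ p.T  # x >= b and x >= 0
h = rng.standard_normal((3, 3))
h = h @ h.T
y_feas = h / (np.linalg.eigvalsh(h)[-1] + 1.0)  # eigenvalues in [0, 1)
assert toy_sdp_gap(b, x_feas, y_feas) >= 0.0
```

Weak duality (the primal value is at least the dual value for any feasible pair) holds for every semidefinite program; the coincidence of the two optimal values in this toy case is an instance of strong duality.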

This should remind you of duality in linear programming. In linear programming, the dual of the dual problem is the original ‘primal’ problem! Also, if you know the solution to either problem, you know the solution to both. Is this stuff true for semidefinite programming too?

Example: given a bunch of density matrices, find an optimal bunch of measurements that distinguish them.

Example: ‘Quantum query algorithms’ are a generalization of Grover’s search algorithm. Reichardt used semidefinite programming to show there is always an optimal algorithm that uses just 2 reflections:

• Ben W. Reichardt, Reflections for quantum query algorithms.

There are lots of other trends, of course! These are just a few. Mosca apologized to anyone whose important work wasn’t mentioned, and I apologize even more, since I left out a lot of what he said….

### 17 Responses to Trends in Quantum Information Processing

1. Bill Kaminsky says:

In regard to the boldfaced claim…

• Michael Freedman, P/NP, and the quantum field computer,
Proc. Natl. Acad. Sci. USA 95 (1998 January 6), 98–101.

This suggested that a topological quantum computer could efficiently compute the Jones polynomial at certain points — a #P-hard problem. That would mean quantum computers can solve NP problems in polynomial time! But it turned out that limitations in precision prevent this. With a great sigh of relief, computer scientists decided they didn’t need to learn topological quantum field theory…

… was this result ever published? Skimming through the titles and abstracts of the 26 papers that the Science Citation Index lists as citing the above paper, none obviously looked to me like proofs of unphysically stringent accuracy requirements for topological field theory quantum computers to solve #P hard problems.

Or, are you referring to the discussion of accuracy that Freedman himself makes in this article? (I ask because I always interpreted that as him saying it was an open question whether or not an implementable quantum computer could have adequate accuracy. Am I wrong?)

Thanks.

• John Baez says:

Unfortunately I am not the one who can answer your question; I was just reporting Mosca’s talk. It’s an interesting question. Let’s hope that someone reading this blog can answer it.

• John Baez says:

Michele said Freedman hadn’t made any mistake in his paper: he had merely left open the question whether an implementable topological quantum computer could have enough accuracy to solve #P-hard problems. Later it became clear that they do not. But Michele said he didn’t know whether this result had been published: he had only heard it from a colleague.

• Bill Kaminsky says:

Thank you very much John for cornering Michele and asking him!

Through my own research since asking the question, I have some further thoughts on the issue. Since I have a deep neurotic drive for completeness, I divide my comments into two versions:

=== THE SHORT VERSION ===

I’ve since come across the topological quantum computing lecture notes of one of Freedman’s frequent co-authors, Zhenghan Wang:

Zhenghan Wang, Topological Quantum Computation, CBMS Regional Conference Series in Mathematics, Number 112 [American Mathematical Society: Providence, RI, 2010].

Most of it is freely viewable* on Google Books, and most conveniently this free portion includes the entirety of Wang’s review of the matters pertinent to my question.

Chapter 3: Approximation of the Jones Polynomial

This chapter nicely summarizes the case that what the naive** type of topological quantum algorithm Freedman proposed in 1998 actually accomplishes is the approximation of the Jones polynomial to within an additive error ε in time O[poly(1/ε)]. Alas, as Freedman already laid out in his original 1998 proposal, one would need to produce such an approximation exponentially more quickly — that is, in time O[polylog(1/ε)] — in order to solve #P-hard problems in polynomial time. Of course, the O[poly(1/ε)] result is nothing to be sneezed at: it suffices to reproduce any polynomial time quantum computation — that is, it’s BQP-complete — and it’s superpolynomially faster than the best known classical algorithm.

Footnotes:

* Annoyingly, the freely viewable part does not include the bibliography. However, I reproduce the relevant portion of Wang’s bibliography below.

** I say “naive” since all one’s doing (!) is directly implementing or computationally simulating a physical system obeying a SU(2) Chern-Simons theory and determining the expectation values of certain system observables by just directly measuring those observables as accurately as one can. There’s no additional quantum dynamics/quantum information processing before measurement.

=== THE LONG VERSION ===

My fuller understanding of the chronology is as follows (NB: references follow Wang’s abbreviations):

*** 1998 ***

Freedman’s 1998 Proc. Nat. Acad. Sci. paper that you reference above [Fr1] made two key points:

i) From Witten, we know that the expectation values of certain observables of a physical system governed by a SU(2) Chern-Simons theory will be values of the Jones polynomial at nontrivial roots of unity. This fact immediately suggests an algorithm for approximating the Jones polynomial at these nontrivial roots. Namely, just build the system and then repeatedly measure these observables as accurately as one can. Now, this possibly is a naive algorithm since all the quantum “processing” is the system just evolving under its SU(2) Chern-Simons theory without any extra quantum dynamics/algorithmic processing being performed. But, who knows? Potentially, it’s optimal. In any case, analyzing this possibly naive algorithm is nontrivial in and of itself, so let’s start here.

ii) On that note, how accurate would your measurements have to be in order to solve the #P-hard problem of exactly evaluating the Jones polynomial at nontrivial roots? Freedman’s conclusion was you needed to measure with an accuracy threshold that shrinks *exponentially* in the size of the problem, though he phrased it in terms of needing a “linear” number of “reliable bits”. Specifically, he wrote:

Thus the number of reliable bits in each observation required to error correct to the exact values of J(ξ^a) [that is, the exact values of the Jones polynomial at integer powers a of the rth root of unity ξ = e^{2πi/r}] is linear in the size of the problem #K [that is, the number of crossings in the knot K to which this Jones polynomial J applies] … In other contexts, such as quantum computing, this scaling of accuracy requirements has been considered reasonable although a better goal for accuracy of individual measurements is poly(log K) bits or even a small constant number of bits. Because r ≈ 4#K observations are required to solve the linear system, a total of quadratically-many bits must be collected before all observational errors can be corrected.

Now, while the exponentially fine accuracy afforded by a linearly growing number of bits is a routine task in classical digital computing (e.g., the arbitrary precision arithmetic done by Lisp, Python, Mathematica, Maple, etc.), it’s a pretty grim omen for the odd hybrid of analog and digital computing that is quantum computing. But as Freedman notes, there are some quantum computing contexts where such exponential accuracy is achievable with polynomially-bounded memory and time resources. For example (and, of course, I’m not sure whether this is the example to which Freedman was alluding), the quantum phase estimation algorithm can measure the phase of an eigenvalue of a unitary operator down to n bits of accuracy with just O(n) qubits and O(n^2) time.

*** 2000 ***

Freedman and coworkers establish that topological quantum computation isn’t fundamentally more powerful than standard quantum computation (i.e., circuits with unitary operators for gates). Each can simulate the other with only polynomial overhead [FKW, FLW1]. Though there wasn’t yet an explicit argument for why the exponential accuracy required to solve #P problems exactly would be intractable, I have a feeling that this was one of the times that, as you put it John, “with a great sigh of relief, computer scientists decided they didn’t need to learn topological quantum field theory.”

*** 2001 ***

Freedman and coworkers present an explicit analysis [FKLW] of a topological quantum computer on which one can perform the naive algorithm of just implementing the system and measuring the relevant observables as accurately as possible. *Implicit* in the answer they get for the probability of a certain measurement outcome is that approximating the Jones polynomial at the 5th root of unity to within an additive error ε will require time of O[poly(1/ε)] — that is, if J(ξ) is the true value of the Jones polynomial at the 5th root of unity ξ = e^{2πi/5}, then producing an estimate Jest(ξ) ∈ [J(ξ) – ε, J(ξ) + ε] requires time O[poly(1/ε)]. As such, it’s *implicit* in [FKLW] that achieving the exponential accuracy demands set out in [Fr1] requires exponential time.

However, Freedman, et al. do *not* make this explicit (at least not in published work) for another 4 years.

*** 2005 ***

It’s at this point that it becomes explicit that efficient “additive approximation” of the Jones polynomial is really what this naive algorithm accomplishes and not efficient solution of #P-hard problems.

First, Freedman and coworkers sketch out the connection between topological quantum computing and “approximate counting” problems in general [BFLW].

And soon after, two researchers from the standard quantum computation community, Dorit Aharonov and Zeph Landau, along with Vaughan Jones himself [AJL], make it clear that in order to devise efficient quantum additive approximation algorithms for the Jones polynomial, you really don’t need to know any topological quantum field theory: just a little algebraic knot theory will do. Computer scientists definitely breathe a “great sigh of relief”.

*** 2006 and later ***

With topological quantum field theory no longer a necessary prerequisite, various researchers from the standard quantum computing community publish results on efficient quantum additive approximations of various knot polynomials [e.g., AAEL, SJ, WY].

And at least 2 people who aren’t obviously crackpots ;) hold out hope that a more advanced use of topological quantum resources (processing with a supposedly efficient non-Abelian quantum Fourier transform?) can produce better results for #P-hard problems [MR].

=== BIBLIOGRAPHY ===

[AAEL] Dorit Aharonov, Itai Arad, Elad Eban, and Zeph Landau, Polynomial quantum algorithms for additive approximations of the Potts model and other points of the Tutte plane http://arxiv.org/abs/quant-ph/0702008.

[AJL] Dorit Aharonov, Vaughn Jones, and Zeph Landau, A polynomial quantum algorithm for approximating the Jones Polynomial, Proc. 38th Ann. ACM Symp. on Theory of Computing (STOC ’06) 427-36, http://arxiv.org/abs/quant-ph/0511096.

[BFLW] Magnus Bordewich, Michael H. Freedman, Laszlo Lovasz, Dominic J.A. Welsh, Approximate counting and quantum computation, Combin. Probab. Comput. 14 (2005) 737-754, http://arxiv.org/abs/0908.2122.

[Fr1] Michael H. Freedman, P/NP, and the quantum field computer, Proc. Natl. Acad. Sci. USA 95 (January 6, 1998), 98–101, http://stationq.cnsi.ucsb.edu/~freedman/Publications/65.pdf.

[FKW] Michael H. Freedman, Alexei Kitaev, and Zhenghan Wang, Simulation of topological field theories by quantum computers, Commun. Math. Phys. 227 (June 2002) 587-603 http://arxiv.org/abs/quant-ph/0001071.

[FKLW] Michael H. Freedman, Alexei Kitaev, Michael J. Larsen, and Zhenghan Wang, Topological Quantum Computation, Bull. Amer. Math. Soc. (N.S.) 40 (January 2003) 31-38, http://arxiv.org/abs/quant-ph/0101025.

[FLW1] Michael H. Freedman, Michael J. Larsen, and Zhenghan Wang, A modular functor which is universal for quantum computation, Commun. Math. Phys. 227 (June 2002) 605-622, http://arxiv.org/abs/quant-ph/0001108.

[FLW2] Michael H. Freedman, Michael J. Larsen, and Zhenghan Wang, The two-eigenvalue problem and density of Jones representation of braid groups, Commun. Math. Phys. 228 (June 2002) 177-199, http://arxiv.org/abs/math/0103200.

[MR] Annalisa Marzuoli and Mario Rasetti, Towards a quantum algorithm for the permanent, Int. J. Quant. Info. 5 (2007) 223-228.

[SJ] Peter W. Shor and Stephen P. Jordan, Estimating Jones polynomials is a complete problem for one clean qubit, Quant. Info. Comput. 8 (September 2008) 681-714, http://arxiv.org/abs/0707.2831.

[V] Dirk Vertigan, The computational complexity of Tutte invariants for planar graphs, SIAM J. Comput. 35 (March 2005) 690-712.

[Wel] Dominic J.A. Welsh, Complexity: Knots, Colourings, and Counting, London Mathematical Society Lecture Note Series 186 [Cambridge University Press: Cambridge 1993].

[WY] Pawel Wocjan and Jon Yard, The Jones polynomial: quantum algorithms and applications in quantum complexity theory, Quant. Info. Comput. 8 (January 2008) 147-180, http://arxiv.org/abs/quant-ph/0603069.

• Bill Kaminsky says:

Oops, just in case anyone’s reading the “Long Version” above, I should correct one error that stemmed from the fact that I initially numbered the references and then changed to Wang’s abbreviations for them. Namely, when I wrote

As such, it’s *implicit* in [4] that achieiving the exponential accuracy demands set out in [1] requires exponential time.

I meant to write

As such, it’s *implicit* in [FKLW] that achieving the exponential accuracy demands set out in [Fr1] requires exponential time.


In short, Ref [4] = [FKLW] and Ref [1] = [Fr1], and “achieving” doesn’t have an extra “i” ;)

• John Baez says:

Bill wrote:

Since I have a deep neurotic drive for completeness…

You’re my kind of guy!

I’ve always thought that usenet newsgroups, blogging and the like get more interesting when people take them really seriously and put work into their writing. That’s how it worked on sci.physics.research and the n-Category Café when I was there — and I hope it still does. And that’s what I want here. Your comment is a marvel of detail and clarity, which takes my rushed summary of topological quantum computation and expands it to a really gripping story. Thanks!

I have taken the liberty of polishing your typography a bit, and also changing one sentence of yours. You had written

if J(xi^5) is the true value of the Jones polynomial at the 5th root of unity

which seems to conflict with your earlier usage of the letter xi:

J(ξ^a) [that is, the exact values of the Jones polynomial at integer powers a of the rth root of unity ξ = e^{2πi/r}]

So, I’ve changed the former passage to:

J(ξ) is the true value of the Jones polynomial at the 5th root of unity ξ = e^{2πi/5}…

I hope that’s okay — if I misunderstood, please let me know.

• Bill Kaminsky says:

I’m so glad you liked my summary of the literature on topological computing for approximating the Jones polynomial (and thus #P problems).

Thanks for prettying up the formatting and correcting that error about “ξ^5”. I indeed incorrectly notated it, meaning to type “ξ_5” since it’s indeed meant to signify the 5th root of unity e^{2πi/5}.

Also, I just noted one other error. I opened the “Long Version” of my comments saying:

From Witten, we know that certain observables of a physical system governed by a SU(2) Chern-Simons theory…

I meant to write:

From Witten, we know that the expectation values of certain observables of a physical system governed by a SU(2) Chern-Simons theory…

All the best!

–Bill

• John Baez says:

I’ve fixed the other little errors you noticed.

2. Tim van Beek says:

For example, computer scientists like to talk about people named Alice and Bob playing bizarre games.

Ahhh, yes, Alice and Bob usually try to communicate securely, that is, in a way that the nasty Gina either cannot understand, or where Alice inevitably notices that Gina intercepted the message.

With regard to semidefinite programming John wrote:

I don’t understand it, but I guess it’s like a quantum version of linear programming!

I don’t understand it either, but the name simply comes from a “semidefinite” condition on a linear problem: given an operator $A$, find an operator $X$ that minimizes $\mathrm{trace}(XA)$, subject to linear constraints $\mathrm{trace}(XB_i) = b_i$. Up to this point, the problem is a purely linear programming problem.

Now we add the nonlinear but convex constraint that $X$ must be self-adjoint and positive semidefinite (hence the name). Semidefinite programming is therefore simply a special case of convex optimization.

In linear programming, the dual of the dual problem is the original ‘primal’ problem! Also, if you know the solution to either problem, you know the solution to both. Is this stuff true for semidefinite programming too?

No; duality theory is explained, e.g., in the book Aspects of semidefinite programming. Wikipedia also cites a duality theorem.

• John Baez says:

John wrote:

In linear programming, the dual of the dual problem is the original ‘primal’ problem! Also, if you know the solution to either problem, you know the solution to both. Is this stuff true for semidefinite programming too?

Tim casually replied:

No…

No? Which part isn’t true? Both parts?

• Tim van Beek says:

Sorry for being snippy; it’s because I think I don’t understand this stuff well enough to try to explain it. But I can try:

A) Is the dual of the dual problem the original problem?

Let’s see what I know about this (everything in finite dimensions): Convex optimization problems can be reformulated in a standard form, where the objective function is linear and the set of allowed values for the variables (also called the feasibility set) is the intersection of an affine subspace and a cone.
Some cones are self-dual; we have:

1. positive orthant: linear programming,

2. second order cone: second order cone programming,

3. positive semidefinite cone: semidefinite programming.

1. and 2. are special cases of 3.

Since the cones are self-dual, the original problem and its dual problem can be cast in exactly the same form, so I’d say: yes, in this sense the dual of the dual is the original problem.

B) In linear programming, if you know either solution (one of the original problem or one of the dual problem) you know both. Is the same true for semidefinite programming?

No: there are examples where the original problem does not have a solution, but the dual one does. If both have solutions and you can deduce one from the other, people say that strong or perfect duality holds. For this, one needs additional assumptions; these are listed in the statement of the duality theorem on Wikipedia. One assumption is strict feasibility, which means that the feasibility set (the set of values for the variables that fulfill the constraints) has interior points. Strict feasibility is also called strong feasibility, Slater’s constraint qualification, or Slater’s regularity condition; the assumption that it holds is also called the interior point assumption.

The other assumption is boundedness: the primal problem must be bounded from below, resp. the dual problem bounded from above.

• John Baez says:

Thanks! Sorry to be unclear and/or rude: I wasn’t complaining about your being “snippy” or not explaining stuff you don’t understand. (Most people are very happy to explain stuff they don’t understand — that’s what really bugs me.) I was just frustrated that I asked two questions and you answered them both with one “No”, leaving me uncertain as to your meaning.

I feel happy if A) the dual of the dual is the original problem (if it weren’t, we shouldn’t use the term ‘duality’!) but B) sometimes a problem has a solution even if its dual doesn’t.

(However, your counterexample to B) seems to come from linear programming rather than this ‘semidefinite programming’ that I was asking about. I think that the ‘interior point assumption’ is pretty common in linear programming, and now I know why! So, I’m still wondering if given analogous niceness assumptions the primal problem has a solution iff the dual one does in semidefinite programming. It’s not a very urgent question, given how little I know or, umm, care about semidefinite programming. But I would like to pose it here, in case an expert floats by who deigns to tell us the answer!)

• Strong duality:

It is true generically, in the sense that by changing the problem very little you can enforce it to hold. But there are degenerate cases where it fails.

This can be made completely precise but then the simplicity of the situation as in LP is lost.

3. Amitabha says:

Just a simple-minded remark, about something you have probably understood already: the `semidefinite program’ as you defined it looks very much like something from a category, with $\Phi$ being an arrow from $a$ to $b$. So you have classes of self-adjoint operators on Hilbert spaces, and (linear) morphisms between them …

• John Baez says:

Hi, Amitabha! I’m in your neighborhood now!

I actually haven’t thought at all about these ‘semidefinite programs’. I was just taking notes on this talk. But I guess you’re right: there is a category where:

• an object $(X,a)$ is a Hilbert space $X$ equipped with a self-adjoint operator $a$, and

• a morphism $\Phi$ from $(X,a)$ to $(Y,b)$ is a semidefinite program: that is, a superoperator from operators on $X$ to operators on $Y$ that happens to send $a$ to $b$.

So then the question is: have people thought about composing semidefinite programs? Is it useful to do so?

The duality for semidefinite programs might also have a nice categorical interpretation.

Thanks!

• Amitabha says:

Yes John, I noticed you are rather close by. Hopefully we should be able to meet up before you pack your bags again! :)

Yes, of course those two are categories, the second looks like a double category to me …

Don’t know anything about composing programs, except for the physicists’ view that breaking a problem into pieces usually helps solve it faster.

Do linear programming and its duality have categorical interpretations?

• Amitabha says: