Saturday, 22 May 2010

Public Service Announcement

What’s taking so long, Mr. Babbage?

If you, or someone you know, may have been affected by any of the issues raised in my recent Book Review: Quantum Computation and Quantum Information, you might find it helpful to consult an article published exactly one hour earlier. It answers the question When can we expect to see the scalable quantum computer? - and it comes from the one person who perhaps gets asked that more than anyone else (he estimates at least 300 times per day).

That person is Scott Aaronson, and this is his answer:

Notice that Mr. Babbage has already replied in the comments section, in his typically detached and dispassionate, yet clear and concise, fashion.

Book Review: Quantum Computation and Quantum Information

The Enigmatic Qubit

The first thing an engineer wants to do, having learned about quantum computational "qubits", is to simulate them. After all, digital simulations of arbitrary analog computers (of which the Gristleizer musical effects pedal is an example) can be realised in any modern synthesizer. The state of an analog machine is nothing more than the list of instantaneous voltages at the outputs of its various amplifiers, integrators, and differentiators. Isn't the quantum computer just like such an analog machine, in every essential way?

Further investigation into the mathematical nature of the qubit initially appears to yield more encouragement of this kind. There's a model known as the Bloch Sphere, which describes the qubit's state 'ψ' as a tethered vector of unit length, pointing in some arbitrary spatial direction. The surface of the generated sphere is obviously two dimensional. So, just as we would treat each node voltage in a given analog setup as a single floating point number in our digital simulation, doesn't this mean we can model a qubit by an ordered pair of such floating point numbers (such as φ and θ in the diagram below)?

The Bloch Sphere (Wikipedia)
Reading the qubit causes it to "collapse" to an arrow pointing (say) vertically either straight up or straight down. Can't we just simulate this operation statistically, in well-known, classical ways?
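For a single qubit, the naive statistical simulation contemplated here really is trivial, which is exactly what makes the trap so inviting. A hedged sketch in Python (my own illustration, not anything from the book, using the standard parameterisation α = cos(θ/2), β = e^(iφ)·sin(θ/2)):

```python
import cmath
import math
import random

def qubit(theta, phi):
    """Amplitudes (alpha, beta) for Bloch-sphere angles (theta, phi)."""
    return math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2)

def measure(alpha, beta, shots=100_000):
    """'Read' the qubit statistically: each shot collapses to 0 or 1
    with the Born-rule probabilities |alpha|^2 and |beta|^2."""
    p0 = abs(alpha) ** 2
    return sum(1 for _ in range(shots) if random.random() < p0) / shots

alpha, beta = qubit(math.pi / 2, 0.0)  # on the equator: 0 and 1 equally likely
print(round(abs(alpha) ** 2, 3), round(abs(beta) ** 2, 3))  # 0.5 0.5
```

So far, so classical. It is only when qubits multiply that this picture falls apart, as the next paragraph explains.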

Realistically, the answer to all of this is "No." Such simulations involve unbelievably unrealistic magnitudes of repetition, sampling, and hardware scaling. Often the end result is still a hopelessly crude approximation. And as soon as we make the first obvious upgrade, from a single qubit to a pair, admitting as this does the possibility of entanglement, even this model vapourises; ascending to larger numbers of qubits, classical approximations rapidly become impossible in principle, even as research simulations - see e.g. the Quantiki List of QC Simulators for coverage, or alternatively Steven Peil's Imitating quantum mechanics: Qubit-based model for simulation (Phys. Rev. A 79, 042320, 2009) for one thoroughly detailed treatment.
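A back-of-the-envelope calculation makes the scaling problem vivid: the state of n qubits is a vector of 2^n complex amplitudes, so memory alone kills the direct approach long before interesting qubit counts. (This arithmetic is mine, purely illustrative; 16 bytes per double-precision complex amplitude.)

```python
# Each additional qubit doubles the classical description: n qubits need
# 2**n complex amplitudes, quite unlike n independent (theta, phi) pairs.
def state_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, state_bytes(n) / 2 ** 30, "GiB")  # 50 qubits: ~16 million GiB
```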

Fortunately, it doesn't take a lot of expertise in quantum fundamentals to prove this. And once accepted, you will find yourself already at such a level, and with sufficient theoretical tools of quantum modelling and investigation at your disposal, that you will no longer have any interest in following such a dead end path anyway!

A Comprehensive Introduction

Michael A. Nielsen (University of Queensland) and Isaac L. Chuang (IBM / Stanford University) first published their masterwork in 2000. Since then it has been reprinted more often than annually. It is a considerable testament to their foresight, and to their thoroughness, that ten years later, and in the present environment of almost daily revelations and astounding leaps of progress in the field, their work has remained so current, relevant, authoritative, and easy to recommend as either comprehensive primer or full-year course.

The book is unapologetically mathematical in its hardcore technical approach; after all, this is a serious course textbook, not a "popular science" title! There are several tables and figures of nomenclature and convention, distributed throughout the early sections of each main book part, to be encountered as and when these are required. The generous use of diagrams is particularly helpful, given the sheer density of topics likely to be completely new to the reader.

As the title implies, this book seeks to furnish a sound working fundamental basis in the theory and practice of both quantum computation, and quantum information systems. Each of these fields has spawned an expository industry of its own, and more modern publications tend to specialise in one field or the other (the cover blurbs include one from a certain Peter Shor of AT&T Research, saying The field is already too big to cover in one book...). It is therefore quite natural and unsurprising to discover that this venerable tome comprises three main parts, once the common fundamental concepts, necessary to make a start in either of these main areas, are factored out into their own introductory section.

Part I: Fundamental Concepts

This is further subdivided into three chapters.

Chapter 1, a general Introduction and Overview, provides both basic historical context and remarks about future directions, followed by more in-depth introductions to quantum bits, isolated and multiple; qubit states, gates, circuits and measurements; quantum teleportation, parallelism and algorithms; practical information processing considerations, and information theory in a wider context.

Chapter 2 is a serviceable introduction to prerequisite mathematical subjects, including aspects of linear algebra; the postulates of quantum mechanics; and the nature and evolution of, and measurements within, quantum state space. The chapter ends with an assortment of more specialised discussions on density operators and the Bell Inequality.
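The Bell Inequality material rewards a little numerical play. Here is a hedged sketch (plain Python, my own illustration rather than the book's): with the Bell state and the standard CHSH measurement settings, the quantum correlation value comes out at 2√2, comfortably violating the classical bound of 2.

```python
import math

# Single-qubit observables as 2x2 matrices (lists of lists).
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

def kron(a, b):
    """Tensor product of two 2x2 matrices -> 4x4."""
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def expval(m, psi):
    """<psi| m |psi> for a 4x4 matrix and a 4-vector."""
    mpsi = [sum(m[i][j] * psi[j] for j in range(4)) for i in range(4)]
    return sum(psi[i].conjugate() * mpsi[i] for i in range(4)).real

def add(a, b, sign=1):
    return [[a[i][j] + sign * b[i][j] for j in range(2)] for i in range(2)]

def scale(a, c):
    return [[a[i][j] * c for j in range(2)] for i in range(2)]

s = 1 / math.sqrt(2)
bell = [s, 0, 0, s]               # (|00> + |11>) / sqrt(2)
B0 = scale(add(Z, X), s)          # Bob measures (Z + X)/sqrt(2)
B1 = scale(add(Z, X, -1), s)      # ... or (Z - X)/sqrt(2)

chsh = (expval(kron(Z, B0), bell) + expval(kron(Z, B1), bell)
        + expval(kron(X, B0), bell) - expval(kron(X, B1), bell))
print(round(chsh, 6))             # ~2.828427, i.e. 2*sqrt(2) > 2
```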

Chapter 3 rounds off the introductory material from the perspective of computer science, with Turing Machines, logic circuits, resources and complexity. Here we are introduced to reversible computation, as a means of bringing some important, universal computation gates (Fredkin and Toffoli) into the discourse.

Part II: Quantum Computation

This section of the book deals mostly with the dynamics of closed quantum systems, i.e., those without any unwanted interactions with the outside world.

Chapter 4: Quantum Circuits kicks off the detailed exploration of quantum computation with a survey of the few essentially quantum algorithms known at the time of writing (January 2000; nor has there been an explosion of new ones in the interim). Next, single qubit operations are explored, and modelled using 2x2 matrices of complex coefficients. This opens the door to compositions, and to the introduction of quantum circuit components such as the common single qubit "gates": the Hadamard, H; the Pauli rotations, X, Y and Z; the Phase, S; and the π/8, T.
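Those gates are concrete enough to tabulate directly. A hedged sketch in Python (my notation, not lifted from the book), including two identities worth remembering: H is its own inverse, and two π/8 gates compose to a Phase gate.

```python
import cmath
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                           # Hadamard
X = [[0, 1], [1, 0]]                            # Pauli X (quantum NOT)
Y = [[0, -1j], [1j, 0]]                         # Pauli Y
Z = [[1, 0], [0, -1]]                           # Pauli Z
S = [[1, 0], [0, 1j]]                           # Phase
T = [[1, 0], [0, cmath.exp(1j * math.pi / 4)]]  # pi/8

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, eps=1e-12):
    return all(abs(a[i][j] - b[i][j]) < eps
               for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
print(close(matmul(H, H), I))   # True: H is self-inverse
print(close(matmul(T, T), S))   # True: two pi/8 gates make a Phase gate
```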

The general notion of a controlled operation, and particularly the controlled-NOT or CNOT (previously mentioned in the introductory material), then serves as our first example of a multi-qubit gate. The ensuing section, on the analysis and synthesis of higher-order operations such as the Toffoli and Fredkin gates, is particularly rigorous and lucid.
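As an aside, the CNOT is exactly what turns one qubit's superposition into two qubits' entanglement. A hedged little sketch (plain Python, my illustration rather than the book's): a Hadamard on the control followed by a CNOT takes |00> to the Bell state.

```python
import math

# Basis order |00>, |01>, |10>, |11>; the first qubit is the control.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

s = 1 / math.sqrt(2)
H_on_first = [[s, 0, s, 0],   # Hadamard on the control, identity on the target
              [0, s, 0, s],
              [s, 0, -s, 0],
              [0, s, 0, -s]]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

state = apply(CNOT, apply(H_on_first, [1, 0, 0, 0]))  # start from |00>
print([round(a, 4) for a in state])  # [0.7071, 0.0, 0.0, 0.7071]: Bell state
```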

This chapter concludes with useful treatments of topics like quantum measurement, gate universality, computational complexity, the quantum circuit model of computation, and finally some illustrative examples of quantum simulation (an advanced topic).

Chapter 5: The Quantum Fourier Transform and its Applications, after a brace of introductory limericks, focuses on one of the most spectacular successes (to date) in the search for uniquely quantum algorithms: the Quantum Fourier Transform, an efficient quantum algorithm for performing a Fourier transform of quantum mechanical amplitudes. Here the authors begin to disabuse us of faulty preconceptions engendered by the popular science press, as this is not a faster version of the classical Fourier transform; rather, certain problems thought to be quite intractable on classical systems can be attacked through it. Various applications such as phase estimation, order-finding, the hidden subgroup problem, and of course good old fashioned factoring, are demonstrated.
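For the record, the transform itself is just an N x N unitary matrix with entries ω^(jk)/√N, where ω = e^(2πi/N) and N = 2^n; the quantum win lies in implementing it with O(n²) gates rather than touching all N amplitudes. A hedged Python sketch of the matrix (mine, not the book's), checking the unitarity that makes it a legal, reversible quantum operation:

```python
import cmath
import math

def qft_matrix(n_qubits):
    """The N x N QFT matrix, N = 2**n_qubits: entries omega**(j*k) / sqrt(N)."""
    N = 2 ** n_qubits
    omega = cmath.exp(2j * cmath.pi / N)
    return [[omega ** (j * k) / math.sqrt(N) for k in range(N)]
            for j in range(N)]

def is_unitary(m, eps=1e-9):
    N = len(m)
    for i in range(N):
        for j in range(N):
            # (F . F_dagger)[i][j] should be the identity matrix
            entry = sum(m[i][k] * m[j][k].conjugate() for k in range(N))
            if abs(entry - (1 if i == j else 0)) > eps:
                return False
    return True

print(is_unitary(qft_matrix(3)))  # True: the transform is reversible
```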

Chapter 6: Quantum Search Algorithms reintroduces the notion of the oracle, a black box previously alluded to in the introductory sections, showing how its application can be used to speed up a process (such as factoring, although this example has expository rather than practical advantages) in probabilistic ways, effectively by allowing an already "known" answer to become "recognised". Illustrations of Grover Iteration are derived, rigorously presented, and analysed; while one particular full-page diagram, illustrating the quantum addressing of a classical strip of memory, takes the biscuit for effective communication of principles.
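The Grover iteration itself is only a few lines when sketched classically over the amplitude vector: the oracle flips the sign of the "known" answer, and a diffusion step inverts every amplitude about the mean, so the marked amplitude grows with each pass. This toy version (mine, purely illustrative, run for about π/4·√N iterations) is not a quantum speedup, merely the arithmetic of one:

```python
import math

def grover(marked, n_items=8):
    """Toy Grover search over a classical amplitude vector."""
    amps = [1 / math.sqrt(n_items)] * n_items       # uniform superposition
    for _ in range(int(math.pi / 4 * math.sqrt(n_items))):
        amps[marked] = -amps[marked]                # oracle: flip the answer's sign
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]         # diffusion: invert about the mean
    return amps

amps = grover(marked=5)
print(max(range(8), key=lambda i: amps[i] ** 2))    # 5: the marked item dominates
```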

Chapter 7: Quantum Computers: Physical Realization is inevitably the least current of all the subject subdivisions on show here. Not a week goes by without one or several new and revolutionary potential mechanisms or improvements being heralded as the vanguard of the ultimate scalable quantum computer. As important and significant as these advances certainly are, they are nevertheless matters of degree, while for theoretical purposes you'd be just as well using any of the implementations detailed here.

Comparisons are made among the various qubit design types - spin, charge, and photon - on the basis of decoherence, and the maximum expected number of operations. Hamiltonian analysis of simplified quantum harmonic oscillators will surely remain extremely important and useful for the foreseeable future (even today, any engineer should know how to solve the Schrödinger wave equation for various sample potential profiles). Then too are considered optical photons and cavities, ion traps, NMR, and some other types that once, a decade ago, must have appeared a lot more speculative than now.

Part III: Quantum Information

The last and largest section of the book covers various aspects of noise, i.e., unwanted interactions between real quantum systems and the world external to these.

Chapter 8: Quantum Noise and Quantum Operations breaks the ice with a brief description of stochastic changes to classical states, traditionally modelled by cascaded independent Markov processes under the Markov assumption, before going on to introduce a powerful tool for describing the evolution of quantum states: the quantum operations formalism. The particularly elegant operator-sum representation (using operators only on the Hilbert space of the main system) is developed and applied to several example cases.

Chapter 9: Distance Measures for Quantum Information gets a bit esoteric, tackling questions like: How close are two quantum states? How well does a quantum channel preserve information? Trace distance and fidelity are discussed (the former being found to have a particularly simple geometric interpretation in the case of a single qubit, namely half the Euclidean distance between the corresponding points on the Bloch Sphere illustrated above). One significant takeaway is a satisfyingly short proof of Uhlmann's Theorem.
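That geometric fact is easy to spot-check numerically. A hedged sketch in Python (my own illustration, using the closed-form eigenvalues of a 2x2 Hermitian matrix, not anything from the book): the trace distance between two qubit density matrices equals half the Euclidean distance between their Bloch vectors.

```python
import math

def rho(r):
    """Qubit density matrix (I + r.sigma)/2 from a Bloch vector r = (x, y, z)."""
    x, y, z = r
    return [[(1 + z) / 2, (x - 1j * y) / 2],
            [(x + 1j * y) / 2, (1 - z) / 2]]

def trace_distance(a, b):
    """(1/2) * sum of |eigenvalues| of a - b, closed form for 2x2 Hermitian."""
    d = [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]
    tr = (d[0][0] + d[1][1]).real
    det = (d[0][0] * d[1][1] - d[0][1] * d[1][0]).real
    root = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (abs((tr + root) / 2) + abs((tr - root) / 2)) / 2

r1, r2 = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)   # the states |0> and |+>
bloch_half = math.dist(r1, r2) / 2
print(round(trace_distance(rho(r1), rho(r2)), 6), round(bloch_half, 6))
```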

Chapter 10: Quantum Error-Correction covers three broad topics relating to the reliable processing of quantum information in the presence of noise; to wit (the basic theory of) quantum error-correcting codes, fault-tolerant quantum computation, and the threshold theorem. Starting with classical error-correction, we are shown some conceptual challenges on the road to simple quantum error-correcting codes, subsequently generalised into a theory of quantum error-correcting codes. Then, some musing on the classical theory of linear codes gives rise to the fascinating Calderbank-Shor-Steane code system, and eventually we reach the so-called stabilizer codes. By this point, the authors' poetic sub-chapter introductions have evolved into sonnets! No, I'm serious. A refreshingly practical treatment of fault tolerance winds up chapter 10.

Chapter 11: Entropy and Information contains rather a lot of mathematics for such a short chapter, but looks as if it just might be of interest to someone, somewhere, some day. Who can say? Not me anyway. Stephen Hawking maybe?

Chapter 12: Quantum Information Theory brings it all nicely together. Yes, even the entropy stuff: high entropy implies low fidelity, after all. Other topics include the Holevo bound; Shannon's (classical) and Schumacher's (quantum) noiseless channel coding theorems; information over noisy quantum channels; entanglement distillation; and of course quantum cryptography.

Appendices mitigate the aforementioned prerequisites by furnishing basic coverage of number theory; the theories of basic probability, and of groups; self-contained treatments of the Solovay-Kitaev and Lieb Theorems; and public key / RSA cryptography.


If any chapters could be seen as the Achilles' heel of the book, the concluding chapters of Parts II and III are surely most at risk, concerned as they are with the state of the art of quantum computing and IT at the turn of the millennium. But helped by the certitude of mathematics, the breadth of scope (with its consequently implied shallowness of treatment), and expert exposition and presentation, on inspection the book passes the test of time rather well.

Quantum Computation and Quantum Information
Michael A. Nielsen and Isaac L. Chuang

Cambridge University Press

First published 2000

ISBN 0-521-63235-8 (hardback)

ISBN 0-521-63503-9 (paperback)

Friday, 14 May 2010

Sacramento Credit Union

Working on the assumption that everyone reading this blog will get the joke, the following is excerpted from the Sacramento Credit Union's Online Access General FAQs:

Why are the Security Questions used?

The first time you login and enroll in Protection Plus, you will be asked to enter five Security Questions and corresponding answers. The Security Questions are used if you do not want to register the computer you are currently using. With the Security Questions, we can make sure it is you logging in when you use different computers, such as, a internet bar computer.

The answers to your Security Questions are case sensitive and cannot contain special characters like an apostrophe, or the words “insert,” “delete,” “drop,” “update,” “null,” or “select.”

Why can’t I use certain words like "drop" as part of my Security Question answers?

There are certain words used by hackers to try to gain access to systems and manipulate data; therefore, the following words are restricted: "select," "delete," "update," "insert," "drop" and "null".
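For anyone who didn't get the joke: the actual defence against SQL injection is the parameterized query, which treats user input purely as data, making a word blacklist like the one above entirely beside the point. A quick sketch using Python's sqlite3 module (my illustration; any sane database API works the same way):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE answers (question TEXT, answer TEXT)")

# Placeholders keep the input out of the SQL text entirely, so 'drop',
# apostrophes, and friends need no blacklist at all.
evil = "Robert'); DROP TABLE answers; --"
db.execute("INSERT INTO answers VALUES (?, ?)", ("pet's name", evil))

row = db.execute("SELECT answer FROM answers WHERE question = ?",
                 ("pet's name",)).fetchone()
print(row[0] == evil)  # True: stored verbatim, never executed as SQL
```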

Saturday, 8 May 2010

Holiday Snaps

Embo, April 2010

Incredibly, Linda got up & at 'em around 4 am, to capture the likes of this (click to embiggen + appreciate):

Thursday, 6 May 2010

Algebra of Functions, Part 3: Comprehensions

Query Comprehensions

In answer to a colleague recently: fluent method calls are primary; query comprehension syntax is the sugar.

We're talking about LINQ queries in C#, and what I'm saying is that this kind of thing,
from order in orderList
where order.Number == 123456
select order.Date;
gets transformed pronto into this kind of thing:
orderList
.Where(order => order.Number == 123456)
.Select(order => order.Date);
quite early in the compilation process. So early in fact that the bits between the from, where and select keywords, and the rest, are hardly analysed at all. In his Fabulous Adventures In Coding blog article, Query transformations are syntactic, Eric Lippert has some crazy fun with this situation.

Now, combine this fact with one from Part 2, where I briefly mentioned that LINQ's IEnumerable exemplifies the List monad, if you construe SelectMany as its Bind operation. You might then leap to the conclusion that any monad could be represented in this cool and sweet comprehension syntax. And you'd be right! Rather than composing our monadic functions using the After operation, we can simply rename the monad's Bind operation to SelectMany, and use much cooler, nested from clauses instead.
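Incidentally, this desugaring trick isn't unique to C#. As a hedged aside in Python (purely illustrative, since this is a C# series), a pair of nested from clauses corresponds to a nested list comprehension, which is exactly SelectMany, i.e. Bind, over the List monad:

```python
# C#:  from x in xs from y in f(x) select g(x, y)
# desugars to xs.SelectMany(f, g); the Python analogue is a comprehension
# with two for clauses, flattening as it goes.
def select_many(xs, f, g):
    return [g(x, y) for x in xs for y in f(x)]

xs = [1, 2, 3]
f = lambda x: range(x)               # each element fans out to several results
print(select_many(xs, f, lambda x, y: (x, y)))
# [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)]
```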

Comprehending a Monad

Let's rework the example of the Maybe monad more thoroughly. We'll get rid of that value type constraint by using this custom generic class to represent any type of data:
class Maybe<T>
{
    public T Value { get; private set; }
    public bool IsEmpty { get; private set; }
    public static readonly Maybe<T> Nil = new Maybe<T>();

    Maybe() { IsEmpty = true; }

    public Maybe(T value)
    {
        Value = value;
        IsEmpty = false;
    }

    public override string ToString()
    {
        return IsEmpty ? "<empty>"
            : Value == null ? "null"
            : Value.ToString();
    }
}
This extremely simple class just encapsulates a Value, of any type T, and an IsEmpty flag to indicate when the Value is absent. Note incidentally that we are allowing null as a value (assuming of course that T is a reference type).

Next, we rewrite Bind as SelectMany. Okay, actually there is a little more to it than that. You'll notice both Select and SelectMany methods below. The latter, accepting the extra function 'g', is just an optimization that C# insists be present, to help it avoid an inefficient, too-deep nesting of lambdas. It's quite safe to ignore the details of this for our current purposes. Whatever your monad, just follow the pattern, and provide the extra method; it's needed to enable the use of comprehension syntax.

Oh, and one more thing I haven't mentioned yet. As well as a Bind operation, monads have a so-called Unit or Return operation, whose job is to amplify an unamplified value. Ours is named ToMaybe, and as you can see it simply calls the appropriate constructor.
static class MaybeExtensions
{
    public static Maybe<U> Select<T, U>(this Maybe<T> t, Func<T, Maybe<U>> f)
    {
        return t.IsEmpty ? Maybe<U>.Nil : f(t.Value);
    }

    public static Maybe<V> SelectMany<T, U, V>(this Maybe<T> t, Func<T, Maybe<U>> f, Func<T, U, V> g)
    {
        return t.Select(x => f(x).Select(y => g(x, y).ToMaybe()));
    }

    public static Maybe<T> ToMaybe<T>(this T value)
    {
        return new Maybe<T>(value);
    }
}
That's all we need. Now we can reimplement those up and down functions from the previous example using our new generic class type, and build the climb composition using the awesomest comprehension syntax on the planet:
static void Main()
{
    Func<int, Maybe<int>>
        up =
            x => x < int.MaxValue ? (x + 1).ToMaybe() : Maybe<int>.Nil,
        down =
            x => x > int.MinValue ? (x - 1).ToMaybe() : Maybe<int>.Nil,
        climb =
            x => from y in up(x) from z in down(y) from w in down(z) select w;
    Console.WriteLine(" '{0}' '{1}' '{2}' ",
        climb(0), climb(int.MaxValue), climb(int.MinValue));
    // Displays '-1' '<empty>' '<empty>'
}
Well, awesomest apart from the comprehension syntax of any fully functional language. But this is a C# series!

Security Digest #8

For software designers, developers and testers, April's Computer Security news was obviously dominated by the SDL 5 release; now we return you to your regular programming, albeit with a slight and curious aura of Sinophobia this month...

My God It's Full Of Eyes

Apparently that's what they call the holes in a Jarlsberg cheese. Hefting a pack* at my local Waitrose last Sunday, I was momentarily distracted by its spongiform... ah, it's sold by weight, that's ok then. Wake up, brain.

Google's new app is similarly riddled with security holes, so they've called that Jarlsberg too. It's the basis of a codelab, a small, cheesy web application that allows its users to publish snippets of text and store assorted files.

The holes in this cheese comprise security bugs, such as cross-site scripting and request forgery, remote code execution, information disclosure, and DoS. Organised by vulnerability type, the codelab guides you through the techniques used to discover and repair these. Both black- and white-box hacking are involved, so the challenges are also tagged by their level of opacity. Some can be solved just by using black box techniques; others require some specific knowledge of Jarlsberg; still others will require access to the Jarlsberg source code itself.

The fun starts here:

* Update: turns out it was actually a pack of Leerdammer, sorry. I know you'd immediately want to know that.
Pic: genuine Jarlsberg from The Cheese Store of Beverly Hills.

All Your Base

Anti-spam and anti-hacking software, data backup and recovery systems, smart cards, secure routers, database and network security systems; these are a few of the thirteen categories of computer encryption and other security products, whose most intimate inner workings and secrets Beijing wants to know all about. Or else: no billions of dollars in trade for you!

The draconian set of rules - originally postponed from their 2009 implementation schedule, thanks to stern US complaints, and scaled back in scope from all Chinese sales to just government procurement - finally took effect on Saturday 1st May.

Or did they? Some are disputing the rules.

The EU and Washington want Beijing to scrap or once more postpone these demands, pointing out that no other nation imposes such a "protectionist" regime. On Tuesday 27th April the visiting EU trade commissioner Karel De Gucht revealed that he'd raised this issue with Chinese Commerce Minister Chen Deming. The requirement "has no real base in reality," he said, "We cannot see what they see in regard to security, so we are in fact disputing this."

And in an e-mail the following day, U.S. Trade Representative spokeswoman Nkenge L. Harmon called for Beijing to "follow global norms", noting that American officials in last month's meeting "pressed China to address the concerns of foreign governments and industry before implementing the testing and certification rules."

It's certainly not just a matter of "national security". The communist government is of course less than happy to rely on managing its secrets via foreign technology, but Beijing's desire to help Chinese companies catch up and compete with global rivals is another spur. Their own admission includes the statement that the rules are meant to develop Chinese industry. Beijing is using regulations to support its companies at the expense of foreign rivals.

Chinese Government panels would require disclosure of trade secrets such as encryption codes, and foreign companies are worried about these being leaked to competitors. Many industry researchers see these fears as justified, and further point out that:
  1. Certain countries prohibit security product sales to customers such as banks, once sensitive details such as encryption codes are revealed to Chinese regulators.
  2. Employees of rival Chinese companies are included on the very government panels who would review foreign products.
  3. Beijing's extensive system of Internet filtering and monitoring would also be strengthened by the acquisition of such data, giving their security forces the ability to pry into encrypted messages.

The First Worldwide Cybersecurity Summit

Launched with a lunch tagged The Cybersecurity Awareness Dinner on Monday May 3rd, this Dallas, Texas event organised by The EastWest Institute set out to be the first truly international conference directly and actively addressing international co-operation on contemporary security issues.

The Summit proper occupied Tuesday and Wednesday, and was much smaller than other security conferences. Annual conferences focusing on hackers' demonstrations of their latest research, such as Black Hat and DefCon in Las Vegas, have generally come to expect thousands of attendees. But the EWI event did play host to more than 400 government officials, industry executives and other players from 30 countries, and generally succeeded in its prime stated aim of bringing together leaders of governments, businesses and civil society from around the world to determine new measures to ensure the security of the world’s digital infrastructure.

From its pre-publicity:

Electronic attacks around the world have compromised confidential information, crippled official web sites and have exposed the vulnerability of financial data. They have heightened fears that criminals or terrorists could use cyberspace to paralyze communications infrastructure, international financial systems or critical government services.

Despite the severity of these threats, the international community has not come to agreement on how best to deal with them. EWI’s Worldwide Cybersecurity Summit will work to fill this gap, bringing together leaders from the public and private sectors to reframe cybersecurity concerns and to devise collective strategies to address them.

The mere idea of being able to discuss cybercrime and similar shared threats even informally, by getting the right officials together, is quite a challenging one. Such people are inevitably and understandably suspicious of any organiser's motives, and do not generally have immediate and unrestricted access even to their own counterparts in other countries. So the fact that such a summit can be organised at all, and in the event prove to be comparatively well attended, is in itself a considerable achievement.

Speakers included White House's cybersecurity coordinator Howard Schmidt (above) and senior officials from Canada, China, the EU, India and Japan; CEOs from AT&T and Dell (the event's top sponsor); and executives from both Microsoft, and China's largest telecoms manufacturer, Huawei Technologies.

Presentations were drawn from categories: Information and Communications Technology, Financial Services, Essential Governmental Services, Energy, Transportation, National Security, and the Media. Matters covered included: dialog and concern about computer warfare between nations; the potential damage that can be caused by computer attack, particularly in sensitive areas such as online banking; and possible remote control of power plants, or other critical infrastructure.

Highlighted were comparatively recent cases including last year's revelation of spies hacking the US electric grid, leaving behind malware to let them disrupt service, and the more recent attacks on Google, compromising its e-mail service, which caused their move out of China.

Reports on Tuesday's proceedings:
Wednesday's business is well summarised by AFP's tagline Cybersecurity meet ends with calls for global cooperation, but also don't miss their China Daily update:

That's all for now. Clean up that mess before you go to bed.

Wednesday, 5 May 2010

Associativity of '+' in C#

Okay, so whose bright idea was this?

const int
x = 1, y = 2, z = 4, answer = x + y + z;

Console.WriteLine("the answer is " + answer);
Console.WriteLine(x + y + z + " is the answer");
Console.WriteLine("the answer is " + x + y + z);
Console.WriteLine("the answer is " + (x + y) + z);
Console.WriteLine("the answer is " + x + (y + z));
Console.WriteLine("the answer is " + (x + y + z));


the answer is 7
7 is the answer
the answer is 124
the answer is 34
the answer is 16
the answer is 7

How did I get as far as C#4.0 without noticing this? And why didn't you tell me sooner? Where are my car keys?

There must have been a moment, during my first reading of the C#1.0 spec, when I read about the string concatenation operator '+' and its ability to join e.g. strings to numerics. Guess I shrugged it off, perhaps thinking crazy Javascript behaviour, certainly won't be using that! Now that it's resurfaced in a colleague's code, I feel so repulsed that - at least until they remove it from the compiler - I'm considering a career change.

No, not to another language, platform, or whatever. I mean maybe to something outdoors, like highway maintenance. Or else... a laundromat. Yes. Yes, that would be cleansing.

One of the best things about this language has always been the number of gotchas that it has removed from our daily lives, when compared with something like C++. Like unintentional fall-through in a switch block. Unfettered operator overloads. Mandatory destructors. Dereferencing with a choice of operators! Multiple inheritance. Arrays as unsafe pointers. Header files and multiple points of declaration. Et cetera, ad nauseam.

But I mean, really. Failure of associativity adding integers. Sheeesh.

Saturday, 1 May 2010

Tweets - April 2010