A Response to Kevin Marks' Anti-DRM Argument

Kevin Marks recently posted an argument against Digital Rights Management on his weblog and apparently has submitted it to a working group of the British Parliament. When I read his argument, I was astounded. The entire argument is founded on an error, a miscomprehension of a fundamental tenet of Computer Science.
I would summarize Marks' statement as two basic arguments:
1. DRM is futile; it can always be broken.
2. DRM is a perversion of justice.
Marks opens his argument with a huge misstatement of facts:
Firstly, the Church-Turing thesis, one of the basic tenets of Computer Science, which states that any general purpose computing device can solve the same problems as any other. The practical consequences of this are key - it means that a computer can emulate any other computer, so a program has no way of knowing what it is really running on. This is not theory, but something we all use every day, whether it is Java virtual machines, or Pentiums emulating older processors for software compatibility.
How does this apply to DRM? It means that any protection can be removed. For a concrete example, consider MAME - the Multi Arcade Machine Emulator - which will run almost any video game from the last 30 years. It's hard to imagine a more complete DRM solution than custom hardware with a coin slot on the front, yet in MAME you just have to press the 5 key to tell it you have paid.
Unfortunately, Marks has completely misstated the Church-Turing Thesis. It is a common misconception that the Church-Turing Thesis states that any computer program can be emulated by any other computer. This fallacy has come to be known as "The Turing Myth." This is a rather abstract matter; there is a short mathematical paper (PDF file) that fully debunks the misstatement Marks uses as the fundamental basis of his argument.
To cut to the core of The Turing Myth: the widespread misunderstanding is that The Turing Thesis means that any sufficiently powerful computer can emulate any other computer. The Turing Thesis is much narrower, in brief, it states that any computable algorithm can be executed by a Turing Machine. This in no way implies that any computer can emulate any other computer. Perhaps Turing inadvertently started this misunderstanding by a bad choice of nomenclature; he labeled his hypothetical computer a "Universal Machine," which we now call a "Turing Machine." However, a Turing Machine is not a universal device except with regard to a limited spectrum of computing functions.
One joker restated the Turing Thesis as "a computer is defined as a device that can run computer programs." This may seem obvious now, but in Turing's day, computers were in their infancy and the applications (and limitations) of computers were not obvious. As one example of these limits, there is a broad category of incomputable problems that cannot be solved by any computer, let alone a Turing Machine. For example, a computer cannot algorithmically produce a true random number; it can only calculate pseudo-random numbers. This fundamental application of The Turing Thesis has founded a whole field of quantum cryptography, encoding methods based on incomputable physical processes, such as random decay of atomic particles. Quantum cryptographic DRM would be unbreakable, no matter how much computer power could be applied to breaking it.
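To make the pseudo-random point concrete, here is a minimal sketch in Python (purely illustrative): a seeded generator is an ordinary deterministic algorithm, so the same seed always reproduces the same "random" sequence.

    import random

    def pseudo_stream(seed, count=5):
        # A deterministic algorithm: the output is fully determined by the seed.
        rng = random.Random(seed)
        return [rng.randint(0, 99) for _ in range(count)]

    # The two calls produce identical lists, which is exactly why such
    # numbers are only pseudo-random.
    print(pseudo_stream(1234) == pseudo_stream(1234))   # True
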
I contacted Marks to inform him of the Turing Myth, in the hopes that he might amend his argument, since it all springs forth from a fallacy. He responded briefly by emphasizing the case of MAME emulation, and cited Moore's Law. Apparently Marks is arguing that since computers are always increasing in power, any modern computer can break older DRM systems that are based on simpler computers. He also appears to argue that an emulated computer can simulate the output device and incorporate a converter that turns the signal into an unencrypted format on the fly, for recording.
Unfortunately, Marks chose a terrible example. The original game systems that are emulated by MAME had no DRM whatsoever. It was inconceivable to the game manufacturers that anyone would go to the trouble and expense to reverse-engineer their devices. The code inside these game systems was designed to run on a specific hardware set; any identical hardware set (or emulated hardware set) could run the unprotected code. At best, these devices used "security by obscurity," which any computer scientist will tell you is no security at all.
Ultimately, DRM systems must not be so cumbersome as to be a nuisance to the intended user. This has led to a variety of weaker DRM systems that were easily broken, for example the CSS encryption on DVDs. However, this is no proof that truly unbreakable DRM is impossible or unworkable. As computing power and mathematical research advance, truly unbreakable DRM will become widespread.
Having dispensed with Marks' first premise, let us move on to the second, that DRM is a "perversion of justice." I cannot speak to British law as Marks does; however, it seems to me that his arguments invoke the aura of British heroes like Turing and Queen Anne to pander to unsophisticated British Parliamentarians. While his remarks are addressed to Parliament, he has attempted to argue from "mathematical truth" that DRM is futile. I would have expected his legal argument to ground itself in more universal international copyright agreements, such as the Berne Convention. But I will not quibble over the scope of the argument, and will instead attempt to deal with the argument itself. Marks states:
The second principle is the core one of jurisprudence - that due process is a requirement before punishment. I know the Prime Minister has defended devolving summary justice to police constables, but the DRM proponents want to devolve it to computers. The fine details of copyright law have been debated and redefined for centuries, yet the DRM advocates assert that the same computers you wouldn't trust to check your grammar can somehow substitute for the entire legal system in determining and enforcing copyright law.
It appears that Marks' fundamental complaint with DRM is that it puts restrictions in place that prevent infringement before it occurs. Current copyright law allows the valid copyright-holder to sue for damages only after infringement occurs. Marks asserts this prior restraint is a violation of due process. However, he is mistaken; the DRM end-user has already waived his rights. When a user purchases a product with DRM, he enters into a private contract with the seller and explicitly accepts these restraints. If the user does not wish to subject himself to these restrictions, he need only decline to purchase the product and thus never enter into that contract with the seller.
I can find no legal basis that would prohibit the use of prior restraint in private contracts. It would seem to me that this would be a common occurrence. For example, I might sign a Nondisclosure Agreement when dealing with a private company, agreeing that I would not disclose their secrets. A company might even distribute encrypted private documents to NDA signatories.
Ultimately, Marks' arguments do not hold up to scrutiny. They are based on false premises, and thus cannot support his conclusions. Let me close by following Marks in answering the questions posed by Parliament:
Whether DRM distorts traditional tradeoffs in copyright law. I submit that it does not. It merely changes the timing of the protection afforded by copyright law, preventing infringement before it occurs rather than forcing the copyright-holder to pursue legal remedies afterward.
Whether new types of content sharing license (such as Creative Commons or Copyleft) need legislation changes to be effective. Current copyright laws are effective in protecting individual artists as well as corporate interests. Amendments to private distribution contracts such as CC or Copyleft are unproven in court. There is no compelling reason to change current copyright laws.
How copyright deposit libraries should deal with DRM issues. Since all DRM-encumbered materials originated as unprotected source material, it is up to the owner to archive this material as they see fit. Certainly the creators and owners have no reason to lock up all existing versions of their source material; this would impede any future repurposing of their content. Since a public archive of copyrighted material has no impact on the continued existence of the original source material, it is up to the libraries to establish their own methods for preservation of DRM playback systems.
How consumers should be protected when DRM systems are discontinued. How were consumers protected when non-DRM systems were discontinued? They were not. I cannot play back Edison Cylinder recordings with modern equipment, yet I could continue to play them back on original Edison Phonographs. Vendors cannot be required to ensure their formats continue forever; this would stifle innovation.
To what extent DRM systems should be forced to make exceptions for the partially sighted and people with other disabilities. Disabilities are as varied as the multitude of people who have them; no DRM system could possibly accommodate all disabled persons. Some accommodations make no sense; for example, an exhibit of paintings or photography will always be inaccessible to the blind. "Accessibility" is a slippery slope; there will always be someone who complains they need further exceptions. Forcing owners to provide exceptions for disabilities will only lead to increasingly costly demands for accommodations upon content providers, which would stifle their ability to provide products for mass audiences.
What legal protections DRM systems should have from those who wish to circumvent them. DRM systems should be afforded the protections available under whatever private contracts license their work, just as the law provides today. End-users who are entitled to Fair Use already have the ability to request source material from the owners.
Whether DRM systems can have unintended consequences on computer functionality. This is a design issue, not a legal or political issue. Nobody can doubt that any computer program can have unintended consequences.
The role of the UK Parliament... I abstain. Parliament is not my bailiwick.

In summary, I believe that Marks' argument is based on two fallacies, and that his conclusions are based on a political wish, not a legal or technical argument. DRM is a compromise; some people (myself included) may consider it a poor compromise, but I cannot see any technical or legal reason to burden content providers with even more ill-conceived compromises.

1 TrackBack

Web 2.0 is a veritable nebula of neologisms and clever phrasing: End of Cyberspace, InfoCloud, Disinfotainment, tvPod (these are also good posts)...

13 Comments

Nice post. Speaking as someone with very much the same political position on DRM as Kevin Marks, I'd like to thank you for making it clear that it is a political position: debate is only hampered by appealing to supposedly incontrovertible precedents. (And thanks for demolishing some bad but superficially plausible arguments - always a service!)

The Edison Cylinder example is bogus when the content is protected by a license that has to be renewed from a license server when you re-install your operating system (again). When the license server goes away, so does your content.

[Perhaps you are advocating that it is the duty of a public institution, such as libraries, to maintain the license servers after they are discontinued? That seems burdensome. In your example, the content does not go away, merely your license to use it. It could still be available through other channels. The DRM is not the content; losing the license does not mean the originals are destroyed. I advocate preserving the original content, not the DRM-encoded files. --Charles]

I thought the argument for DRM being "breakable" was that, in any case, somewhere at the end of the "content pipe" there has to be unencrypted raw data (like PCM audio or the currents that drive the pixels on a display).
I.e., you could maybe even integrate the DRM into the D/A converter, but there has to be some little spot where unencrypted data flows. And theoretically, someone could get it.
And that point where you can get the unencrypted data has just moved further down the pipe as DRM has developed until, with the new display links, it is inside the display electronics...

[Right, that's what Marks appears to be arguing. I disagree. This argument appears to be an evolution of "the analog hole": that the raw unencrypted signal appears at some point in the output stream. But there is no technical obstacle to producing a system to prevent this. For example, I can conceive of a system to encode video so the raw signal never appears at any point; it would use a convoluted interlacing system that relies on the phenomenon of Persistence of Vision; it would only be reassembled as an image in the mind of the viewer. --Charles]

Maybe I'm wrong, but as I recall there were Turing Machines (those which could execute any properly described algorithm, even those which would never end), and then there was the Universal Turing Machine which would emulate any other Turing Machine by first reading the description of the Turing Machine to be emulated.

Roger Penrose gave a wonderful description of these machines in his book The Emperor's New Mind.

I believe all our modern computers can be considered Universal Turing Machines.
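
As a rough illustration (a toy example of my own, not something from Penrose's book), here is a tiny simulator that runs whatever Turing machine it is handed as a transition table - the machine being run is just data:

    def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
        # rules maps (state, symbol) -> (new_state, symbol_to_write, move),
        # where move is -1 (left), 0 (stay) or +1 (right).
        tape = dict(enumerate(tape))      # sparse tape
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            state, write, move = rules[(state, symbol)]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape))

    # A machine, expressed purely as data, that scans past its input
    # and appends a single extra '1'.
    append_one = {
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("halt",  "1",  0),
    }

    print(run_tm(append_one, "111"))      # -> 1111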

[I think you're falling for The Turing Myth here. The set of problems that can run on a Turing Machine is extremely limited. Certainly nothing approaching the complexity of quantum cryptography could run on such a machine. But I Am Not A Mathematician, so I will defer to those who are. --Charles]

Sorry to say, this text needs technical fixes.

-----

"The Turing Thesis is much narrower, in brief, it states that any computable algorithm can be executed by a Turing Machine."

-----

Actually, this is not well expressed. As a "thesis" (a nonprovable but reasonable assumption), CTT states that any function that "we" (as humans) consider "computable" (by whatever means) can be executed not only by a human but also by a Turing Machine (TM). Conversely, as there are functions in mathematics which are considered non-computable by TMs ("most" functions in a mathematical sense are of this kind), one also considers that humans are unable to run those.

-----

"Turing inadvertently started this misunderstanding by a bad choice of nomenclature; he labeled his hypothetical computer a "Universal Machine," which we now call a "Turing Machine.""

-----

No. A TM in the technically narrow sense is a special-purpose machine executing a specific algorithm (e.g. it computes sqrt(x)). Some TMs are "Universal Turing Machines", which means they are able to impersonate/run any other TM, including themselves - for example, a UTM can run the TM that computes sqrt(x). Modern computers correspond to UTMs.

Additionally, the statement "For example, a computer cannot algorithmically produce a true random number" is true when the "computer" is the original TM (which is fully deterministic) but NOT if it is a modern PC. The PC can just measure the thermal noise across a diode or the delays in the user's keyboard ministrations. This is the difference between /dev/urandom (fast, pseudo) and /dev/random (slow, should use bits that 'come from outside'). I once wanted to kick NewScientist because they didn't get that difference either.
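
A small sketch of that difference (my own, assuming a Linux box where /dev/random exists): a seeded generator is pure algorithm, while the kernel's pool is fed by hardware and timing noise.

    import random

    algorithmic = random.Random(0).getrandbits(64)   # same value on every run

    with open("/dev/random", "rb") as pool:          # may block until entropy arrives
        hardware_seeded = pool.read(8)               # differs on every run

    print(hex(algorithmic))
    print(hardware_seeded.hex())
    # os.urandom(8) is the portable, non-blocking way to ask the OS for such bytes.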

-----

"This fundamental application of The Turing Thesis has founded a whole field of quantum cryptography, encoding methods based on incomputable physical processes, such as random decay of atomic particles."

-----

No. Quantum cryptography is based on the fact that there exist physical systems from which one cannot extract classical bits without modifying them in the process. In a TM, bits can be arbitrarily copied without disturbing the original ones of course.

DRM based on QM would still be breakable, simply because at the end of the day you have to present the result to the "user", which may be a copying machine. But that's beside the point.

[I have long wanted to revise this essay and add some slight corrections; however, I don't believe in editing one's blog post facto. It seems dishonest to bury one's mistakes; it is better to clarify and defend them. Still, grant me that my restatement of Church-Turing is at least more accurate than Marks'.
I cannot address your remarks as fully as I would like; I can merely refer you to the links in my essay that lead to a fuller explanation of The Turing Myth than I am capable of producing. That's what links are for.
Some quick remarks: Yes, one of the edits I wished to make was that I should have said "any Turing-computable algorithm can be executed to completion by a Turing Machine." But I am not sure if this more accurate restatement supports either side of the argument. Nonetheless, nothing in Church-Turing supports Marks' idea that any sufficiently powerful computer can emulate any other computer.
Additionally, I would argue that your random generator based on thermal noise is not a computing device, as the seed number is not generated algorithmically.
Your final argument on the futility of quantum crypto relies, once again, on the "analog hole." I have addressed this in a previous comment. --Charles]

I can conceive of a system to encode video so the raw signal never appears at any point; it would use a convoluted interlacing system that relies on the phenomenon of Persistence of Vision; it would only be reassembled as an image in the mind of the viewer

And I can build a system that emulates the behavior of the human eye and persistence of vision to re-create the video stream... (how about a nice simple sample and hold circuit per pixel? Floating weighted average per pixel?)... NEXT.
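
For what it's worth, here is a rough sketch of that "floating weighted average per pixel" (toy code of my own, not a real capture pipeline): an exponential moving average that integrates a flickering field sequence into a steady frame, much as persistence of vision does.

    import numpy as np

    def integrate_frames(frames, alpha=0.3):
        # Blend each incoming partial frame into a per-pixel running average.
        acc = None
        for frame in frames:
            frame = frame.astype(np.float64)
            acc = frame if acc is None else alpha * frame + (1.0 - alpha) * acc
        return acc

    # Toy usage: two complementary "fields" that each light up half the pixels.
    field_a = np.array([[255, 0], [255, 0]])
    field_b = np.array([[0, 255], [0, 255]])
    steady = integrate_frames([field_a, field_b] * 10)
    print(steady.round())   # each pixel settles at an intermediate level rather than flashing 0/255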

By the way, there are some cameras that are capable of something like that. Look up video slow shutter sync. You use it to video anything with a CRT to avoid the moving bars in the recording.

The underlying point is that as the technology and knowledge to build better encryption/DRM systems improve, so do the technology and knowledge to break those same systems. I point to two things as examples in action: the ever-increasing rate at which in-use and proposed DRM systems are being broken, and the fact that the breakers of such systems are often working blind, without even knowing the underlying algorithm (security through obscurity). These people have to figure out how the system is secured and then use that knowledge to break the system. The first part is often the hardest.

[You're arguing the old saw "any lock invented by man can be picked by man." I doubt this can be argued on any level of mathematical proof, Gödel notwithstanding.
But I still stand behind my argument. Marks argues that the decoded data can always be intercepted within an emulation of the computer. I proposed a method where the data is not presented raw at any point within the system, and thus cannot be intercepted within the computer. --Charles]

Yes it is. How would the monitor know which pixels to display, unless alongside the colour information there was also a set of coordinates telling it which pixel it corresponds to? Unless you're planning on refreshing the screen at several hundred frames per second, in which case all that needs to be done is to model how the eye would perceive each pixel.

Far from inventing a solution to the Analogue Hole, you've just come up with a very convoluted and inefficient colourspace, but one that's trivial to convert from.

Marks argues that the decoded data can always be intercepted within an emulation of the computer. I proposed a method where it the data is not presented raw at any point within the system, and thus cannot be intercepted within the computer.

If we assume that the emulated machine exists but outputs your PoV stream instead of a normal video stream, surely it would be possible to intercept the PoV stream and convert it back.

After all, the stream is derived from the raw stream using some algorithm; the basic ideas of PoV are understood and could presumably be emulated (as Ranting General says). Why couldn't an algorithm be designed that would do the reverse of what yours does?

Isn't your PoV stream essentially the same data as the raw stream, just in a different format? Both "formats" can be read by the human eye; for the scheme to work it must be possible to convert the data in one direction, so what stops it being converted in the other direction? If it can be converted in the other direction then it can be copied entirely within a computer.

You're arguing the old saw "any lock invented by man can be picked by man." I doubt this can be argued on any level of mathematical proof, Gödel notwithstanding.

I would argue that anything which can be seen and/or heard is recordable, hence DRM is ultimately futile. It's like the analog hole; perhaps one could call it the "optical/audio hole"? There is no lock to pick.

One could perhaps try to exploit the technical differences between the human eye/ear and the video/audio recording apparatus to differentiate between the two (perhaps that was what you were suggesting above) but I suspect (but cannot prove) that that approach will also fail. The human vision system is capable of many fancy tricks, but none of them are necessary for making a copy of a moving picture. As for adding DRM to audio files, that makes even less sense. A good speaker pointed at a good microphone will replicate music well enough for most people to buy it. I can't tell the difference between an orchestra and a CD-quality recording of one, can you?

This is a good criticism of a poor argument. However, you miss one crucial point about DRM systems (at least, I don't see it addressed).

DRM systems not only enforce copyright - they exceed it. Now, it is true that a purchase of DRM-protected material constitutes a private contract, but consider that this is an implicit, not an explicit contract, and that it is entered into blindly and usually without informed consent. That is, the consumer is not informed that this product will reduce their legal rights. Indeed, I have never seen a proper contract (or any contract at all) on a DRM-protected CD or DVD. Consider also that no contract is enforceable unless explicitly assented to - and shrink-wrap "contracts" are of dubious legality.

But the main point is that, in the USA at least, the right of fair use is explicitly granted by copyright law - end users are explicitly allowed to reproduce portions of the material for certain purposes, including review and satire. The use of DRM actually removes these government-mandated rights. As such, the introduction of DRM by private interests (for that is what the media corporations are) is here trumping the laws of the democratic government.

Now, whether one thinks that DRM is a good thing or not, this intrusion of corporations into government is surely a serious concern.

[Thanks for these comments; they seem to address a different argument than the others have. I will argue that the contract is implicitly encoded in the software and hardware player. We stick a DVD in the player and immediately learn that we get an FBI warning, probably movie trailers we cannot fast-forward through, and that we cannot copy the disc without extraordinary measures, etc. We all know this, or at least quickly learn the implications of the contract we have entered into. You may call this an ex post facto contract, but only the first time you play a DVD. Whether those contractual terms are burdensome or illegal is a political issue. --Charles]

A "convoluted interlacing system that relies on persistence of vision" sounds a lot like the simple pull-down systems that have been used for many years to convert progressive film to interlaced video. The IVTC process is well-understood and easy to apply by anyone with a modicum of knowledge. A more complicated pull-down process would merely require a more complicated form of IVTC. So I don't think your hypothetical idea carries much weight with regard to preventing the reconstruction of raw content.

The one issue that you have skated over here is that DRM attacks are not the same as attacks on other forms of cryptography. The classical 'third man' attack refers to a system in which the attacker has neither the content nor the key. An attack on DRM is a 'second man' attack, in which the attacker has the key, but is prevented from using that key in the way that is desired. This prevention is enforced by a functional algorithm (which falls under the remit of a Turing Machine). This is why Marks' paper, while committing an error in terms of global computational principles, is correct with regard to the impact on DRM.
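
As a toy illustration of the 'second man' point (a sketch of my own, with XOR standing in for a real cipher purely for brevity): the player has to carry the key to render the content at all, so whoever controls the player environment already holds the key, and the only remaining barrier is a policy check.

    PLAYER_KEY = b"sixteen byte key"          # hypothetical key baked into the player

    def xor_with_key(data: bytes) -> bytes:
        # With XOR, encrypting and decrypting are the same operation.
        return bytes(b ^ PLAYER_KEY[i % len(PLAYER_KEY)] for i, b in enumerate(data))

    def player(ciphertext: bytes, playback_allowed: bool) -> bytes:
        # The DRM policy lives entirely in this check, not in the mathematics.
        if not playback_allowed:
            raise PermissionError("licence does not permit playback")
        return xor_with_key(ciphertext)

    ciphertext = xor_with_key(b"the feature presentation")
    # The attack is not breaking the cipher but sidestepping the policy check,
    # which anyone running the player under their own control (or inside an
    # emulator) can do, because the key was present all along.
    print(xor_with_key(ciphertext))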

There's a good reason that Marks talks about MAME: a hypervisor attack is the ultimate mechanism for disrupting a DRM system. No current DRM system has a defence against a well-crafted hypervisor attack, and I find it unlikely that any future system would discover one. Note I said 'well-crafted' - it's possible to make hypervisor attacks difficult, but not impossible.

It's not surprising that the attacks on the latest forms of DRM (AACS and BD+) have been relatively simplistic. Obviously attackers will seek to disrupt the DRM through the easiest means possible. But there exists ample motive for the attackers to escalate the quality of their efforts if it becomes needed.

[OK, I see what is happening here, and I've fallen right into the trap. You guys raise an objection; I tack on another apparatus to a hypothetical DRM scenario, which only opens another avenue of attack. Then I can repeat the cycle ad infinitum. This is an old game I should know better than to play. Perhaps it is best expressed (albeit rather cryptically) by Hofstadter's essay 'Edifying Thoughts of a Tobacco Smoker,' a competition between Crab's Record Player X and Tortoise's record "I Cannot Be Played On Record Player X." Crab creates improved Record Player Y, then Z, and on to Omega, while Tortoise defeats him every time. This is a fool's game, which can literally go on to infinity. This game is useless rhetoric.
DRM opponents in this thread have sought to widen the scope of my argument, and I have fallen for it. I seek only to narrow the argument, focusing on Marks' incorrect understanding of Church-Turing. And others in this thread have explicitly described Turing-incomplete processes that are already used in DRM; one need only require external input (such as a license server) to make an algorithm Turing-incomputable. Even if Strong Church-Turing were true, it would not imply that a Turing-incomputable process could always be emulated, let alone provide an opportunity for interception of the final output. --Charles]

Even if the contract is implicit in the hardware or software, it's still an implicit contract, not an explicit one. In other words, it is not a contract that the end user has explicitly assented to. As such, it's not legally binding - you can't sneak contracts past people without telling them that the contract even exists, let alone what the terms and conditions are.

And, yes, this is a legal and political issue. That's the point. DRM may or may not be technically feasible to a given degree of protection. But whether it's legal to use it or to get around it is the important question.

Quantum cryptography is not cryptography at all, and it has nothing to do with incomputable algorithms. It's a method for intrusion detection on an optical fibre link. Eavesdropping results in destruction of the signal, which can be detected by the intended receiver, who can signal to the sender that the transmission should be halted.

Unfortunately "quantum interception detection" is not a very catchy name, so some people are left with the impression that the technology involves some sort of magic quantum device which encrypts the signal in such a way that only another magic quantum device can decrypt it. Receiving the signal is just a matter of setting up some optical gear and a sensitive light detector. The entire process, including intrusion detection, can be simulated on a computer to arbitrary precision.

[Thank you for your very interesting input. But perhaps we are dealing with the same issue from different angles. Marks argues that Strong Church-Turing is true and thus a signal can always be intercepted by an emulator. I disagree, and cited mathematicians who say Strong Church-Turing is a myth; regular Church-Turing does not imply that any given computer program can even be emulated by another, let alone present an opportunity for intercepting a decoded DRM'ed signal. Then you describe Quantum Interception Detection as a method that could make it impossible to intercept a DRM'ed signal without destroying it. This is what I understood about Quantum "cryptography," and why I invoked it. I interpret your remarks as supporting mine: Quantum Cryptography is beyond the realm of Church-Turing, so appealing to Strong Church-Turing as an inviolate law, as Marks has done, undermines his fundamental argument. --Charles]
