Mirages don't win polls

Australian Financial Review

10 February 2005

Among the great cliches of the early Internet age was the claim that the age of text was on the way out, to be replaced by a Web-based age of multimedia. Closely associated with this was the commercial project of “convergence”, the idea that TV, telephones and computers would all merge into a single package. Meanwhile conventional forms of literacy would become largely obsolete, replaced by an ill-defined notion of “digital literacy”. Prophets of the multimedia age, like Nicholas Negroponte, became global celebrities.

A few sceptics objected that all this was little more than warmed-over McLuhan, and that the Internet was essentially a text-based medium. But the 1990s were an age of belief, and sceptics were not welcome. The death of text is still being predicted. In a recent, much-cited essay on “the magic of images”, Camille Paglia repeats the standard McLuhan line on the dominance of images. At the beginning of the 21st century, she can scarcely ignore the computer completely, but she treats it exactly as if it were a supercharged TV set, saying:

“The extraordinary technological aptitude of the young comes partly from their now-instinctive ability to absorb information from the flickering TV screen, which evolved into the glassy monitor of the omnipresent personal computer. Television is reality for them: nothing exists unless it can be filmed or until it is rehashed onscreen by talking heads. The computer, with its multiplying forums for spontaneous free expression from e-mail to listservs and blogs, has increased facility and fluency of language but degraded sensitivity to the individual word and reduced respect for organized argument, the process of deductive reasoning. The jump and jitter of U.S. commercial television have demonstrably reduced attention span in the young. The Web too, with its addictive unfurling of hypertext, encourages restless acceleration.”

Paglia’s assertions about “degraded sensitivity to the individual word” are (as is often the case) not backed up with either argument or evidence. What is more significant, however, is the fact that the examples she cites (email, listservs and blogs) are almost exclusively text-based, relying less on images than do most printed books. Yet she somehow assimilates them to an argument about images and talking heads, simply because they are viewed on a monitor rather than on paper.

Meanwhile, commercial attempts to promote convergence have continued, notably including a recent Microsoft announcement of a version of Windows XP for PC-connected televisions. Enthusiasm for such attempts has, however, been dampened by the hundreds of billions of dollars lost on such ventures during the dotcom boom. In particular, attempts to make money out of the provision of “content” on the Internet have failed, and, in most cases, failed miserably.

Advocates of convergence assume that the problem is primarily one of technological limitations, and this is still an important factor. A picture may be worth a thousand words, but in computer terms it takes up the same space as a hundred thousand. The cost difference is even more dramatic with video. A few minutes of talking-head video, with perhaps two hundred words of information content, can take up the same space (and require the same transmission time) as a shelf of books. The time-lag and bandwidth charges associated with downloading video discourage most Internet users from relying primarily on this source of information. Over time, however, the steady reduction in the cost of computing and communications has eroded the importance of this factor and will continue to do so.

But the focus on download costs has distracted attention from the more fundamental and durable problem of differences in the cost of producing material. A single minute of an average Hollywood movie costs hundreds of thousands of dollars to produce. Even talking-head news and current-affairs items are expensive: producing a five-minute interview will typically take an hour or more of work from several people. The production of video at anything above home-movie standard requires lots of time, technical skill and expensive equipment. Improvements in computer technology have done little to change this.

For most of the 20th century, the high cost of producing video was offset by the availability of cheap and instantaneous distribution through broadcast and cable TV networks. With a relatively limited number of channels, the high fixed costs of producing video could be spread over a large number of viewers. The primary problem was one of maximising the value of scarce broadcasting channel capacity.

The scarcity of channel capacity was greatly reduced by the advent of cable and satellite TV. With 50 or so channels, there is already more channel capacity than there is broadcast-quality content to fill it (I’m using quality here to refer to production quality rather than to any notion of merit). Devices like TiVo increase effective capacity even further by making it easy to reschedule viewing (videotape already allowed this, but it was too cumbersome to be used routinely by most people for this purpose).

The advent of the Internet has expanded channel capacity, but hasn’t done anything about the cost of content. Its most alluring promise, as far as video is concerned, appears to be the capacity (still not fully realised) to dial up our favorite episodes of Leave It to Beaver or Gilligan’s Island whenever we choose. Video rental stores have already explored this market and have found that, except for a few cult shows, the demand for TV reruns is too small to be profitably served.

The only areas where the end of the bandwidth constraint is likely to make a big difference are those where social and legal barriers constrain the competition. The two big examples are pornography and file-trading. In both cases the anonymity of the Internet is its chief attraction.

Technical progress has also undermined one of the principal assumptions underlying the convergence hypothesis, the assumption that people would like to use the same screen for TV and Internet. The implicit assumption seems to be that screens and monitors are expensive capital items on which people want to economise. This was in fact true a couple of decades ago, when TV sets were commonly used as computer monitors, and it is still true for games consoles. But the average middle-class household these days contains at least two TV sets as well as one or more computers. Given the different characteristics of the media and the different ways they are used (sitting up close to a computer as opposed to couch-based remote control for a TV set) the use of a single screen for both seems like a pointless compromise that will perform neither function well.

In summary, contrary to the predictions of Negroponte and others, the rise of the Internet has done almost nothing for video or multimedia, and is unlikely to do much any time soon.

By contrast, the Internet has made a huge difference to the distribution of text, by liberating it from the confines of print. Academics, who pioneered both the Internet and the Web, were the first to benefit. Bulky, inaccessible and often out-of-date reference volumes were replaced by instant on-line access to databases. An academic journal process that took a couple of years to publish articles was challenged and, in some fields, supplanted by preprint distribution networks that take seconds. In the process, the universities avoided the looming disaster associated with steadily rising subscription costs for academic journals.

The same dynamic has gradually worked through the entire public sector. Whereas the microeconomic reforms of the 1980s had begun a process of seeking full cost recovery for government publications, the movement to online government has reversed the process. A far wider range of government publications than ever before is available with easy access and, in nearly all cases, free of charge.

The private sector was last to the Internet party and has had the most trouble in finding a workable model. While dreams of vast profits have faded with the dotcom boom, advertising revenue seems to be sufficient to support an online edition for most major newspapers and magazines, as well as a handful of purely online publications such as Salon.

In all of these applications, text, rather than video, audio or Flash animation, has been the dominant medium. Compared to earlier computer-based media, the Web allows for far more attractive presentation of text, comparable with the best of printed media, and supported by accompanying (mostly static) graphics.

These presentational benefits are, however, secondary to those that are now emerging. The Internet is liberating text in the way envisaged decades ago by Ted Nelson, who coined the term “hypertext” (HTML, the basic language of the Web, stands for Hypertext Markup Language).

Text, unlike video, is an inherently nonlinear medium. A book or a newspaper can be skimmed or browsed, read in many different orders. But the nonlinearity of text has been constrained by the limitations of print. The academic article, with its array of footnotes, cross-references and citations, is an elaborate attempt to surmount these problems. Hypertext allows footnotes that are as long as the writer desires, cross-references that can be checked instantly (and in parallel with the point in the text where the reference is made), and replaces citations with immediate access to the material being cited.

It is, as Camille Paglia observed, easy enough to be seduced by the jump and jitter of hyperlinks. While reading a page on, say, Nelson Mandela, you can jump to a description of the main tribal groupings in South Africa or of cultural changes in the townships. Then, if you are sufficiently disciplined, you can return to the original page. Alternatively, you can wander off into pages on world music or anthropology (with graphics, sound and maybe video clips, but still organised around text).

Nothing like this is feasible with video. A string of loosely connected video clips makes, at best, a music video or an art film and, at worst, a mess. Admittedly, the best multimedia artworks can give the viewer a feeling of free movement while maintaining some degree of coherence. But the effort involved in constructing such works is immense, and the freedom of movement is illusory compared to that of hypertext.

More importantly, hypertext offers other possibilities more friendly to organised argument and deductive reasoning, many of which are illustrated by the rise of weblogs (“blogs”). Blogs serve many purposes, from online diaries to open-source software projects. But the bloggers who have attracted the most public interest are those engaged in political and cultural debate. These online debates show all the passion and enthusiasm of the dorm-room discussions fondly remembered by Camille Paglia from her college days in the 1960s. Just as in college dorm rooms, the quality of debate is highly variable, with posers and pointscorers often outnumbering those with a genuine passion for the issues at hand.

But the blog format offers lots of discipline that is not present in a college bull session, or even, to a large extent, in traditional print media. An erroneous or dubious factual claim can elicit an instant refutation, either as a comment on the blog in question or in a critical post on another blog. Bloggers have learned to be wary of the kind of factoids that can circulate in popular discussion for years, and can have a lengthy run in traditional media. A classic example is the assertion that “Al Gore claimed to have invented the Internet”. An anti-Gore blogger of any prominence who repeated this myth would be deluged with comments and links exposing its falsehood. By contrast, repetition of such a myth in a magazine or newspaper article might eventually produce a letter of objection and perhaps eventually a retraction.

Also, blogs are a more-or-less permanent record (blogs can be deleted, but others may already have archived the relevant pages). So it’s much easier to be confronted with your own words and asked to explain inconsistencies with your current position or the failure of predictions to materialise. Although the human reluctance to admit error is ever-present, bloggers who want to maintain credibility are forced to make such admissions from time to time, something most traditional pundits almost never do.

Finally, there is the possibility of a ‘McLuhan moment’, similar to the scene in Annie Hall when Woody Allen produces McLuhan from behind a screen to refute the moviegoer next in line who is pontificating on his theories. It regularly happens that a discussion of the views of some prominent authority will attract the attention of the person concerned, and lead to a direct intervention in the debate. For those inclined to dubious arguments from authority, this is a frightening prospect.

It has been said of blogs that “never has so much been written by so many to be read by so few”. But this is the reaction of broadcast mass media to an interactive discussion that is outside their frame of reference. The fact that it is now possible to publish what is, in effect, a personal magazine for a few hundred readers, or a few dozen, or even just for yourself, is a sign that we are entering the golden age of text.

John Quiggin is an Australian Research Council Federation Fellow in Economics and Political Science at the University of Queensland.