The library within us

[What follows is my prepared talk for this week's Books in Browsers conference, taking place for the third time at the Internet Archive in San Francisco. It follows and hopefully builds on two earlier talks, "Context first" and "The opportunity in abundance", that also got their first hearings at the conference. I'm grateful to the Internet Archive for the opportunity to return this year. A video of the presentation is available on the O'Reilly web site. An expanded and somewhat edited version of this talk, "Community organizers", was given at a November 7 TOC event in Charleston and is also posted on this blog.]

Last spring, when Peter Brantley and Kat Meyer started to collect ideas for presentations to include in this year’s program for “Books in Browsers”, I sent along a description of what at the time was admittedly a loose idea: “The library within us”. In it, I claimed that:

It’s time to think about content, not as a product or a service, but as a vehicle to reach an outcome. Literacy is important as a step toward informing and empowering individuals, groups and communities, but on its own it is not enough. As reading experiences become both portable and increasingly universal, we need to reshape our sense of publishing and build "the library within us": a collection of tools and resources that individuals can draw upon to connect with and change the world around us.

Okay, utopian, but you can cut me some slack on that front. I grew up in Massachusetts, proudly the only state to vote for George McGovern in 1972. Though it was not his style, may Senator McGovern enjoy a last laugh in his new home.

But I think the idea can be more than utopian. Marc Andreessen, who has bet on five pretty big ideas in the last 20 years, recently told Wired:

“The Internet has spread to the size and scope where it has become economically viable to build huge companies in single domains.”

We typically think of these “huge companies” as intermediaries – the platforms that now dictate the landscape for digital distribution. But I believe we’re at a point at which we can do more than just send our content to aggregators. To get there, though, we’re obligated to reframe our models and focus on the needs of individuals.

In at least one sense, this was not a new idea for me. I’m on the record as being somewhat skeptical about the long-term efficacy of containers, or at least of containers as a starting point for content. In an era of content abundance, we’re going to have to compete on more than just availability.

And, the idea of content as a service is already moving along reasonably well, at least in some quarters. Just recently, when Springer announced an interface redesign, it described its goals as “speed, simplicity and (customer) optimization.” Those don’t sound like container qualities to me.

In fact, they are qualities consistent with the underpinnings of the “lean consumption” model that James Womack and Daniel Jones described in 2005 for the Harvard Business Review. I’ve invoked their work before, most directly in the initial drafts of “Context first”. When that presentation grew too long for its own good, I deleted all but a passing mention of the source, but I think it’s worth returning to their six principles or characteristics of lean consumption:

  1. Solve the customer’s problem completely by insuring that all the goods and services work, and work together.
  2. Don’t waste the customer’s time.
  3. Provide exactly what the customer wants.
  4. Provide what’s wanted exactly where it’s wanted.
  5. Provide what’s wanted where it’s wanted exactly when it’s wanted.
  6. Continually aggregate solutions to reduce the customer’s time and hassle.

If we apply these principles to traditional publishing, we can start to see why market power has shifted from providers to networked platforms.

Before Amazon introduced the Kindle in 2007, there was no interoperable solution for digital content. Fearing piracy, publishers favored DRM (its own story) and took little interest in the generally miserable experience of readers who tried to read books on early e-reading platforms.

That people bought digital books and struggled to load, read and maintain them on third-party systems was of no consequence to publishers. The market was small, fragmented and of little interest to trade publishers.

During the last decade, some professional publishers did try to organize content to better match the needs of their client base. Bloomberg and Thomson Reuters maintained their own platforms, deepened customer relationships and over time increased their returns.

Publishers who wanted to provide what customers wanted, where and when they wanted it – in professional terms, “as part of workflow” – also joined up with companies like Silverchair, which works with scientific, technical and medical publishers to deliver information in ways that make it useful and readily accessible to those publishers’ clients.

Still, most publishing activities remain focused on the development and dissemination of containers. Whether they are physical or digital, they remain publisher-determined, relatively immutable and almost always one-way. To the extent that systems exist to change this dynamic, they have been developed by third parties, most of whom have to beg for access to published content.

Hugh McGuire has written that the book and the Internet will soon merge, and I agree, but I wonder if any traditional publishers will be part of the parade.

Over the past decade, Clay Christensen, perhaps the person most closely associated with the ideas behind disruptive innovation, has been exploring the idea that consumers “hire” products and services to fulfill an identified need. This solution-based approach to creating and delivering results ultimately breaks apart prevailing media business models, many of which are more or less container-driven.

There’s no stopping consumers from hiring the products and services they want. There are jobs to be done, problems to be solved, needs to be filled. If we’re going to be disrupted, though, it would help to see it coming and develop a few promising alternatives.

As Peter Brantley recently wrote, “The embedding of books in a networked environment is something we have not seen before.” In his view, it presents opportunities and challenges at the level of an individual reader: what platforms to support, how much access to provide, and what levels of privacy to expect or demand, as examples.

But it also gives content providers a new and wholly transformative opportunity to meet the needs of a market that does not yet consciously exist. As I said last year, we need to become more outcomes-based. Continuing to create and distribute static books, in whatever format you choose, is a strategy that is likely to result in declining prices, lower margins, less income to reinvest and … you get the idea.

Alternatives start with data-gathering. Companies like Innosight, which consults with businesses facing disruptive innovation, make three recommendations:

  1. Understand the criteria customers apply in choosing between solutions. There are plenty of assumptions about why people acquire content, and there is even some reasonable data about book-buying behavior. But there’s precious little data about content-consumption behavior. To the extent that it exists, the data is often held captive by platforms whose interests align only loosely with those of the people producing that content.
  2. Pinpoint an important job that isn't being done adequately. This can include things that I once called “the consequence of a bad API”: workarounds, compensating behaviors and expressed dissatisfaction with products and services.
  3. Unlock markets by eliminating barriers for customers. Work done by NISO to create and hopefully implement single sign-on standards is a good example here.

A year ago, I was optimistic that publishers and supply-chain partners would soon see their mutual need for a data-driven reconsideration of why publishing exists and the purposes it can serve. I’m no longer optimistic.

A year spent wrangling over the role of libraries, a year spent kicking the can down the road with respect to the use of DRM, a year spent fostering the idea that we really have embraced “digital”: these things and more have convinced me that the “opportunity in abundance” will not accrue to the incumbents.

This became all too clear to me last summer. In January, I had made the somewhat ambitious pledge to “post something useful every day”. By June, 180 or so posts in, the optimism well had run dry. I just didn’t believe my own hype any more.

Around that time, Helmut von Berg, who works with Klopotek to plan its annual conference in Berlin, asked if I might be interested in developing a short address about “networked publishing”, to be delivered at the Frankfurt Book Fair. My first question was (perhaps naturally) “What do you mean when you say ‘networked publishing’?”

As ever, Helmut was very prepared, and soon I was swimming in a stack of documents that I found informative, challenging and a bit daunting. Here’s a selection of some of the things Helmut had written:

“Traditional publishing is consequently a ‘gatekeeper-defined culture’, while that of networked content provision [is] a cultural network.”

And …

“These content units, whether we call them items, entities or chunks, must be prepared, so that they can be found and used in accordance with expectations.”

And …

“If we go a step further … we discover that we are dealing with two different areas with regard to content: firstly with creation and preparation and secondly with distribution and use. The first can perhaps best be summed up as ‘content clusters’, the second as ‘market’ … Market power is created not by sales strength, but by the quality of usability in non-predetermined user environments.”

Pretty good stuff there. It made me wonder if I could avoid the work and just ask Helmut to deliver the keynote. I didn’t ask him, though, because his thinking, along with that of others whose work he had sent me, started to restore my native optimism for things publishing.

Now, we know that publishing has always been networked, in the sense that getting your book or article published depended on who you knew. Quality mattered, or at least it helped to differentiate the work of an author, but the economics of publishing necessarily fostered what Helmut aptly described as a ‘gatekeeper’ role.

So, the interesting thing about ‘networked publishing’ is not just the fact that publishing is networked, but which networks are now dominant.

As the Nieman Foundation’s James Allworth noted, “If history is our guide, the platforms do gain an edge” when the business models change. You know the short list: Amazon. Apple. Barnes & Noble. Kobo. On a clear day, Google. With respect to digital containers, the platform ship has sailed.

Some publishers have turned their thinking to focus on how they might build platforms of their own. If the goal is to create a way to distribute eBooks directly, I say, good luck with that. What we need to do starts with the recognition that a platform is something to build on, not just something to build.

To illustrate: I typically think of authoring, repository and distribution as the three primary functions that underpin any sort of publishing.

Although there are plenty of new tools available for authors to use, “authoring” itself is not that much changed. An idea must at some point be turned into a work of interest (or non-interest), suitable for distribution to interested parties. But barriers are now lower, and authoring has been democratized.

“Repository” once meant things like plates, then film, then files. Old economics dictated the “minimum viable product”, typically a book, as the package that could be created and sold. Now, the “minimum viable product” can be a book, a chapter, a component, an extract, a snippet – anything that can be monetized, as well as some that can’t, or won’t.

This is a sea change from an era that ended in the last decade. New skills are required to manage content at a much different level.

To manage the new repository, many publishers, particularly larger ones, have invested in a plethora of systems, making what are typically large-scale, specialized investments that can be difficult or expensive to maintain.

These investments were made before an era in which cross-platform data mining was considered a competitive weapon. Highly specialized systems perform better for certain uses, but they don’t always play well with others. The ability to look broadly is critical: as a Google executive recently said, “We don’t have better analytics. We have more data.”

These sizable investments included spending on ERP and related IT systems. That spending paved paths – sometimes, it paved cowpaths – and made it harder for traditional publishers to adjust to a world in which the minimum viable product might not have an identifier at all.

Now, services are increasingly available in the cloud. Think about the evolution of backup systems. These were once conceived of as tapes or disk arrays, locally situated, with periodic shipments to secondary, offsite locations. Today, with significant bandwidth, many computer backups take place as hourly background updates, beamed to a location most users could not name. The service prevails.

In 1998, then-Wired executive editor Kevin Kelly wrote “New Rules for the New Economy: 10 Radical Strategies for a Connected World”. Almost 15 years later, the book remains a worthwhile read. In it, Kelly says several things that are truer today than they were when he first observed them:

  • “Value is carried by abundance, not scarcity, inverting traditional business propositions.”
  • “As networks entangle all commerce, a firm’s primary focus shifts from maximizing the firm’s value to maximizing the network’s value.”
  • “As innovation accelerates, abandoning the highly successful in order to escape from its eventual obsolescence becomes the most difficult and yet most essential task.”

Think about these things for a moment. “Value in abundance” – clearly, proprietary platforms that offer content from a small set of providers are at a disadvantage. “Maximizing the network’s value” favors those who can cost-effectively use existing tools to address a market need.

And “abandoning the highly successful” … well, that’s why I lost my optimism.

But it is the first of his 10 rules that is most sobering for publishers considering a networked reality: “As power flows away from the center, the competitive advantage belongs to those who learn how to embrace decentralized points of control.”

Kelly’s thinking here starts with work done in the mid-1980s by Saltzer, Reed and Clark, who wrote “End-to-End Arguments in System Design”. They observed: “The intelligence that matters most exists in boundless variety at the ends of a network, rather than in the mediated systems in the middle”.

They went on to claim: “Therefore, network protocols should be designed primarily as means for those ends, rather than to serve the parochial interests of intermediary operators.”

If I could translate loosely and simply: the “ends of the network” are readers, information consumers and recombinant opportunities – communities of identified and latent content requirements.

For our purposes, the “mediated systems in the middle” are traditional publishers, though they are also the telcos and ISPs who face similar, perhaps equally daunting challenges.

And the “network protocols designed primarily as means for those network ends” … that’s what we got with the Internet. That’s our Sea of Stories.

This is why we struggle with networked publishing: we want to apply our models to the network, and what we need to do is apply the networked models to our business. At the level of a user, networked publishing tells a story. It can be utilitarian, aspirational, inspirational or reflective, but it is not constrained.

A shift to networked publishing lowers barriers to the creation of content, but it amplifies the return for content providers who can leverage two-way communication and create, refine and evolve content products around the needs of the readers they serve. Rather than focus on filling shelves full of books, in physical or digital form, we can open up new markets by filling those shelves with solutions.

Some of those solutions will remain what we have come to know as books, but many more will be conceived, developed and delivered in forms and for purposes that we have yet to fully grasp. If "agile" affords an opportunity to improve discovery, it also supports the ability to deliver what Helmut von Berg called “the quality of usability in non-predetermined user environments.”

We are already evolving from a world in which we decided what would be published, toward one in which we think about how what is published will be discovered, evaluated and consumed. If this is a future we want to enable, we have to be mindful in how we go about it.

Waiting until the market shows itself is a high-risk strategy. We need to prepare for the networked present in at least three ways:

Standards. Beyond product-level identifiers, we'll need a much more robust and extensive use of internal tagging. RDF, ISNI and ISTC provide some examples, but we need greater clarity to guarantee access and interoperability.

Structure. If we’re serious about creating, managing and delivering a minimum viable product that meets market-determined requirements, we are going to have to develop, partner on or adapt to systems and structures that make content acquisition and monetization possible at levels more granular than most publishers have ever considered.

Sense. Because success no longer emanates from a series of well-planned, top-down efforts, publishers will need to develop a market understanding that helps them prepare for and address consumer needs that are not yet articulated.

These initiatives demand broad-scale change, but not universally. Authors and customers are not the ones at risk. Rather, publishers and in some cases technology vendors face the greatest threat, as authors and consumers develop their own coping strategies around slower-moving content providers.

Investing in discovery standards, content structure and the maintenance of a market-facing (rather than a product-centered) sensibility are hedges toward long-term sustainability.

The traditional functions of a publishing repository – preparation, management and monetization of content – will need to be maintained at a much greater level of detail and supplemented by a new skill that may be available only to those with sufficient scale: market insight. Without it, authors will have much more reason to sell directly.

Fixed-format sales – physical and digital containers – will persist, but prices will likely decline, with margins shifting to publishers who can monetize the components of a customer-valued minimum viable product.

At the start of my talk, I invoked Marc Andreessen and his view that “the Internet has spread to the size and scope where it has become economically viable to build huge companies in single domains.” I alluded to five bets that Andreessen had made in the last 20 years. At a high level, these are his bets:

  • “Everyone will have the web” (1992)
  • “The browser will be the OS” (1995)
  • “Web businesses will live in the cloud” (1999)
  • “Everything will be social” (2004)
  • “Software will eat the world” (2009)

I don’t think it’s hard to see what happened with the first four ideas. By 2015, we expect that half the world’s population will be connected to the web. At this conference, in particular, I need not make a case for browsers.

Cloud computing, the impact of social – these are givens.

Which brings us to Andreessen’s most recent bet: “Software will eat the world.” His idea reminds me of something Richard Nash asked in Boston last week: “What if the book is the algorithm?”

We have the tools now to serve widely dispersed, networked audiences in ways that would never have scaled in an earlier era. These tools are not products; they are vehicles to reader-valued outcomes.

As Kevin Kelly claimed, “abandoning the highly successful in order to escape from its eventual obsolescence becomes the most difficult and yet most essential task.” It’s hard to imagine that this is the week you need to decide, but … maybe it is.

In his recent book, “The Intention Economy”, Doc Searls captured the principles of network design in three simple statements:

  • Nobody owns it
  • Everyone can use it
  • Anybody can improve it

Think about those ideas for a moment. For decades, perhaps centuries, the primary platform for publishers and their supply-chain intermediaries relied on the ability to exclude. Now, we’re starting to see the dominance of a platform that includes everything and excludes nothing. In return, we get access to global communities and the ability to meet latent desires.

We can “pre-empt and co-opt”, resisting the change and buying time, perhaps with some short-term wins. Or we can learn the new rules and prepare for the opportunities inherent in networked publishing.

I hope we do the latter, because there are plenty of boneyards we don’t want to end up in.

About Brian O'Leary

Founder and principal of Magellan Media Consulting, Brian O’Leary helps enterprises with media and publishing components capitalize on the power of content. A veteran of more than 30 years in the publishing industry and a prolific content producer himself, Brian leverages the breadth and depth of his experience to deliver innovative content solutions.