"When will we STOP talking about metadata?"
I wrote last week to describe the metadata workshop that Laura Dawson and I presented on Thursday at the Frankfurt Book Fair. Laura and I were also invited by Mark Dressler to spend a half hour on Thursday answering questions tied to the theme, "When will we STOP talking about metadata?"
Dressler had organized a series of public discussions as part of the Sparks Stage program that took place in Hall 8. The program featured people talking about topics that were generally more interesting than metadata, but Laura and I gave it a go.
The audience for that session included Publishers Weekly's Calvin Reid, who captured much of the conversation in his piece, "Metadata: The Whack-A-Mole of Book Discoverability". In the title, Reid picks up on a phrase that Laura used to explain how hard it is to fix metadata once it is out in the universe.
Over the course of the half hour, Laura and I were able to return to several points made during the morning workshop:
- Content abundance makes complete and relevant metadata a vehicle for discovery and evaluation
- Metadata will only grow in importance over time
- Updating metadata will become a competitive weapon and a process that does not end
- Roles will blend, and everyone involved in creating, managing and selling content will have a role in managing metadata
I was lucky to have a couple of chances in Frankfurt to work with Laura on this topic. Before the Sparks Stage interview, I described Laura as "the Queen of Metadata; I'm something akin to the Crown Prince". We've worked together a number of times on projects that included the use of XML in book publishing and workflow redesign for a smaller education publisher, and I learn new things every time we connect.
I wonder, Brian, how you see SEO and the Google Panda and Penguin updates to their algorithms as influencing the role of metadata. It seems to me, as metrics of engagement with content are weighed into what is returned by search terms, that maintaining robust metadata *may* be only part of the challenge.
Thanks; you’re asking a useful question that Laura Dawson, in particular, is giving a lot of thought to. She might weigh in here or on her own blog, but from my perspective:
- algorithm changes are a given; the Googles of the world are always going to be tweaking
- to the extent that the implications are disclosed (Google favoring rich snippets marked up in RDFa, a variant of RDF, for example), content producers, including book publishers, need to conform (a minimal sketch follows this list)
- independent of what the search engines say they seek, complete title-level metadata (particularly ISBN) is a useful way to promote titles
- it’s likely that approaches that tag components of longer-form content, structurally as well as contextually, will help highlight content that may have been missed
- while search is an important consideration, it’s not the only one; the utility of content in user-managed systems will depend on metadata that may not directly drive discovery but does improve relevance or value
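To make the snippet bullet concrete: below is a minimal sketch of what book markup on a publisher or retailer product page might look like, expressed as RDFa against the schema.org vocabulary. The title, author, and ISBN are invented for illustration; the exact requirements depend on what the search engines publish.

```html
<!-- Hypothetical product-page fragment: a schema.org Book expressed in RDFa.
     All values below are invented for illustration. -->
<div vocab="http://schema.org/" typeof="Book">
  <h2 property="name">A History of Bread</h2>
  <p>by <span property="author">Jane Example</span></p>
  <p>ISBN: <span property="isbn">9780000000000</span></p> <!-- title-level identifier -->
</div>
```

A search engine that parses markup like this can, in principle, display a richer result and tie the page unambiguously to the title.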
I agree with you that maintaining robust metadata is part of the challenge, not all of it. What publishers can’t do is assume (or advocate) that it is someone else’s role to maintain metadata, understand search, etc. I think it is migrating to a core competency for publishers.
Agreed with Brian - because of the sophistication of search, and the fact that it is forever evolving, doubling down on describing your content really well is turning out to be a primary publishing function. In other words, if you’re not publishing effectively, you may as well not be publishing at all. If you’re not drawing attention to your work, then it is by definition private.
Metadata standards will change and evolve, but it’s wholly safe to say that by structuring your content in some way, and by indicating which concepts are most important, you can adapt to whatever evolution brings us.
I just had the opportunity to view an application that uses search but is not itself a search engine, which compiles books from publicly-available sources. Structured, linked data offers this application opportunities to gather not just more content, but more APPROPRIATE content for what the user wants.
Structure and context are not diametrically opposed - publishers have to be good at both if they’re going to survive.
Thank you, Brian. That’s helpful.
Thanks, Laura. I’m particularly interested in how the quality of engagement plays a role in discoverability, as that strikes me as a profoundly relevant metric. How do you see that in particular influencing the metadata that accompanies the book’s content?
This is really interesting, because there are so many systems that sort-of-but-don’t-really talk to each other. What it boils down to is the web is a text medium, and text can be expressed in many, many ways. And it’s a matter of getting systems to understand that “this thing is the same as that thing” and “this thing is not the same as that other thing”.
We tried that with BISAC and BIC categories, but they are really not granular enough to express meaning for discovery purposes - they are, unfortunately, all we’ve got. And there are mappings between the two schemas. Painful mappings where meaning gets lost. This happens a lot as well on retail sites - they get a data feed from a publisher and have to map the BISAC categories to their own. And things show up in the wrong place.
As content gets atomized, and entire texts are available for search all over the world, it’s becoming clear that the metadata won’t necessarily accompany the book but that the book itself will be marked up semantically. Which, you know - if a bunch of publishers can’t even agree on what a “pub date” is, imagine trying to find agreement on a structured vocabulary. And translating that vocabulary internationally.
But it’s necessary to have interoperable standards. I’ve kind of abandoned the “one standard to rule them all” approach in favor of Tim Berners-Lee’s view that small, flexible, interoperable bits of text (tags, http itself) and a lot of input from EVERYONE (publishers, consumers, retailers, librarians, critics) will build a better book.
Thank you, Laura and Brian. Really fascinating stuff.
Meta-data is marketing. Period. I can recall stumbling onto a certain online retailer’s making a change from only allowing one BISAC for a given eBook to allowing recursive, hierarchical use of categories for eBooks. In other words, you could go three BISACs deep and be “shelved” in all subjects. Some fast dancing by marketing and managing editorial based on that revelation yielded some very nice incremental sales for that publisher, who shall remain unnamed.
In my opinion, the blend is the key here. I can remember advocating for marketing “owning” meta-data conception, sales having a say, and then managing ed. owning its integrity from a data-completeness standpoint.
It’s funny how much things change and how much they stay the same. I remember cover meetings, where basically all of the “meta-data” concepts were dissected and turned into the plan. In many ways, publishers just need to re-aim those efforts away from brick and mortar stores and toward machines used by people in very certain ways.
Thanks for both the real-world examples and the historical perspective. I agree with your sense that we need to reframe what we do, not necessarily reinvent it. We categorize books from the outset, but we haven’t always captured that information in a way that we’d say is “machine-readable.”
I think you’ll like the work Laura is doing at Bowker these days. She gets a day a week to dream about metadata. Of course, she dreams about metadata even when she isn’t being paid to do so.
Great stuff. I will keep my eyes out! I also dream about meta-data, for better or worse.
It’s for the better. Trust me!
I confess, my dreams are more like nightmares: an exponentially growing haystack burying needles of increasingly suspect quality. And it’s that q-word that worries me most. I’m no Luddite or romantic, but I strongly suspect the human role in delivering quality to readers is going to resist a technological fix.
I’d distinguish between the quality of the content, which is a function of many things human, and the quality of the metadata, whose improvement helps increase discovery, access and utility. In that latter case, machine-readable is a good thing.
Brian, +1. There are some things that humans actually DON’T do so well, and standardization/consistency is one of them. In the case of metadata, that’s a critical issue.
Peter, I once had a self-published author client who contracted with me to scrub up his metadata, make his book vastly more discoverable, etc. - which I did. What I could NOT do was improve the quality of his book, which the book’s excerpt clearly demonstrated. (Well, I suppose I could have, but that wasn’t in the scope of the contract.) It was an interesting lesson in the idea that while good metadata leads to discovery, once that discovery is made, the sale might still not happen.
Thanks, Brian and Laura. This goes to the heart of the question: if the quality of metadata won’t help ensure the quality of content, what will? Human curation and human evaluation seem necessary, particularly within “smart communities,” where the results may be more trustworthy.
Quality differs depending on whom you talk with. I think that’s more easily captured in reader responses - ratings, reviews, etc., some of which can be codified and measured, some of which cannot. (“Show me the top-rated novels that take place in the 1920s in Paris” could lead to a horrendously-written historical fiction that’s fascinating in its attention to detail; or a romance, which you might not be interested in; or a literary gem. Each of which has value in itself, different from the others.)
The fact is, given the quantity of books that are out there, the accessibility to them, and the number of readers - it’s no longer a good idea to state unequivocally (as metadata does; it deals entirely in unequivocal statements): “This is a good book” or “This is a bad book”. Someone will always disagree and make a meme out of you.
As a digitally-inclined marketer, I love meta-data. I see parallels in Google’s quality scoring methodology for an ad. They take into account how well the ad unit’s content matches the destination page’s content—which is a big way they eliminate spammy ads. After that, it’s a “popularity contest,” which is actually a good thing for ad quality, as the unit that delivers best on its promise wins (lower cost per click because of more clicks, and then a virtuous cycle). It isn’t a perfect system and it favors big players. But they are often simply the ones with the optimized…wait for it…meta-data, which is primarily what Google is looking at to determine the quality score of a book ad.
I can definitely envision a world where the meta-data must “line up” with the quality/content of the underlying work and then be governed by a democratic, consumer “popularity contest.” In other words, no lies in the meta-data. What is promised must be delivered on, or the “quality score” (which for books would encompass all sorts of things, like in-store search ranking) would drop. I think—but don’t know for sure—that this would get the cream to the top. Or, at least, surface those books that those readers actually wish to read. Just typing out loud…
This is a very interesting discussion, everyone, and I want to jump in on a point, as well.
Over the course of the last couple of years, thinking on metadata has evolved from being the responsibility of operations, to marketing, to almost everything in between. Yet most, if not all, publishers have yet to implement a meaningful metadata and discovery strategy across the board for their digital properties.
It occurs to me that the flaw in the “metadata is marketing” argument is that it becomes an unscalable proposition, just as our current marketing campaign strategies do in treating each book as an island. Thankfully, some marketing groups working within publishing, like Fauzia Burke’s FSB Associates, have moved on to category marketing, leveraging the possibilities of cultivating context over individual slices and untenable campaigns that work best for the already established.
But, digression aside, back to metadata. In my view, the biggest, most significant shift in thinking on metadata is that it is a technology issue. DBAs should be dealing in metadata, not marketing or PR groups with a cursory understanding of SEO. This issue will only continue to plague us until we start thinking about implementing logical technology strategies that revolve around (meta)data flow and wherein data is constructed with the purpose of driving discovery.
In the (hopefully, not so distant) future, a book will inherently include this “backend” functionality which will be a requirement to compete in the marketplace. As Laura said above, are you publishing at all if you don’t tackle this essential function of the publishing process?
Laura: I guess I question that quality is purely subjective, especially when it comes to nonfiction. And the new nearly nonexistent threshold to publishing really pushes the q-question to the front of the line when it comes to problems to solve.
Pete: I think the metaphor of the cream rising to the top of the milk is perfect. The bucket is turning into an ocean. But more to the point, cream rises to the top because it’s richer and is inclined by itself to float on top of less fatty milk. It has nothing to do with user reactions. I know you know this, so forgive me, but I’m not sure folks always grasp what we’re losing: a process that selects for quality publications based on taste, discernment, and an instinct for what matters.
Brett: The point I’m taking away from your comment is that whatever counts as marketing must be scalable. Discernment, then data, may be the way to go.
Lots of great points being made. Thanks, all. I’ll share this, which seems relevant…
When I worked for a publisher— for the purposes of this discussion I will conveniently forget which one—I “went the other way” to engineer a system to auto-generate ads that would be relevant to the content by algorithmically generating tags from multiple sources (only some from the books’ contents and only under certain scenarios). These “if…then” rules fed a system a bunch of “yes” words for a given title. These words were sure to at least accurately correspond with what was housed within the title and/or the ways in which it had been described throughout its creation.
A marketer then combined the “yes” words into, in this case, ad copy. If they needed combining. Look at book ads and you can see how easy they are to generate. You do need a human, though, or you wind up with off-putting ads (hi Amazon) like “Buy Henrietta Lacks.” Not cool.
But, I digress. The system worked well and was low-touch for the marketers. Given another six months, we could easily have been producing other types of marketing data at scale using the same core system—and for all I know this publisher is doing exactly this.
Basically, if you can do your marketing/sales thinking up front, thinking systematically and cross-title, and then define rules, the bulk of the work can be machine-performed in adherence to those once-defined human specs. At the last point before the data touches the consumer, a little human touch-up work is performed to add that little something that is great marketing (and to avoid the inevitable “whoops!”). It’s another way to scale, I think. Love this stuff…
Implemented properly, metadata is infinitely scalable. Metadata generation is a natural outgrowth of other publishing processes: editorial development, marketing, copy editing, tagging, reviewing, etc.
I think publishers get stymied when they see metadata as a discrete function. This, however, is a necessary part of the learning curve. As much as some of us wish we could pretend otherwise, the machinations of metadata are fairly complex, particularly for non-technical staff. There’s no way around that.
So in a publisher’s early encounters with metadata s/he sees it as a discrete task, a unique aspect of the new electronic publishing paradigm.
Over time publishers begin to recognize that metadata is a “module”—one of many tasks that are part of a state-of-the-art digital workflow. It doesn’t become easy at that point, but certainly less intimidating and onerous.
Great point, Thad. As a very, very bright former colleague of mine once put it to me: “Well understood does not mean easily implemented.”
It does seem like it’s a—if not *the*—better mousetrap worth building circa Q4 2012. As you say, it will become less intimidating and onerous over time. I equate that with commoditized. So any advantage is speed-to-scale-to-market based. A little like the digitization/rights clearing gold rush for eBooks over the past, well, for a long time now!
All I know is I’ve seen the data, and done the studying from the POV of the marketer: better meta-data sells books. Truly innovative, great meta-data really, really sells books. At scale, it’s as powerful a lever as pricing, promotion, or placement.
Then again, of course, I would say that. But I truly believe it and know it to be the case. This is fun stuff—am going to share more broadly to see if we can get the other proud geeks to chime in.
Thanks, Brian (and Laura) for kicking this off with a great piece and to Peter for looping me in.
I’ve become very leery of the statement “great metadata really sells books,” only because it’s tossed around without enough meaning attached to the phrasing.
What is “great metadata”? Copious amounts? Accurate? Full of reviews? A great cover illustration?
How does it “sell books”? Automatically? Via SEO? Through recommendation engines?
I see a lot of publishers turning against metadata because they’re promised magic. As I noted in my last blog post Is Metadata Magic? (http://thefutureofpublishing.com/2012/09/is-metadata-magic/) I’ve learned that while metadata can be enchanting, its powers are more down to earth.
Creating realistic expectations for what metadata can and can’t do is what will get publishers onboard.
Thanks, Thad. I see from where you’re coming, and I think I can offer a few examples to illustrate from where I’m coming…But…my response is TK, as I have to take my daughter to the park before dark (she’s my other job, which starts at 5:00 Eastern). So, response after dark Eastern.
Thad and Pete: I’d want to offer into the mix a question about the limits of metadata when search is being stressed by the ongoing tsunami of content and the unrelenting gaming of SEO.
I feel at least somewhat ready/qualified to respond to the myriad themes and questions here so here goes…Warning: data + marketing is where I live so this may run long. I hope it offers value commensurate with its length.
I’d offer that creating great meta-data that organically increases sales through digital channels is what separates a great publisher from an average publisher. This is a function of how well their marketer knows his or her stuff: the consumer-eye view of discovery; the “real,” “abstracted,” and/or “promised” qualities of the author/title he or she is marketing; and the data.
Get consumer data: the Google Keyword Suggestion tool, for example, to figure out popular searches surrounding your core search (from your fine essay, Thad, let’s go with “how to bake bread”). If I were working the title, I’d find a cluster of search terms, the frequency of searching, and its seasonality (Google Trends for that last piece). I’m looking for golden book-related terms that have high frequency, relevance to my content, and—speaking generally—holiday and Q4 seasonal spikes. I find those words and I know what to do with a lot of my meta-data. White Hat SEO. A bit of gaming, of course, and I wouldn’t take it too far as, like fashion, it rarely works if one tries.
The Facebook ad targeting tool (not to advertise—to research). People who like baking bread…what else do they like? Maybe they’re into efficiency (I’ve totally made this up, but it doesn’t seem too far-fetched). Well, is there a way that our cookbook saves time? If so, gold. If not, I’d keep moving to…
Google results that don’t seem related but can be useful. The first result I get from Google when searching for “how to make bread” has an SERP title of “How To Make Bread (without a bread machine).” As a marketer, that “without a bread machine” jumps out. This is the page that Google has declared most authoritative with regard to our phrase—it is the one our core consumers prefer over all others. Is there any way our book keeps the reader away from the (apparently) dreaded bread machine? If so, let’s use it. The third result mentions “beginners.” Is our content good for beginners and experts alike? If so, we’ve got our title, subtitle, and lead line for our blurb. Only if truly relevant, of course. Neither consumers nor machines want to be gamed—and both will reject it if they are.
BISAC codes should flow from the above—perhaps Cooking > Baking > Bread > Handmade and Cooking > Baking > Bread > Beginners and so on (these may not be real). Three levels, last time I checked. Being virtually shelved in the “right” three sections of the store sells. I’ve watched books properly shelved as I have described above—based on consumer insights—lay claim to slots on category bestseller lists. Own the tops of charts and you’ve got above-the-fold placement on all screens. Much of Amazon’s rank (which determines what I call “auto-merchandising”) is based on the conversion percentage of the given title page. So, in a sense, if you can convert the right people at the outset, you win outsized placement in other areas (top of general baking searches, for example). Call it machine word of mouth. But you’ve got to get in the top 5 or 10—just like with Google SERPs, no one scrolls or “next pages.” They may search again…uh-oh, funnel leak.
So, to me, the magic lies in the ability of the marketer to do this creatively and scientifically at the same time, and to always be thinking like a consumer and a machine. This is what I was always trying to do, and then to turn all of that into something, to use your phrase, Thad, “magical.” At least that was what I was endeavoring to do! It worked often. I can’t throw numbers around. But you can ask folks. It worked. And, with APIs and clever coders, much of this can be automated and scaled to a degree unthinkable at the moment. But there will always need to be that creative-scientist marketer quant to supply the magic.
Thanks, all. Great fun!
Peter: I totally see, get, and live your point. There is a big ocean of published stuff and a sort of SEO arms race. But, I think, like any good business, the best, most up-to-date, honest-but-crafty marketer, with the best collaborators, tools, and content, will win. The key for publishers is to take that marketer’s brain and turn it into a system. Based on my experiences, the good publishers can get there, if they overcome fear and accept the machine component without just trying to game it. It seems to me the good ones do it naturally. It’s just how they think. Again, like fashion, the stylish ones seem like they’re barely trying. And, yet, they remain the most stylish year in and year out.
With that, I’ll take my corduroy and gray hoodie-wearing self and get back to marketing books. Again, thanks all. I’ll be checking back and continuing to spread the word. This is terrific stuff. At least to me.
I’ve enjoyed reading all the comments on this and indeed want to thank Brian and Laura for kicking off what has evolved into a really wide-ranging discussion on the topic. The comments here have been really insightful, so thank you all for those.
I did want to jump in on the theme of metadata as marketing vehicle as it relates to discoverability. It gets back to something Peter Turner said, and also Thad, your question about what it is that we mean when we say “good metadata.”
In looking at this issue around metadata and how it drives discoverability and sales, if one assumes it is part of a marketing function, I would also then propose that we can’t look at it in a vacuum. If the role of metadata is to drive discoverability, there are other items in that toolkit that drive consumer understanding of quality and value (as Laura alluded to, perhaps the metadata brings a reader to the book, but the excerpt sells them). It’s a matter of quality throughout the book marketing and sales process, and that translates to “good” material in every instance, where metadata is one piece of that puzzle.
But to Brett’s point, I do also see how this then also further spreads outward into editorial and other areas (for instance, can a really well-written author bio help to sell a book? By validating the author’s expertise and track record in publishing, perhaps it can. This goes hand-in-hand with excerpts). And then production as well, as we think of cover design and packaging. All of this comes together as a complete whole to convey to a reader that they are getting something of value. The metadata brings them to it, but the entirety of what a publisher offers for discoverability is what will sell them.
You ask, “can a really well-written author bio help to sell a book?” Sure it can. I think we all agree that nothing helps sell a book better than publishing a good book in the first place. But we shouldn’t confuse a well-written book OR well-written metadata with a different creature called “great metadata.”
Metadata is a predefined set of data fields that (a) get filled in by the publisher or don’t and (b) are displayed or otherwise employed by etailers to help sell books or aren’t.
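To ground that definition, here is a heavily abridged, illustrative sketch of a single product record in an ONIX 3.0 feed. A real feed requires a header, contributor data, and many more composites; the ISBN, title, and subject code below are invented for illustration.

```xml
<!-- Abridged, illustrative ONIX 3.0 product record; not a valid feed on its own. -->
<Product>
  <RecordReference>com.example.9780000000000</RecordReference>
  <NotificationType>03</NotificationType>                 <!-- 03 = confirmed on publication -->
  <ProductIdentifier>
    <ProductIDType>15</ProductIDType>                     <!-- 15 = ISBN-13 -->
    <IDValue>9780000000000</IDValue>
  </ProductIdentifier>
  <DescriptiveDetail>
    <ProductComposition>00</ProductComposition>           <!-- single-item product -->
    <ProductForm>BC</ProductForm>                         <!-- paperback -->
    <TitleDetail>
      <TitleType>01</TitleType>                           <!-- distinctive title -->
      <TitleElement>
        <TitleElementLevel>01</TitleElementLevel>
        <TitleText>A History of Bread</TitleText>
      </TitleElement>
    </TitleDetail>
    <Subject>
      <SubjectSchemeIdentifier>10</SubjectSchemeIdentifier> <!-- 10 = BISAC -->
      <SubjectCode>CKB004000</SubjectCode>                <!-- e.g., a baking category -->
    </Subject>
  </DescriptiveDetail>
</Product>
```

Either the publisher fills these fields in or doesn’t; either the retailer consumes them faithfully or doesn’t, which is exactly the frustration described next.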
For me the most frustrating fact of metadata life is the unwillingness of the biggest resellers, Amazon, B&N and Apple, to use a metadata standard called ONIX with anything resembling a respectful embrace of that standard. Publishers that put a major effort into ONIX metadata are slapped in the face.
It turns me all Biblical: dreams of plagues, devastation and retribution. So I co-wrote a metadata book instead. Therapy.
@Adam and @Thad
I agree with you, Adam; the early-in-the-process “creatives”—authors, editors, jacket designers, copywriters, managing editors, etc.—must conjure great “raw material.” And also with you, Thad, that the XML-slingers must stare down the odds and ensure the proper meta-data reaches its destinations on time and intact.
It’s been my experience that this process works more often than not (often despite a house’s “culture,” stated beliefs, and, er, workflows and, um, systems). Once done, a savvy “creative quant” style marketer can add a little magic. The final file should be the great one—the one that is consumer- *and* content-aligned. It’s the one constructed by the right minds, doing what they do better than most, and using killer tools. It is the one with the best shot of reaching/influencing the right consumers and moving books and bytes.
I’ve noticed that many talented marketers who work with equally talented creative teams actually change the meta-data very little, and when they do, it’s based primarily on consumer- and marketplace-specific insights, data, and the opportunities they present.
I actually feel very strongly about the collaborative creative process point, what it creates, and for what ultimate use case. Basically, I don’t think marketers need to read the books they market. In fact, with what I’ve outlined above working as it should, I believe it is a major mistake for marketers to read the books, and that their *not* reading the books positions them to execute much better consumer marketing.
I’ve been eating my own dog food on this for years and on nearly every campaign, intentionally focusing on the marketing materials and metadata, not the book’s content. I have many reasons for this…some of which I outline in a post here:
I would truly welcome your and anyone else’s feedback.
One disclaimer: meta-data was heavily on my mind while writing it, though I never get geeky enough to mention it by name. This despite being a proud geek and having meta-data on the brain. Obviously. Thanks, all.
Partly to provoke, I have to ask if metadata isn’t just inherently limited because it only delivers content via search. If the problem of poor metadata is solved, the content consumer isn’t necessarily better off. A vast ocean of content, all with brilliantly complete metadata doesn’t help the content consumer find quality content, which is his/her goal to begin with. Social curation, social recommendation, expert reader recommendations—on social platforms these are the drivers of discovery.
The answer here depends on your definition of metadata. If you restrict yourself to the “core 31” fields in ONIX, you’re right that metadata has an upper limit. But I wouldn’t hold myself to just those fields.
Going back to the initial post, one of the observations Laura and I made in the workshop was “Updating metadata will become a competitive weapon and a process that does not end”. One of the things we suggested was “capture everything that’s being said about a book”. This isn’t clear in the post, but we feel that the conversation about a book is also metadata.
Structural and contextual tagging within a text is also metadata, not at all addressed by ONIX. People looking for greater insight or the ability to buy a portion of a work could come to rely on these kinds of tags.
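As a sketch of what such tagging might look like (the data-* vocabulary here is invented for illustration, not part of ONIX or any current standard), consider a chapter fragment in an ebook’s XHTML:

```html
<!-- Hypothetical in-text tagging: structural markup (section, aside) plus
     contextual tags; the data-* attribute names are an invented vocabulary. -->
<section id="ch03" class="chapter" data-subject="sourdough starters">
  <h1>Chapter 3: The Starter</h1>
  <p>A starter is a living culture of flour and water…</p>
  <aside id="recipe-03-1" class="recipe"
         data-subject="basic sourdough starter" data-skill-level="beginner">
    <!-- an individually addressable component a retailer could sell on its own -->
  </aside>
</section>
```

A retailer or search engine that understood tags like these could surface the recipe, or sell it, without the buyer ever asking for the whole book.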
Thanks, Brian. I hope that this broader definition of metadata is embraced as it sounds like it can help return content that is likely to be of greater relevance and quality.
I recently mentioned to a client that a web property they owned could perhaps be made “more aware of the goings on outside its own universe in which it has a stake.” It was difficult for me to tell whether or not they planned on taking that particular suggestion…
But what I was getting at (without actually knowing it at the time) was “living” meta-data as you describe it, Brian. Meaningful, fresh, contextual to the world today. Now. Nothing I can think of that can be done at scale and automated would go further to align innate consumer wants and needs with the books to suit them, and to make it plain as day to the consumer that, yes, this is the right book.
Also, I couldn’t agree more with this statement.
“Updating metadata will become a competitive weapon and a process that does not end”
Really well put, Brian.
I am of the opinion that pre-work is enormously important. But where the world has changed the most is in the data we can collect, analyze, and use to measure our success or failure to achieve goals in (nearly) real time. I suspect near-future marketers will spend more time on titles post-launch, responding to data and making changes to do more of what’s working, less of what’s not. Launch, learn, adjust. If I ran a house, that’s what my consumer marketers would do most of the time.
Boy would living meta-data fit nicely into that model. Increasingly relevant and alive marketing, which self-improves. What a lovely thought….and one I think can and will happen. Sort of has to, doesn’t it? Someone will realize that, as you put it, “updating metadata will become a competitive weapon.”
Not to get too buzzy and confuse matters, but Big Data capabilities will necessarily play a huge (or, perhaps, just big) role in all of this. As will APIs and “the graph” and just how open or closed the aggregators will wish to be…
To Brian’s point above, when we talk about metadata we’re not just talking about the basic descriptors of a book that we’ve all come to know and love.
As more shopping is done via search, and search engines index full texts of books, we’re going to have to describe those books (and chapters, and passages) more granularly. As Brian mentions, this is not something that can ever quite be “finished”.
As T.S. Eliot says in “Tradition and the Individual Talent”, “No poet, no artist of any art, has his complete meaning alone. His significance, his appreciation is the appreciation of his relation to the dead poets and artists. You cannot value him alone; you must set him, for contrast and comparison, among the dead.”
Our perceptions of what’s been written change over time, and our ways of describing those perceptions evolve as well. We see this every time a writer is “rediscovered”. So metadata about a book, or embedded within the book, is never “baked”.
There’s a group associated with the W3C that is now working on semantic markup for web pages concerned with books. This will inevitably lead to semantic markup for books themselves. The group is called Schema Bibex, and it is developing an extension of Schema here: http://www.w3.org/community/schemabibex/ I’m a part of this group and I’m really excited about the work we’ve just kicked off.
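For a sense of the baseline that work starts from, here is what a minimal schema.org Book looks like in microdata on a web page. The values are invented, and the group’s actual extension properties are still being worked out, so treat this as a sketch of the starting point rather than of the extension itself.

```html
<!-- A minimal schema.org Book in microdata; all values are invented. -->
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">A History of Bread</span> by
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">Jane Example</span>
  </span>
  <meta itemprop="isbn" content="9780000000000"/>
  <link itemprop="bookFormat" href="http://schema.org/Paperback"/>
</div>
```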
I love that T.S. Eliot quote, Laura.
This conversation is immensely valuable, so thank you all.
A couple of questions for the group:
How does new metadata beyond the ONIX 31 (or 200 depending on how you look at it) become part of any “standard” send/receive process as it travels end-to-end-and-back through the publishing ecosystem? How does something new and brilliant and useful establish itself as a new currency of sorts in the reading world?
I can envision a world in which metadata itself is a form of entertainment, if designed and delivered to end users in the right way. So forget about book discoverability for a second—how does new metadata itself get discovered and become relevant?
Thank you, Laura. Great quote from Eliot. Great discussion, but I keep coming round to wishing it were more focused on what readers want out of books and text: quality and relevance to *them*, not to people in general or to if-x-then-y relations.
In a post last May I made a distinction between Findability, Discoverability and Marketing (http://thefutureofpublishing.com/2012/05/findability-discoverability-and-marketing/). To me this is key.
@Ed DeCaria asks “how does new metadata itself get discovered and become relevant?”
Without a seeker, nothing can be found. I bisect seekers into (a) those just browsing and (b) those with something specific in mind (perhaps just a broad subject).
Those browsing are mostly subject to pre-planned marketing efforts and recommendation engines.
Metadata’s more deliberate role arises when the seeker has at least a vague idea of what they’re looking for. As Peter McCarthy describes above, metadata optimization, such as BISAC codes and SEO techniques, really comes into play more for the seeker than for the browser.
Thad: I wonder what you think about the dynamic of discovery of books being driven by personal recommendation, both on and offline, often in social spaces, like Goodreads.
As far as I can determine, Word of Mouth remains the single most powerful sales tool for books. (In a sense it transcends metadata. In another sense, metadata is often a key source of W.O.M.)
Its power resides in direct relation to the credibility of “the mouth”—your friends are more potent than strangers. Amazon does a far better job than Goodreads by encouraging votes on reviews, top reviewers, etc. You probably don’t know the reviewer by name, but you can get a sense of their credibility.
I think Goodreads remains promising, but is somehow missing the mark. (As for the other book etailer review systems, the less said the better.)