Friday, March 19, 2010

"This linked data went to market...wearing lipstick!?!"

This post originally appeared on the Wordpress.com version of this blog

Paraphrasing the nursery rhyme,

This linked data went to market,
This linked data stayed open,
This linked data was mashed-up,
This linked data was left alone.
And this linked data went...
Wee wee wee all the way home!

In his recent post Business models for Linked Data and Web 3.0, Scott Brinker suggests 15 business models that "offer a good representation of the different ways in which organisations can monetise — directly or indirectly — data publishing initiatives." As is our fashion, the #linkeddata thread buzzed with retweets and kudos to Scott for crafting his post, which included a very seductive diagram.

My post today considers whether commercial members of the linked data community have been sufficiently diligent in analysing markets and industries to date, and what to do moving forward to establish a sustainable, linked data-based commercial ecosystem. I use as my frame of reference John W. Mullins' The New Business Road Test: What entrepreneurs and executives should do before writing a business plan. I find Mullins' guidance to be highly consistent with my experience!

So much lipstick...
As I read Scott's post I wondered, aren't we getting ahead of ourselves? Business models are inherently functions of markets --- "micro" and "macro" [1] --- and their corresponding industries, and I believe our linked data world has precious little understanding of the commercial potential of either. Scott's 15 points are certainly tactics that providers, as the representatives of various industries, can and should weigh as they consider how to extract revenue from their markets, but these tactics will be so much lipstick on a pig if applied to linked data-based ecosystems without sufficient analysis of either the markets or the industries themselves.

Pig sporting lipstick

To be specific, consider one of the "business models" Scott lists...

3. Microtransactions: on-demand payments for individual queries or data sets.
By whom? For what? Provided by whom? Competing against whom? Having at one time presented to investment bankers, I can say that "microtransactions" is no more of a business model for linked data than "Use a cash register!" is one for Home Depot or Sainsbury's! What providers really need to develop is a deeper consideration of the specific needs they will fulfill, the benefits they will provide, and the scale and growth of the customer demand for their services.

Macro-markets: Understanding Scale
A macro-market analysis will give the provider a better understanding of how many customers are in its market and what the short- and long-term growth rates are expected to be. While it is useful for any linked data provider, whether commercial or otherwise, to understand the scale of its customer base, it is absolutely essential if the provider intends to take on investors, because they will demand credible, verifiable numbers!

Providers can quantify their macro-markets by identifying relevant trends: demographic, socio-cultural, economic, technological, regulatory, and natural. Judging whether the macro-market is attractive then depends upon whether those trends work in favour of the opportunity.

Micro-markets: Identifying Segments, Offering Benefits
Whereas macro-market analysis considers the macro-environment, micro-market analysis focuses on identifying and targeting segments where the provider will deliver specific benefits. To paraphrase John Mullins, successful linked data providers will be those who deliver great value to their specific market segments:

  • Linked data providers should be looking for segments where they can provide clear and compelling benefits to the customer; commercial providers should especially look to ease customers' pain in ways for which they will pay.
  • Linked data providers must ask whether the benefits their services provide, as seen by their customers, are sufficiently different from and better than those of their competitors, e.g. in terms of data quality, query performance, a more supportive community, or better contract support services.
  • Linked data providers should quantify the scale of the segment just as they do the macro-environment: how large is the segment and how fast is it growing?
  • Finally, linked data providers should ask whether the segment can be a launching point into other segments.
The danger of falling into the "me-too" trap is particularly glaring with linked data, since a provider's competition may come from open data sources as well as other commercial providers: think Encarta vs. Wikipedia!

Having helped found a start-up in the mid-1990s, I am acutely aware of the difference between perceived and actual need. The formula for long-term success and fulfillment is fairly straightforward: provide a service that people need, and solve problems that people need solved!

References

  1. John W. Mullins, The New Business Road Test (FT Prentice Hall, 2006)

DOIs, URIs and Cool Resolution

This post originally appeared on the Wordpress.com version of this blog.

The art of happiness is to serve all -- Yogi Bhajan


Once we get beyond the question of the basic HTTP URI-ness of the digital object identifier (DOI) --- since for each DOI there exist DOI-based URIs due to the dx.doi.org and hdl.handle.net proxies, this issue is moot --- and old-skool questions of "coolness" based on the relative brittleness over time of creative URI encoding [1], we are then left with the more substantial question of whether DOI-based HTTP URIs really "behave" themselves within the "Web-of-Objects" universe. The purpose of this post is to identify the problem and propose a potential solution, implementation of which will require certain changes to the current Handle System platform. I believe that if the proposed changes are made, lingering questions concerning the "URI-ness" of DOIs (and Handles) will disappear, once and for all.

Note: It is beyond the scope of this post to present all of the gory background details regarding the Handle System, the DOI, and the 1998 and 2008 versions of "Cool URIs." If there is enough interest in a stand-alone article, I will happily consider writing a longer version in the future, perhaps as a piece for D-Lib Magazine.

With the increasing influence of semantic web technologies there has been strong interest in assigning actionable HTTP URIs to non-document things, ranging from abstract ideas to real world objects. In the case of URI-named, Web-accessible physical items --- sensors, routers and toasters --- this is sometimes referred to as The Web of Things. Until 2005 the community disagreed as to what an HTTP URI could be assumed to represent, but a June 2005 decision by the W3C TAG settled the issue: If a server responds with an HTTP response code of 200 (aka a successful retrieval), the URI indeed is for an information resource; with no such response, or with a different code, no such assumption can be made. This "compromise" was said to have resolved the issue, leaving a "consistent architecture." [3]

The result of this decision was to force consensus on how to apply the long-established principles of HTTP content negotiation in more consistent ways. In particular, "human" and "machine" requests to a given entity URI --- a top-level URI representing a "thing" --- should be treated differently; for example, there should be different responses to requests with HTTP headers specifying Accept: text/html (for an HTML-encoded page) versus Accept: application/rdf+xml (for RDF-modeled, XML-encoded data). This is most often seen in the semantic web and linked data worlds, where it is now common to have both textual and machine-readable manifestations of the same URI-identified thing.

Modern web servers including Apache have been engineered to handle these requests through content negotiation [4]. Through standard configuration procedures, site administrators specify how their servers should respond to text/html and application/rdf+xml requests in the same way they specify what should be returned for alternate language and encoding requests ("en", "fr", etc.). Typically, when media-specific requests are made against entity URIs representing concepts, the accepted practice is to return a 303 See Other response code with the URI of a resource containing a representation of the expected type, such as an HTML-encoded page or an XML document carrying RDF-encoded data.
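
To make the pattern concrete, here is a minimal sketch of such behaviour in Python (standard library only); the /id/, /page/ and /data/ paths and the port are invented for illustration and are not drawn from any particular deployment:

  # Minimal, illustrative WSGI app: entity URIs under /id/ are answered with a
  # 303 See Other redirect whose target depends on the Accept header.
  from wsgiref.simple_server import make_server

  def app(environ, start_response):
      accept = environ.get("HTTP_ACCEPT", "text/html")
      path = environ.get("PATH_INFO", "/")
      if path.startswith("/id/"):                   # entity ("thing") URIs
          slug = path[len("/id/"):]
          # Naive Accept check; q-values are ignored for brevity.
          if "application/rdf+xml" in accept:
              location = "/data/" + slug + ".rdf"   # RDF description
          else:
              location = "/page/" + slug + ".html"  # human-readable page
          start_response("303 See Other", [("Location", location)])
          return [b""]
      start_response("404 Not Found", [("Content-Type", "text/plain")])
      return [b"not found"]

  if __name__ == "__main__":
      make_server("localhost", 8000, app).serve_forever()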

Many readers of this post will be familiar with the basic idea of HTTP proxy-based Handle System name resolution: An HTTP resolution request for a DOI-based URI is made to a proxy --- a registration-agency-run proxy such as dx.doi.org or the "native" Handle System proxy hdl.handle.net --- the appropriate local handle server is located, the handle record for the DOI is resolved, and the URL in the default (first) record element (e.g. a document information page) is returned to the client in a 302 Found redirect. In a Web of Documents this might make sense, but in a universe of URI-named real-world objects and ideas, not so much.

The 2008 "Cool URIs" document [2] provides two requirements for dealing with URIs that identify real-world objects:

  1. Be on the Web: Given only a URI, machines and people should be able to retrieve a description about the resource identified by the URI from the Web. Such a look-up mechanism is important to establish shared understanding of what a URI identifies. Machines should get RDF data and humans should get a readable representation, such as HTML. The standard Web transfer protocol, HTTP, should be used.
  2. Be unambiguous: There should be no confusion between identifiers for Web documents and identifiers for other resources. URIs are meant to identify only one of them, so one URI can't stand for both a Web document and a real-world object.

In the post-2005 universe of URI usage as summarised above and detailed in [2], if DOI-based URIs are used to represent conceptual objects these rules will be broken! For example, Handle System proxies today cannot distinguish between Accept: media types in the request headers; the only possible resolution is to the default (first) element of the Handle record. (For hackers or the merely curious out there, I encourage you to experiment with curl at your command line or Python's urllib2 library, hitting the DOI proxy with a DOI-based URL like http://dx.doi.org/10.1109/MIC.2009.93.) This problem with how proxies resolve DOIs and Handles is a lingering manifestation of the native Handle System protocol not being HTTP-based and the system of HTTP-based proxies being something of a work-around, but the vast majority of DOI and Handle System resolutions occur through and rely on these proxies.
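
For those who would rather not reach for curl, here is a minimal sketch of that experiment using Python 3's urllib.request (the successor to the urllib2 module mentioned above). It suppresses automatic redirect handling so the proxy's own 30x response is visible; if the proxy behaves as described, both requests should report the same Location regardless of the Accept header:

  import urllib.error
  import urllib.request

  DOI_URL = "http://dx.doi.org/10.1109/MIC.2009.93"

  class NoRedirect(urllib.request.HTTPRedirectHandler):
      # Returning None tells urllib not to follow the redirect, so the 30x
      # response surfaces as an HTTPError that we can inspect directly.
      def redirect_request(self, req, fp, code, msg, headers, newurl):
          return None

  opener = urllib.request.build_opener(NoRedirect)

  for accept in ("text/html", "application/rdf+xml"):
      request = urllib.request.Request(DOI_URL, headers={"Accept": accept})
      try:
          opener.open(request)
      except urllib.error.HTTPError as response:
          print(accept, "->", response.code, response.headers.get("Location"))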

One possible solution would be to enable authorities --- Registration Agencies --- who operate within the Handle System to configure how content negotiation within their Handle prefix space is handled at the proxy. For document-based use of the DOI, an example would be to return the URI in the first element of the Handle record whenever a text/html request is made and (for example) the URI in the second element whenever an application/rdf+xml request is made. When a request is made to the proxy, a request-appropriate representation URI would be returned to the client in a 303 See Other redirect. This approach treats the DOI-based URI as a conceptual or entity URI and gives the expected responses as per [2]. pax vobiscum...

Readers familiar with the Handle System will appreciate that there are many potential schemes for relating HTTP content type requests to elements of the Handle record; in the example above I use position (index value), but it is also possible to use special TYPEs.
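
To make the proposal concrete, here is a hypothetical sketch of such a per-prefix configuration. None of these structures, names or URIs exist in the actual Handle System software; they simply illustrate mapping Accept media types to Handle record elements by index value (a TYPE-based mapping would look much the same):

  # Hypothetical Handle record: index -> (TYPE, data). Values are invented.
  HANDLE_RECORD = {
      1: ("URL", "http://publisher.example.org/article/123"),          # landing page
      2: ("URL.RDF", "http://publisher.example.org/article/123.rdf"),  # RDF description
  }

  # Hypothetical registration-agency configuration for one Handle prefix:
  # preferred media type -> Handle record index.
  CONNEG_BY_INDEX = {
      "text/html": 1,
      "application/rdf+xml": 2,
  }

  def proxy_resolve(accept_header, default_index=1):
      """Return the (status, location) a conneg-aware proxy might send back."""
      # q-values in the Accept header are ignored here for brevity.
      for part in accept_header.split(","):
          media_type = part.split(";")[0].strip()
          index = CONNEG_BY_INDEX.get(media_type)
          if index in HANDLE_RECORD:
              return 303, HANDLE_RECORD[index][1]
      # Fall back to today's behaviour: the default (first) element.
      return 303, HANDLE_RECORD[default_index][1]

  print(proxy_resolve("application/rdf+xml"))  # redirect to the RDF description
  print(proxy_resolve("text/html"))            # redirect to the landing page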

Handle servers are powerful repositories and can implement potentially many models other than the redirection described above. Sometimes, for example, the desire is to use a Handle record as the primary metadata store. In that case, the preferred response to an application/rdf+xml request might very well be an RDF-encoded serialisation of the Handle record itself. How this is handled should be a feature of the Handle server platform and a decision by registration agencies based on their individual value propositions, not something locked in by the code.

I eagerly look forward to your comments and reactions on these ideas!

Update 1: In a comment to this post, Herbert Van de Sompel argues that the real question is, what should DOIs represent? Herbert asserts that DOI-based URIs should model OAI-ORE resource aggregations and that Handle System HTTP proxies should behave according to OAI-ORE's HTTP implementation guidelines. Herbert's suggestion doesn't conflict with what I've written above; this is a more subtle and (arguably) more robust view of how compound objects should be modeled, which I generally agree with.

Here's how OAI-ORE resolution would work following the Handle proxy solution I've described above: Assume some DOI-based HTTP URI doi.A-1 identifies an abstract resource aggregation "A-1" (in OAI-ORE nomenclature, doi.A-1 is the Aggregation URI). Following the given HTTP implementation example, let there be two Resource Maps that "describe" this Aggregation, an Atom serialization and an RDF/XML serialization. Each of these Resource Maps is (indeed MUST be) available from a different HTTP URI, ReM-1 and ReM-2 respectively, but the desired behaviour is for either to be accessible through the DOI-based Aggregation URI, doi.A-1. Let these two URIs be persisted in the Handle record, preferably using TYPEs which distinguish how they should be returned to clients based on the naming authority's configuration of the HTTP proxy. By the approach I describe above, the Handle System proxy would then respond to resolution requests for doi.A-1 with 303 See Other redirects to either ReM-1 or ReM-2, depending upon MIME-type preferences expressed in the Accept: headers of the requests.

Update 2: Complete listing of MIME types for OAI-ORE Resource Map serializations. Follow-up conversations with Herbert Van de Sompel, Carl Lagoze and others have reminded me that I neglected to mention how the OAI-ORE model recommends handling "HTML" (application/xhtml+xml and text/html) requests! This is not a minor issue, since the purpose of ORE is to model aggregations of resources and not the resources themselves, so it is not immediately clear what such a page request should return. My solution (for the purposes of this blog post) is for Handle System HTTP proxies to respond to these requests also with 303 See Other redirects, supplying redirect URIs that map to appropriately coded "splash pages."


For completeness, the table below (repeated from [5]) lists the standard MIME types for Resource Map serializations. Continuing with the major theme of this post, Handle System HTTP proxies resolving requests for DOI-named ORE Resource Maps should follow these standards so that clients may request appropriate formats using HTTP Accept: headers.


Resource Map Type     MIME type
Atom                  application/atom+xml
RDF/XML               application/rdf+xml
RDFa in XHTML         application/xhtml+xml

If a client prefers RDF/XML but can also parse Atom then it might use the following HTTP header in requests:

Accept: application/rdf+xml, application/atom+xml;q=0.5

The table below lists the two common MIME types for HTML/XHTML Splash Pages, following the W3C XHTML Media Types recommendations.

Splash Page Type      MIME type
XHTML                 application/xhtml+xml
HTML (legacy)         text/html

Thus, if a client wishes to receive a Splash Page from the Aggregation URI and prefers XHTML to HTML then it might use the following HTTP header in requests:

Accept: application/xhtml+xml, text/html;q=0.5


As noted in [5] there is no way to distinguish a plain XHTML document from an XHTML+RDFa document based on MIME type. It is thus not possible for a client to request an XHTML+RDFa Resource Map in preference to an RDF/XML or Atom Resource Map without running the risk of a server correctly returning a plain XHTML Splash Page (without included RDFa) in response.

The Handle record for a given DOI or Handle identifying an ORE aggregation would therefore contain a set of URIs reflecting the mappings in the tables above. A content-negotiation-savvy Handle System HTTP proxy would then return the appropriate URI in the 303 See Other response, based on its configuration and policies.
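
To tie the tables and the Accept: examples together, here is one last hypothetical sketch of how such a proxy might weigh q-values when choosing among the Resource Map serialisations and the Splash Page for a single DOI-named Aggregation. The URIs are invented; only the MIME types come from the tables above:

  # Hypothetical mapping from the MIME types in the tables above to URIs that
  # would be stored in the Handle record for one DOI-named ORE Aggregation.
  AGGREGATION_RECORD = {
      "application/atom+xml":  "http://example.org/rem/A-1.atom",  # Atom Resource Map
      "application/rdf+xml":   "http://example.org/rem/A-1.rdf",   # RDF/XML Resource Map
      "application/xhtml+xml": "http://example.org/page/A-1",      # Splash Page
      "text/html":             "http://example.org/page/A-1",      # Splash Page (legacy)
  }

  def parse_accept(accept_header):
      """Yield (media_type, q) pairs from an Accept header; q defaults to 1.0."""
      for part in accept_header.split(","):
          pieces = [p.strip() for p in part.split(";")]
          q = 1.0
          for param in pieces[1:]:
              if param.startswith("q="):
                  q = float(param[2:])
          yield pieces[0], q

  def choose_location(accept_header):
      """Pick the highest-q media type the record can satisfy and return its URI."""
      ranked = sorted(parse_accept(accept_header), key=lambda mq: mq[1], reverse=True)
      for media_type, _q in ranked:
          if media_type in AGGREGATION_RECORD:
              return AGGREGATION_RECORD[media_type]
      return AGGREGATION_RECORD["text/html"]  # fall back to the Splash Page

  # The client preference examples given earlier:
  print(choose_location("application/rdf+xml, application/atom+xml;q=0.5"))
  print(choose_location("application/xhtml+xml, text/html;q=0.5"))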

References:

See the ensuing comments at my Wordpress.com version of this blog...

Community as a Measure of Research Success

This post originally appeared on the Wordpress.com version of this blog

In his 02 Feb 2010 post entitled Doing the Right Thing vs. Doing Things Right, Matthias Kaiserswerth, the head of IBM Research - Zurich, sums up his year-end thinking with this question for researchers...

We have so many criteria of what defines success that one of our skills as research managers is to choose the right ones at the right time, so we work on the right things rather than only doing the work right...For the scientists that read this blog, how do you measure success at the end of the year?

Having just “graduated” after a decade with another major corporate research lab, I find this topic near and dear to my heart! My short answer was the following blog comment...

I can say with conviction that the true measure of a scientist must be their success in growing communities around their novel ideas. If you can look back over a period of time and say that you have engaged in useful discourse about your ideas, and in so doing have moved those ideas forward — in your mind and in the minds of others — then you have been successful...Publications, grad students and dollar signs are all artifacts of having grown such communities. Pursued as ends unto themselves, it is not a given that a community will grow. But if your focus is on fostering communities around your ideas, then these artifacts will by necessity follow...

My long answer is that those of us engaged in research must act as stewards of our ideas; we must measure our success by how we apply the time, skills, assets, and financial resources we have available to us to grow and develop communities around our ideas. If we can look back over a period of time — a day, a quarter, a year, or a career — and say that we have been “good stewards” by this definition, then we can say we have been successful. If on the other hand we spend time and money accumulating assets, but haven't moved our ideas forward as evidenced by a growing community discourse supporting those ideas, then we haven't been successful.

A very trendy topic over the past few years has been open innovation, as iconified by Henry Chesbrough's 2003 book of the same name. Chesbrough's "preferred" definition of OI, found in Open Innovation: Researching a New Paradigm (2006), reads as follows...

Open innovation is the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively. [This paradigm] assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as they look to advance their technology.

In very compact language Chesbrough (I believe) argues that innovators within organisations can best move their ideas forward through open, active engagement with internal and external participants. [1] Yes, individual engagement could be conducted through closed "tunnels," but for the ideas to truly flourish (think Java) this is best done through open communities. I believe the most important --- perhaps singular --- responsibility of the corporate research scientist is to become a "master of their domain," to know their particular area of interest and expertise better than anyone, to propose research agendas based upon that knowledge, and to leverage their company's assets to motivate communities of interest around those ideas. External communities that are successfully grown based on this view of OI can become force multipliers for the companies that invest in them!

To appreciate this, one needs only to consider the world of open source software and the ways in which strong communities contribute dimensions of value that no single organisation could... I'll pause while you contemplate this idea: open-source-like communities of smart people developing your ideas. Unconvinced? Then think about "Joy's Law," famously attributed to Sun Microsystems co-founder Bill Joy (1990):

No matter who you are, most of the smartest people work for someone else

Bill Joy's point was that the best path to success is to create communities [2] in which all of the "world's smartest people" are applying themselves to your problems and growing your ideas. As scientists, our measure of success must be how well we leverage the assets available to us to grow communities around our ideas.

Peter Block has given us a profound, alternative perspective on the role of leaders in the context of communities [3]: leaders provide context and produce engagement. In Block's view, leaders...

  • Create a context that nurtures an alternative future, one based on gifts, generosity, accountability, and commitment;
  • Initiate and convene conversations that shift people's experience, which occurs through the way people are brought together and the nature of the questions used to engage them;
  • Listen and pay attention.

Ultimately, I believe that successful researchers must first be successful community leaders, by this definition!

Update: In a 4 Feb 2010 editorial in the New York Times entitled Microsoft's Creative Destruction, former Microsoft VP Dick Brass examines why Microsoft, America’s most famous and prosperous technology company, no longer brings us the future. As a root cause, he suggests:

What happened? Unlike other companies, Microsoft never developed a true system for innovation. Some of my former colleagues argue that it actually developed a system to thwart innovation. Despite having one of the largest and best corporate laboratories in the world, and the luxury of not one but three chief technology officers, the company routinely manages to frustrate the efforts of its visionary thinkers.

I believe Mr. Brass' analysis is far too inwardly focused. Never in his editorial does Mr. Brass lift up the growing outreach by Microsoft Research, especially under the leadership of the likes of Tony Hey (CVP, External Research) and Lee Dirks (Director, Education & Scholarly Communications), to empower collaboration with and sponsorship of innovative researchers around the world. Through its outreach Microsoft is enabling a global community of innovators and is making an important contribution far beyond its bottom line. I think Mr. Brass would do well to focus on the multitude of possibilities Microsoft is helping to make real through its outreach, rather than focusing on what he perceives to be its problems...

Notes:

  1. One version of the open innovation model has been called distributed innovation. See e.g. Karim Lakhani and Jill Panetta, The Principles of Distributed Innovation (2007)
  2. Some authors have referred to "ecologies" or "ecosystems" when interpreting Bill Joy's quote, but I believe the more accurate and useful term is community.
  3. For more on community building, see Peter Block, esp. Community: The Structure of Belonging (2008)