I was at a reception last week at the home of the US Ambassador to New Zealand, Mark Gilbert. The event was in celebration of innovation in New Zealand. “We share an ocean,” joked Ambassador Gilbert, as he listed some things America has in common with New Zealand. He meant the Pacific Ocean. New Zealand’s Minister of Science and Innovation, Steven Joyce, replied that our two countries are separated by just “a couple of movies and a sleep” – referring to the non-stop twelve-hour flight from Auckland to San Francisco.

It’s true, New Zealand isn’t as isolated as it once was. I’ve journeyed across the Pacific Ocean many times, from Auckland to San Francisco (plus a one-hour flight from Wellington to Auckland first). The flight from Auckland is non-stop and typically overnight, so you leave at 8 or 9pm and arrive in San Francisco about midday. Except you arrive before you leave, because of the International Date Line. So if you depart New Zealand on a Sunday night, you’ll arrive in San Francisco on Sunday afternoon.

Cultural isolation isn’t a problem either. Kiwi innovators have a lot in common with our counterparts in the US. We speak English, we’re smart, we have a can-do attitude and we’re creative. The one thing we need help with, according to Minister Joyce, is marketing ourselves. We Kiwis are certainly more reticent. Witness the almost embarrassed way that our rugby players celebrate scoring tries, despite scoring more of them on average than our opponents. I must confess that self-promotion was something I struggled with during my time running the tech blog Read/Write Web (now called ReadWrite). It was almost an oxymoron to be a blogger who didn’t talk himself up, but that’s what I was. I didn’t see it as a problem, though. As long as I was scoring those tries, I was doing fine.

Geographic isolation isn’t much of an issue these days, given that so much can be done over the Internet. Tools like Facebook, Skype and Twitter enable anyone to feel more connected to others, regardless of location. Of course, geography still matters – why else do young entrepreneurs flock to Silicon Valley from all over the world? There’s nothing quite like bumping into fellow tech workers at your local cafe, a meetup, or (best of all) a party. But despite the obvious benefits of face-to-face networking, distance is no longer the tyranny it once was.

I spent nearly a decade running Read/Write Web and for all of that time I was based in New Zealand. I founded the blog in April 2003 and sold it to a US company at the end of 2011. During that period, I never once felt isolated. It was only after the sale, during 2012, that I began to feel isolated. But even then, it wasn’t geographic or cultural isolation that was the problem. It was more a feeling of not belonging anymore.

Feeling like you belong and that you’re connected to people is the opposite of isolation. Another word for it is community. This is what I found when I first started blogging. My goal back then, in 2003, was simply to write about Web technology and try to connect to people with a similar passion. Much to my surprise and pleasure, I connected to many such people through the blog. A good portion of them happened to live and work in Silicon Valley.

At that time, Silicon Valley occupied a near mythical place in my imagination. I had never traveled there (I hadn’t been to the US, full stop) and I had little clue as to what it looked like. Or what its inhabitants looked like. But I soon discovered they were on the same wavelength as me. So I felt a sense of community with these people, who I’d never even met. I felt like I fit in with the people of the blogosphere, much more than I fit in with the people I saw every day at my job in New Zealand.

Even though I found my community in the blogosphere, I was still very much an outsider – because I wasn’t from Silicon Valley. Being an outsider is not the same as being isolated. You choose to be an outsider, you don’t choose to be isolated.

Being an outsider actually worked to my advantage as a tech blogger, because it allowed me to step back and take a bird’s eye view of what was going on in Silicon Valley. I soon attracted other outsiders to write the blog with me: a Russian-born entrepreneur with a gift for deep analysis of Web trends, a big-thinking Turkish entrepreneur, a promising young blogger from Rhode Island, a previously unknown Tampa Bay blogger (now one of the best in the business), and a number of talented bloggers from that most alternative of US cities, Portland. Because of this disparate group of outsiders, none of whom were based in Silicon Valley, Read/Write Web became known as an analysis blog that looked at technology a bit differently.

That’s not to say that going to the home of technology wasn’t important. I traveled to Silicon Valley on a regular basis on Read/Write Web business, to attend events and eventually host a couple of our own. I first traveled to the US in 2005, when I attended the second annual Web 2.0 Conference in San Francisco. I immediately felt welcome in Silicon Valley. People there were curious about this odd fellow behind Read/Write Web and were interested in what I had to say, even if they couldn’t make out my accent sometimes (I pronounced Web as “Weeb,” apparently).

It was a time of trial and experimentation on the Weeb, I mean Web. This era even had a name, “Web 2.0,” although nobody actually knew what that meant. All we knew was that innovation was happening again and that something new was emerging from Internet technology. My opinions about this sea change were as good as anyone else’s. People listened. They liked hearing from me and seeing me. So yes, visiting the valley was important.

For most of my time though, I was in a place that is more than 10,000 kilometres away from the valley. Even so, I never felt that isolation was an issue. I met hundreds of like-minded people through blogging and for the majority of them, it was only a virtual connection. Up till about 2009, I hadn’t even met many of my own staff! I was the only one of Read/Write Web’s full-time workers to live in New Zealand. All the others lived in the US. But again I didn’t feel isolated, because we communicated daily using virtual tools like Skype and Basecamp. I felt a sense of belonging, to both the blogosphere and to the company I’d created.

I sold the blog in late 2011. I continued to work there and travel to the US, to visit the new ReadWrite office in San Francisco and go to events. But things had changed. I was increasingly out of the loop with business decisions, since most of the time I wasn’t in the office when they were being discussed. I was also sensing a certain resentment from people I knew in the valley. On one trip a successful media entrepreneur from the valley, who I knew well, said to me in a rather disdainful voice: “So you just come over here every few months, get what you need and go back home?” The implication was that because I hadn’t chosen to base myself in Silicon Valley when I sold the business, I wasn’t really a part of it anymore. He’d said it jokingly, but it was on the mark. I was beginning to feel like I didn’t belong.

To be clear, this was all my own doing and I have absolutely no regrets. It was my decision to sell the business and it worked out well for me financially. But by selling Read/Write Web, I knew I’d be handing over the controls. I also chose to stay in New Zealand after selling, for personal reasons. Then when I left the site in October 2012, I stopped regular blogging and focused my attention on a new goal: writing a book. All this led to a feeling of isolation, from Read/Write Web and the blogosphere at large.

The solution was to rebuild my connections to people, which I began to do over 2013 and beyond. People who love Internet technology are still my community, but they are now primarily reachable through social media. So I try to stay active on Twitter and Facebook, and write occasionally on my personal blog. I’m still that same outsider I was when I founded Read/Write Web. Only now my primary creative outlet is different: books. So I make an effort to keep my presence up on the Web, while I slog away at writing books.

The takeaway for fellow entrepreneurs from New Zealand, or indeed any other country that is far away from Silicon Valley, is to stay connected. Find your community and get involved. Geography is no longer an excuse to be isolated. It still matters, but not as much as you’d think. After all, I managed Read/Write Web for nearly ten years from across the ocean.

Why Following People On Twitter Is Broken (And What To Do About It)

Most social media products rely on the ‘friend or follow’ model of connecting to other people. Friending usually means a two-way connection, whereas following is most often a one-way connection. Friending is inherently a strong connection, because it mimics how we operate in real life – friends socialize and share things together. But just how useful is it to have a follow connection with someone? I’m going to argue that it’s becoming less and less useful. Particularly on Twitter.

The follow model on Twitter has a glaring problem: it doesn’t scale in terms of usability. The math is simple. The more people you follow, the noisier it gets. So if you follow hundreds or thousands of individuals, as many of us do, you’ll get overwhelmed with chatter.

There is a solution, albeit one that Twitter discourages you from using, perhaps because it doesn’t require you to follow anyone at all. I’m talking about lists.

Lists are my favorite feature of Twitter. I’d go as far as to say that most of the value, in terms of content, that I get from Twitter comes from lists. Put another way: most of the best tweets I see are from people I don’t follow.

The Value Of Lists

A Twitter list is essentially a group of people who have something in common. Twitter itself defines a list as “a curated group of Twitter users and a great way to organize your interests.” You can create your own public or private lists, or subscribe to the public lists of others. Regardless of whether you curate or someone else does, here’s a key point: you don’t need to follow a person in order to see their updates on a list. For example, if you create a list of tennis players, you can add Novak Djokovic to it without following him personally.
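The same mechanics are exposed programmatically. Here’s a minimal sketch of reading a list’s timeline via Twitter’s REST API v1.1 and its lists/statuses endpoint; the OAuth credentials and the list owner/slug below are placeholders, not real accounts:

```python
# Minimal sketch: read a Twitter list's timeline via the v1.1 REST API.
# Credentials, owner_screen_name and slug are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

resp = requests.get(
    "https://api.twitter.com/1.1/lists/statuses.json",
    auth=auth,
    params={
        "owner_screen_name": "some_curator",  # placeholder: list curator
        "slug": "tennis-players",             # placeholder: list name in URL form
        "count": 50,
    },
)
resp.raise_for_status()

# Every tweet on the list comes back, whether or not you follow its author.
for tweet in resp.json():
    print(tweet["user"]["screen_name"], "-", tweet["text"])
```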

Most of the lists I’ve created or subscribed to are topical. I have a list for health tech people, a list for people I interviewed for my book Trackers, and a list for Virtual Reality people and companies (the topic of my current book project). Some of my lists are personal in nature, such as a list for my friends and a list for my favorite tweeting authors. As well as creating my own lists, I subscribe to a number of public lists related to health tech, VR, and other interests.

Subscribing to public Twitter lists is often a better idea than creating your own. For example, I subscribe to the Digital Health list of Paul Sonnier. He’s a leading Twitter influencer in the health tech market, so he’s much better qualified to curate a health tech list than I am. There are 1,700+ people on Sonnier’s Digital Health list. I probably follow about 10% of them, at most. Yet I can see tweets from any of those 1,700+ people on Paul’s list, at any time, because I subscribe to his list. I feel much more connected with the health tech industry, thanks to Sonnier’s curation.

Paul Sonnier’s Digital Health list.

You may be thinking: well, isn’t there a lot of noise too on a list with 1,700+ people? That’s true, but it’s much easier to scan a topic-focused list of Twitter users and pick out the good stuff. A well-curated list, like Mr Sonnier’s, will mostly stick to its topic.

Why Does Twitter Hate Lists?

The most common ways to use Twitter are via its website or its mobile apps. Yet strangely, on both the Web and mobile, Twitter hides its lists away in an unintuitive menu. On Twitter.com, lists are absent from the top menu and hard to spot elsewhere. As for Twitter’s mobile apps, when the iOS 8 version was released I at first thought the lists feature had been deleted entirely. But after googling, I discovered its new location: click the Settings icon on your profile page (a tiny, wheel-like icon), and your lists are in a sub-menu below ‘Settings’. That’s right folks, your lists are in the Settings menu.

It’s no wonder lists aren’t as well used as they ought to be. For the record, I use Hootsuite to view my Twitter lists. The Hootsuite dashboard has a tabs interface so that you can further organize your lists. Another option is TweetDeck, an app that Twitter owns. But I’ve had syncing problems with TweetDeck in the past, so I’m afraid I can’t recommend it. Here’s a glimpse of my Hootsuite setup:


So why does Twitter hide away its lists? I can think of two reasons. Firstly, Twitter is obsessed with being a large-scale social network like Facebook. It wants people to socialize on Twitter, which means following and talking to other people. But that’s simply not going to happen, at least on the scale its executives want. Most people will do their social networking on Facebook, which has a massive social graph. Twitter’s true value is in its interest graph, not its social graph. In other words, it’s more valuable to follow topics rather than people on Twitter. Gee, I wonder what feature allows Twitter users to do that?

I suspect the second reason why Twitter discourages lists is that it thinks they’re too difficult for the average user to understand. But Twitter has shot itself in the foot here, because lists are easy to understand if you make the user interface simple. For a great example of this, look at how Flipboard does lists. Flipboard has topic “boards” (a.k.a. lists) that are front and center in the product and deliver immediate value to new users. I’m sure Flipboard finds this an effective way to onboard new users and encourage regular usage. Twitter badly needs new users, and it needs them to be more active. So why not promote topical lists on the Twitter homepage, rather than the accounts of celebrities and of power users who aren’t even famous?


Finally, a word about Twitter and third party apps. We all know that Twitter blew it with the developer community in 2012, when it began to limit access to Twitter’s data. I won’t retread that, but I do think third party apps could make great use of lists. Take Nuzzel for example, the app that filters content based on who you follow on Twitter (and Facebook). The idea is to show you the most talked-about articles at any one time. Nuzzel is already a neat app, but how much more useful would it be if it could also filter based on Twitter lists? Imagine Nuzzel’s technology applied to Paul Sonnier’s health tech list; it would be like a Techmeme for health tech news.
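To make that concrete, here’s a toy sketch of the core of such a filter. The input format is invented for illustration; the point is that ranking links by how many distinct list members shared them gives you a Techmeme-style front page for any list:

```python
# Toy sketch: rank links by how many distinct members of a Twitter list
# shared them. In a real app the (screen_name, urls) pairs would come
# from the list's timeline; here they are hard-coded sample data.
from collections import defaultdict

def rank_links(tweets):
    sharers = defaultdict(set)  # url -> set of list members who shared it
    for screen_name, urls in tweets:
        for url in urls:
            sharers[url].add(screen_name)
    # Most widely shared links first.
    return sorted(sharers.items(), key=lambda kv: len(kv[1]), reverse=True)

sample = [
    ("alice", ["http://example.com/vr-trial"]),
    ("bob",   ["http://example.com/vr-trial", "http://example.com/fda-news"]),
    ("carol", ["http://example.com/vr-trial"]),
]
for url, members in rank_links(sample):
    print(len(members), url)
```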

The Case For Using Lists

Let’s return to my original question: how useful is it to have a one-way follow connection with someone? Clearly there is a lot of value to be had in following someone who has consistently good content and/or who you want to engage with. That’s why I follow Paul Sonnier on Twitter, for example. But there’s a limit to how many of those people I can keep track of, let alone interact with. That’s why it doesn’t make sense for me to follow the 1,700 people on Sonnier’s health tech list. Subscribing to his list is enough.

I’m not saying the follow model of social media connection will die any time soon. But there’s room for much more experimentation with groups of people in social media, especially ones who share a particular interest. Which is precisely what lists are for on Twitter.

So forget following people, or at least following masses of people. Create lists and subscribe to the topical lists that others have created. The interest graph is the core of Twitter and you should make better use of it. With any luck, Twitter will too.

ReadWrite Turns 12

Twelve years ago to this day, I started a blog called Read/Write Web. It’s now called ReadWrite.

Twelve years is a long time in the Internet world. Back in 2003, Facebook had yet to be launched from Mark Zuckerberg’s Harvard dorm room, there was not yet a YouTube or Twitter, Apple’s most popular product was the iPod, and the height of mobile technology was a WAP browser.

Today, three of the services I just mentioned are among the top 10 websites in the world (Facebook, YouTube and Twitter), mobile rules the Internet, and Apple’s iPod is all but obsolete.

I’m a bit sentimental, so I like to raise a toast to the founding of ReadWrite on the 20th of April every year. But I know full well that the Internet has only one speed and one direction: fast and forward.

In another 12 years, wearables (or implants!) will have replaced mobile phones, the Internet of Things will be earth’s infrastructure, and Virtual Reality will be interchangeable with Real Life. Which is why I’m looking into all that stuff right now, as a writer and a consultant.

A blogger? If those still exist, I suppose I’m an amateur one now. No, I’ll always be a blogger. A blogger from an upside-down timezone. One who lived in the virtual world and beamed into Silicon Valley when needed.

Here’s to the future, because ReadWrite is still going strong. My best wishes to Owen Thomas and the Wearable World team, as they continue to map the near future.

As for the past, well there’s always the Wayback Machine.

ReadWriteWeb 2003


Emerging Markets: Social VR

The virtual world Second Life launched in mid-2003 and from the start it was out of place. Its jerky and cartoonish graphics seemed to belong more to the 1990s than to the new era of social software. Nevertheless, Second Life enjoyed as much media attention in the 2000s as Web 2.0 darlings like Flickr and YouTube.

Nowadays, in the middle of the 2010s, Second Life gets much less attention. Its user base peaked in 2008, at about 550,000 active users. But the company could be about to enjoy a…well, second life. With the help of new Virtual Reality (VR) headsets like the Oculus Rift and Sony’s Project Morpheus, Second Life may yet become a stellar example of social software. Although, as you’ll discover in this essay, perhaps Facebook and a new company called High Fidelity will be the big winners in this emerging market.

The Missing Link in Second Life

To understand the opportunities in social VR, we first need to do a quick review of the past. On 1 May 2006, at the height of Web 2.0, BusinessWeek ran a cover story on Second Life entitled My Virtual Life. The story was about Anshe Chung, the Second Life avatar of Ailin Graef. A Chinese-born school teacher who lived in Germany, Graef had made US$250,000 selling virtual land on Second Life. Later that year she became Second Life’s first real-world millionaire. These days Anshe Chung (the name she now goes by professionally) runs a virtual reality studio, which employs about 80 people, and has invested in a number of other VR-related startups.

Anshe Chung

Chung/Graef is still the poster girl of business success on Second Life, but in reality she’s an outlier. The vast majority of Second Life “residents” (its term for users) either don’t make any money or earn a pittance. That’s because the primary markets in Second Life are related to the product itself: virtual land and virtual fashion. It’s like the early days of tech blogging, in 2003-04, when we mostly blogged about blogging.

Second Life will only become truly interesting, in a business sense, when its users can make money from the real world. And what’s the key to that? You guessed it: better social experiences.

Social VR Experiments

One market where VR could lead to better social experiences is live music. A musician already experimenting with virtual reality is Jordan Reyne, a Goth-influenced singer-songwriter who has performed many times within Second Life. Most recently, she gave a series of “inworld” concerts in March.

Jordan Reyne in Second Life

In a blog post entitled How to Attend an Online Gig, Jordan explained that Second Life is a good way for fans to hear a live show no matter where they’re located IRL (In Real Life). Another benefit of a Second Life concert is that the performer can interact with the listeners. In a review of a recent Jordan Reyne concert, Second Life blogger Ciaran Laval noted that “Jordan belted out several tunes to an eager audience and allowed them to vote on which cover song she should perform.” OK, the audience at Jordan Reyne’s concert included dwarves and small bears, but the point is these were real people at a live concert.

Jordan Reyne concert

If you look beyond the cartoonish aspects of Second Life, the fact that it brings people together in one (virtual) place indicates that it has potential as a market for social experiences. But Second Life needs to get the user experience right for mainstream users. Which is where Virtual Reality comes in.

The Need For Better Virtual Spaces

VR is a natural fit for virtual worlds, because it creates a far more immersive experience than watching two-dimensional dwarves and bears on your PC screen. A key word in VR is “presence,” meaning how believable a virtual reality experience is. If Second Life enables its users to transport into a 3D world using VR headsets, it will increase presence significantly. It’ll feel like you’re really there, at that live concert.

The problem is that VR headsets are not yet ready for commercialization. The one attracting the most interest, the Oculus Rift, hasn’t even been released in a consumer version. But it’s coming; Oculus 1.0 is likely to be released before the end of 2015.

Along with good VR headsets, we’re going to need compelling virtual spaces to explore. Second Life will be a leader in that, but perhaps the one to watch is the second company of Second Life creator Philip Rosedale. He formed High Fidelity in 2013, with the aim of providing “shared virtual reality” on a worldwide scale. It’s clear that Rosedale wants to bring the real world to virtual reality:

“We believe that both the hardware and the internet infrastructure are now available to give people around the world access to an interconnected Metaverse that will offer a broad range of capabilities for creativity, education, exploration, and play.”

High Fidelity is trying to do what Second Life must do also. To quote its homepage, virtual worlds need to become “incredibly immersive” and the social interaction more “lifelike and emotional.”

Facebook & Social VR

Let’s not forget that Facebook is hugely invested in Virtual Reality taking off, due to its acquisition of Oculus in March 2014. Nobody quite knows what a 3D Facebook will be like, but it will certainly open up new markets for virtual shared experiences. If you think shared video streams from Meerkat and Periscope are an interesting development, remember that they’re 2D and not in the least immersive.

I began this essay talking about Second Life, but it may not even be a winner in this new world of VR in virtual worlds. Not for lack of trying, because Linden Lab (the company that runs Second Life) is in the process of developing a new version of Second Life for the Oculus Rift.

Second Life is a survivor and I think it will adapt well to VR headsets. But I wouldn’t be surprised if it’s Facebook and High Fidelity that will take VR mainstream, by delivering compelling social VR experiences to a wider audience than Second Life has reached so far.

Whatever the case, entrepreneurs and businesses should start thinking now about what social software will be like when VR hits it big.

If you’d like to further explore the business opportunities in virtual reality, you may be interested in my consulting services.

Need a Product Evaluation or Market Research Report? Enquire Within…

I’d like to announce that I’m available for consulting engagements, as an Internet analyst and startup advisor. If you’re looking for market research, product evaluation or advice for your startup, then send me an email to enquire.

I just completed my first consulting assignment for 2015, a product evaluation report for an upcoming Apple Watch app. My client was very happy with the report and wrote this recommendation on my LinkedIn profile:

In my research relating to wearable technology I came across Richard’s book Trackers which provided valuable insight into the future of ‘the quantified self’.

I was impressed by Richard’s work and contacted him to provide additional research and analysis as a consultant relating to my specific technology. Richard was impressively prompt and professional and his expert insight and research have proven to be critical for development of my technology.

I give my strongest recommendations for his work and reputation and will surely use his services in the future.

As for my experience, being the founder and CEO of tech analysis blog ReadWriteWeb for nearly a decade is one clear qualification. But I also have prior experience as a consultant. I did this type of work in the early days of ReadWriteWeb, as a way to earn an income as I bootstrapped the business. You can find recommendations for some of that work on LinkedIn.

Why am I doing consulting again now? Well, for one, I’d like to earn some coin again, since writing books isn’t a very lucrative business. I’m also keen to jump back into the exciting world of startups and tech business.

You can find more details about my consulting services on my website. Send me an email (info AT if you’d like to follow up.

My New Book Project: A Novel About Virtual Reality

I recently started work on a novel, on the theme of Virtual Reality. I think VR is the most promising consumer technology around right now. Products like the Facebook-owned Oculus Rift, Sony’s Project Morpheus and Samsung’s Gear VR are all early stage (Oculus hasn’t even released a version 1.0 for consumers). But VR is poised to change the world.

My first book, Trackers, was nonfiction. It was about self-tracking, which in my view has been one of the most interesting consumer technologies of the past five or so years. Why? Because it has fundamentally changed the way we manage our health. And now the Apple Watch is about to make self-tracking mainstream.

Virtual Reality seems to be at a similar point to where self-tracking was in 2007, or where what became Web 2.0 (and led to YouTube, Facebook et al) was in 2003. The technology is a couple of years away from maturity, but the potential impact is huge.

Because VR is a work in progress, I decided the best way to explore this technology was to write a work of fiction. So that’s what I’m attempting. My role models in this endeavor are some of my favorite novelists: J G Ballard, William Gibson and Tom Wolfe.

I should mention that in 2014 I started a second nonfiction book, on the topic of Douglas Engelbart and The Mother Of All Demos. But I’ve put that project on hold, as I couldn’t find a way to make it a compelling Laura Hillenbrand-esque narrative. The trouble with writing nonfiction about technology is that there is usually very little action or excitement in the narrative. The solution, at least for me at this time, is to make up my own action and excitement! In other words, write fiction instead.

Even though I’m now writing a novel, my goal is the same as it’s always been: to explore technology. That’s been my modus operandi as a writer since the founding of ReadWriteWeb in 2003.

If you’d like to follow or help me in my new writing adventure, I’ll be active most days on Twitter.

Paradise Lost: How Moreover Won & Lost The Real-Time Web

News aggregator Moreover was born in the late 1990s, at the same time as Google. At one point, Moreover dominated Google in the delivery of real-time news. So why did Moreover turn its back on the Consumer Web…

The late 1990s was the middle of the Dot Com boom. Looking back, we tend to associate this period of intense growth with e-commerce startups. During 1998, eBay went public, PayPal was founded, Amazon.com was spending big to “expand its reach” and Pets.com launched. Also started in the late 90s, but with much less noise, were two pioneering information management businesses. In 1998, two Stanford University students working from a Menlo Park garage were testing a new search engine, called Google. At about the same time, three Englishmen joined forces to create a “news aggregator” called Moreover.

We all know the Google story by now. Much less well known is the story of Moreover and how it changed the way we consume news. Basically, what Moreover did was gather news headlines from all over the Web and make them available to other websites to use. Both were pioneering search engines, in their own way. Google ‘spidered’ the Web for links, while Moreover ‘scraped’ news websites for headlines. Google ended up building the best all-purpose search engine in the world. Moreover succeeded in building not only the best online news distribution tool, but (almost by accident) the first news search engine. Indeed Moreover was the catalyst for Google building its own news aggregator, Google News.

The story of Moreover is also one of opportunity lost, for both Moreover’s founders and for consumers of online news. Because within the space of just a few years, Moreover switched from a consumer business model to an enterprise model. It went from being an ‘information wants to be free’ enabler to a lock-and-key information gatekeeper.

It’s not as if Moreover wasn’t successful in the consumer market. In 2001, Moreover had Google on the ropes as a distributor of online news to consumers. When the 9/11 tragedy struck, the best way to get updates online was the Moreover-powered news search on AltaVista. But soon after, Moreover turned its back on the consumer market for the easy money in enterprises.

It’s particularly galling for consumers of online news today, since the powerhouses of this era — Facebook and Twitter — are terrible at filtering and organizing online news. Precisely the expertise that Moreover had, and still has.

Late last year, Moreover was quietly sold for the third time. The latest acquirer seems fitting, given the fate Moreover chose for itself: LexisNexis, one of the earliest gatekeepers of electronic information (it was founded in 1970).

Moreover is a good case study of what happens when consumer innovation is stifled early on. Had Moreover kept going on its original groundbreaking path, it might well have solved some of the problems that frustrate us today on the consumer Web: a lack of intelligence in sourcing quality news, the paucity of quality topic-tracking tools, and a general unwillingness among the big companies (Google, Facebook and Twitter: I’m looking at you!) to let users filter and organize information. Moreover’s technology could’ve made a difference, had it chosen a different route.

How Moreover Got Started

The founders of Moreover were Nick Denton, David Galbraith and Angus Bankes. Galbraith and Bankes were the developers, Denton the business guy. Denton is the most well-known of the trio now, as the always quotable founder of blog network Gawker. In 1998, he left his job as a journalist at the Financial Times in order to raise money for Moreover. He roped in two early investors: angel investor Richard Tahta and British venture capitalist Christopher Spray of Atlas Venture. A press release dated June 2, 1999 announced the seed round, which valued Moreover at $3.75 million.

On the same day, Moreover launched its website. It enabled anyone to add a list of news headlines to their own website, by copying and pasting a snippet of code. Denton’s ambition was “that every site should have a Moreover section, of web-style newsfeeds that we provide.”

The service was free for webmasters and developers. But to make money, Moreover also had a premium offering for corporate and media websites. Search engine portals were its prime target. Portals were all the rage in the late 1990s and early 2000s, as destination webpages for all kinds of content: news, email, weather, and more. Moreover’s news headlines would be perfect for portals, as well as company intranets (which were essentially portals too — I know this because I built and managed intranets for a living back then).

Moreover sometime in 2000. Image credit: Brian Kelly, University of Bath, Reflections On WWW9.

Although it charged companies to access its news feeds, Moreover itself didn’t pay news providers for the links. Instead it used a clever technology hack to “scrape” the headlines. At the end of August 1999, Denton described Moreover to a technical message board as “an internet service which scrapes headline links from about 1,500 sources on the web, and across more than 150 categories of news.”

This ‘scraping’ method eventually attracted controversy, because to some news media companies ‘scraping’ was a synonym for ‘stealing’. The way Moreover preferred to view it was that it was doing the world a service, because it organized information — much like Google was doing. As David Galbraith described it for a presentation at the 2000 World Wide Web Consortium, held during May in Amsterdam, Moreover was “taking unstructured data and delivering structured results in various flavors of XML.”
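Moreover’s actual pipeline was never published, but the idea Galbraith describes can be sketched in a few lines: pull the headline links out of raw HTML, then emit them as structured XML. The page layout and the output schema below are invented for illustration:

```python
# Toy version of scrape-then-structure: extract headline links from
# unstructured HTML and emit structured XML. The HTML layout and XML
# schema are invented; Moreover's real scrapers were far more elaborate.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    """Collects (text, href) for each link found inside an <h3> tag."""
    def __init__(self):
        super().__init__()
        self.in_h3 = False
        self.href = None
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h3":
            self.in_h3 = True
        elif tag == "a" and self.in_h3:
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "h3":
            self.in_h3 = False

    def handle_data(self, data):
        if self.in_h3 and self.href:
            self.headlines.append((data.strip(), self.href))
            self.href = None

page = """<html><body>
<h3><a href="http://news.example.com/a1">Markets rally on tech earnings</a></h3>
<h3><a href="http://news.example.com/a2">New browser version released</a></h3>
</body></html>"""

parser = HeadlineParser()
parser.feed(page)

feed = ET.Element("headlines", source="news.example.com")
for title, url in parser.headlines:
    item = ET.SubElement(feed, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = url
print(ET.tostring(feed, encoding="unicode"))
```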

The Rise of RSS, Blogging & The Real-Time Web

In 2000, a Web publishing format called RSS began to gain momentum. Originally developed by Netscape in 1999, RSS was based on XML and enabled websites to publish a real-time feed of news. The acronym had several definitions: Rich Site Summary, RDF Site Summary, or Really Simple Syndication. The reason for this was that development on RSS had forked into two separate projects: the first run by blogger and Web developer Dave Winer, the other a breakaway group of entrepreneurs and developers called the RSS-DEV Working Group.
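For readers who never handled one directly, an RSS 2.0 feed is disarmingly simple. Here’s a minimal made-up feed, plus the few lines of Python it takes to read one:

```python
# A minimal RSS 2.0 document: a <channel> wrapping <item> entries, each
# with a title, link and date. The feed contents here are invented.
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <link>http://news.example.com/</link>
    <description>Latest headlines</description>
    <item>
      <title>Moreover signs another portal deal</title>
      <link>http://news.example.com/moreover</link>
      <pubDate>Mon, 11 Jun 2001 09:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(rss).find("channel")
print(channel.findtext("title"))
for item in channel.findall("item"):
    print(item.findtext("pubDate"), "-", item.findtext("title"))
```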

In February 2000 Denton and Galbraith traveled to Silicon Valley to get their heads around the politics of RSS. The pair ate spicy noodles in Menlo Park with Dave Winer. [Incidentally, I too have had the privilege of eating spicy noodles with Dave Winer — along with Michael Arrington, Gabe Rivera and Fred Oliviera — in October 2005.] Despite the appeal of Winer’s format, which was already popular with bloggers, Moreover sided with the RSS-DEV Working Group — which didn’t release its version until December 2000.

Nick Denton eating spicy noodles with David Galbraith and Dave Winer, February 2000. Photo credit: Dave Winer

Moreover soon cornered the market on the commercial use of RSS. It began to position itself as “the webfeed company” and claimed to have “the world’s largest collection of webfeeds.”

Moreover also tried to grab a big slice of the blogging market. Sometime during 2001, Nick Denton tried to persuade the Moreover board to buy the pioneering blog publishing platform Blogger. The price was $3m, but the board balked at the deal. Frustrated at not getting his way, Denton quit as CEO in August 2001.

As an ex-journalist, Denton saw potential in blogging and he began to experiment with this new form of publishing. He bought a domain name in July 2002 that became the early flagship of his media empire, Gawker.

Despite Denton eventually being proven right about blogging, Moreover was right to keep its focus on news aggregation in 2001. Indeed, shortly after Denton resigned as CEO, Moreover made a breakthrough as a source of real-time news. It was due to the 9/11 tragedy. That fateful day, everyone wanted news updates urgently. It turned out that AltaVista, not Google, was the fastest way for people to get updates. That was thanks to the Moreover-powered ‘Top News Stories’ widget on AltaVista, which meant the latest news stories about the terrorist attack were just a click away. AltaVista also had a special ‘News Search’ box on its homepage, which enabled users to search across the more than 2,000 news sources scraped by Moreover at that time.

AltaVista on 9/11/01.

As well as AltaVista, Moreover had licensed its technology to other leading search engine portals of the era — including Yahoo!, MSN and Ask Jeeves.

Ironically, Google had overtaken AltaVista as the leading search engine on the Web earlier in 2001. But Google hadn’t realized the importance of real-time news updates and search, until it saw Moreover’s widget on AltaVista. Development on Google News began shortly after, finally debuting in September 2002. Several months after that, Google acquired what Denton had coveted — it bought Blogger for an undisclosed sum in February 2003.

Google may’ve been slow off the mark with real-time news, but after 9/11 it made all the right moves. The same can’t be said for Moreover after 2001.

The Switch to Enterprise

Moreover’s exposure on the leading dot-com search engines was as far as it would go in the consumer world. Soon after, it began chasing the easy money in the enterprise world.

By the middle of 2002, Moreover was promoting itself as “a provider of real-time information management solutions to Global 2000 companies.” Its PR became a jumble of enterprise IT buzzwords. Moreover promised to “impact business decisions,” “capture the most actionable information” and “provide business users with continuous function-specific information.”

This switch in focus was a great shame. It’s tempting to wonder how far Moreover could’ve gone in the consumer Web market, considering that another golden era (which came to be termed ‘Web 2.0′) was just around the corner. Consider that Moreover had built a pioneering news syndication and distribution service. It had virtually cornered the consumer market for this, with its near blanket coverage on the major search engines of the era. The only major search engine Moreover wasn’t featured on was Google. But at the time, 2000–2001, Google was a distant second to Moreover on real-time news. Google wasn’t yet a big threat.

Yet despite holding all the cards in the nascent RSS and news syndication market, Moreover went the conservative route and marketed itself to corporations. The conclusion is clear: Moreover missed a huge opportunity to out-maneuver Google and dominate the real-time Web.

Blogger Jason Kottke, who worked as a designer for Moreover from late 2000 to mid 2001, lamented in an October 2003 blog post that Moreover was no longer “at the forefront of this still-developing space, building on those innovative ideas that they weren’t able to execute on.” The frustrating thing is that Moreover was certainly capable of executing on its innovations. It simply chose the safer route. Co-founder David Galbraith later suggested this was due to internal pressure from its board to find “a revenue model.”

The Exit: Verisign

Despite the opportunity lost in Web 2.0, Moreover’s enterprise pivot was undoubtedly profitable. It also led to Verisign’s $29.7m acquisition of the company in 2005.

Verisign, an Internet infrastructure company, wanted to become the Grand Central Station of real-time news — the central point from where it all flowed. A not unrealistic goal, considering that at the time Moreover was gathering news from “more than 12,000 news sources and millions of blogs.” Google was worried about this, so it put in a late bid for Moreover. But the deal went ahead with Verisign.

Verisign made another, similar, acquisition at the same time, buying Weblogs.com from Dave Winer for $2.3m. Weblogs.com tracked any change made to a weblog, via a simple “ping” to the web server. As blogger Tom Foremski put it, the two acquisitions together enabled Verisign to “track most new content published online, and track who accessed it and where.”
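The ping itself was a single XML-RPC call, following Dave Winer’s published weblogUpdates spec. Here’s a sketch; the blog name and URL are placeholders:

```python
# Sketch of a weblogs.com update ping, per the weblogUpdates XML-RPC spec:
# a blog announces it has changed by sending its name and URL.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://rpc.weblogs.com/RPC2")
response = server.weblogUpdates.ping("Example Blog", "http://blog.example.com/")
# Per the spec, the response is a struct with 'flerror' and 'message' fields.
print(response)
```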

At a time when RSS and blogs were the primary ways to disseminate news on the Internet, Verisign was suddenly looking like a potential kingmaker.

Verisign had at least a couple of years in which to press its advantage, but it frittered them away with corporate dithering. It wasn’t until 2007 that social networks — and in particular Facebook and Twitter — began to challenge RSS and blogs as content distribution platforms. But from 2005–2007, Verisign did virtually nothing with Moreover on the consumer Web.

What Verisign did instead was double down on the enterprise strategy. Which meant more impenetrable marketing-speak. In May 2006, Verisign announced the release of something called the “Moreover Connected Intelligence (CI) Newsdesk 3.0.” The market yawned.

At about the same time, I was writing on ReadWriteWeb about how a young startup called YouTube had doubled its traffic in May 2006, from 6.6m unique visitors in April to 12.6m in May (YouTube now has over a billion users). Of course in 2006 nobody had any clue how big social networking would become, but something was happening here. Whatever it was, Google noticed. It acquired YouTube in October 2006 for $1.65 billion, at the time a staggering amount.

In this context, Moreover’s connected-newsdesk-blahdy-blah was at best dull and at worst irrelevant.

Moreover continued to sleepwalk through 2007. By the end of that year, Verisign had given up completely — it announced a plan to sell off Moreover and other “non-core” businesses.

To add to the frustration, Moreover’s scraping of news feeds finally got it into legal trouble. In October 2007, Associated Press sued Verisign for copyright infringement. The lawsuit was settled “amicably” in August 2008. Although the terms weren’t disclosed, the fact that Moreover continued to include AP’s links in its service suggests that a licensing agreement was reached.

The Rise of Social Media; Moreover Goes Indie Again

After yet more dithering, Verisign finally sold Moreover in May 2009 to a group of investors led by former AOL executive Paul Farrell. Included in the transaction was Weblogs.com, which led to a name change for the combined company: Moreover Technologies, Inc. At the time of the sale, Moreover claimed to catalog “450,000 news articles daily from more than 30,000 news sources.”

Paul Farrell assumed the reins as CEO of the newly independent Moreover just as social media was starting to dominate the online news landscape. The month after Moreover was sold, Facebook began to make public posting the default for its users. Prior to 2009, Facebook was primarily a private social network. This shift in strategy by Facebook was largely in response to the growing popularity of Twitter, which had a breakout year in 2009. In a June 2009 blog post, Moreover itself noted that “the traditional blog [is] no longer the main vehicle for expression.” Instead, it was “micromedia platforms like FriendFeed, Posterous, and Twitter.”

While the term “micromedia” quickly went out of fashion, the big picture trend was evident by the end of 2009: social media was taking over from RSS and blogging as the main distribution method for online news.

Under Farrell’s leadership, Moreover adapted to this sea change. It continued to focus on the enterprise market, so nothing earth-shattering was developed. But to its credit, Moreover poured resources into social media monitoring. In June 2010, it announced a new product called the ‘Social Media Metabase business intelligence portal’. Moreover was now scanning 12 million news sources, a 390% increase from a year earlier — thanks to the addition of millions of new social media accounts.

Gatekeeper 2.0

Nothing of note happened for the next four years, until Moreover was acquired by LexisNexis in October 2014 for an undisclosed sum. The acquisition wasn’t even noticed by the technology media. Since LexisNexis is a legacy gatekeeper of digital information (primarily legal content), it just seemed like a big fish swallowing a smaller one.

Moreover homepage, March 2015.

But the LexisNexis acquisition was deeply ironic, because Moreover was created in 1998 to be the antithesis of LexisNexis.

‘Information wants to be free’ was the rallying cry behind early Web innovations like Wikipedia and Blogger. In part, that philosophy was in response to the first wave of digital information management companies — like LexisNexis. Those companies held information under lock and key. If you wanted access, you had to pay for it.

During its first few years, in the late 1990s and early 2000s, Moreover had challenged this model. It had made access to information — in this case, news content — much more freely available. It put that content onto many of the major search engines and portals of the Dot Com era. It allowed bloggers and small niche websites to access and use that content too.

In my research I stumbled across an article dated 1 January 2000, in which two LexisNexis executives discuss the then young startup Moreover. “Wasn’t this UK company,” asked the reporter, “by providing news feeds for free, flying in the face of the established online vendors?” The European Director of LexisNexis at the time, Jon Webb, agreed that Moreover presented a challenge. But he questioned its long-term viability. “If all the information which is currently for free remains free and increases,” said Webb at the beginning of 2000, “then where is the profit going to come from to pay for the endeavour? The quality must suffer and where is the incentive for you, as a journalist, to write, or me manage? There are still many people out there who value trusted sources.”

It turns out that Jon Webb of LexisNexis was right: there wasn’t enough profit for Moreover in providing news feeds for free. Or at least, not enough profit to satisfy Moreover’s board and investors. The bursting of the Dot Com bubble didn’t help. So in 2001, at the same time it was successfully beating Google on real-time news, Moreover began to look for a solid revenue stream. There was an easy solution: sell its news aggregation technology to enterprise customers.

In short, in 2001 Moreover became an information gatekeeper — just like LexisNexis.

The further irony is that Moreover was able to become a gatekeeper despite not paying for information itself. The only exception was when Moreover was sued — then it had little choice but to pay. For the most part though, Moreover freely ‘scraped’ information from thousands of news sources around the Web.

What If…

What if Moreover had continued to make its technology available to consumers for free, instead of becoming a gatekeeper? Instead of Verisign and then LexisNexis acquiring it, it could’ve been Google or Yahoo. Indeed, think what Twitter or Facebook would be able to do with such powerful information management software. The Achilles heel of both Twitter and Facebook is that neither does a good job of filtering and organizing news content.

I’m not faulting the founders of Moreover — Denton, Galbraith and Bankes — for not following through on their early innovation. The likely cause of the switch to the enterprise market in 2001 was financial pressure. Moreover’s board wanted the company to find a revenue source fast, so it chose to focus on corporate customers.

The real damage was done in 2005, when Verisign acquired Moreover. At the time, Moreover was still well positioned to disrupt online news — after all, Google itself put in a bid to buy it. But the innovation was stifled once and for all under the bumbling management of Verisign.

That said, history also suggests that Google probably would not have done much better. Witness how FeedBurner, the most innovative RSS management startup of the Web 2.0 era, was virtually shelved after Google acquired it in June 2007.

As for the information management landscape of today, it’s clear that Moreover under LexisNexis is totally irrelevant to consumers. If there’s one takeaway for the present I’d like to posit from the story of Moreover, it’s this:

We need a new Moreover for the Facebook/Twitter era.

What if a couple of clever developers like David Galbraith and Angus Bankes, and a big-thinking entrepreneur like Nick Denton, took on the information gatekeepers of today? What if they developed a tool that helped us better organize social media, track the topics we’re passionate about, and discover the pearls of data we’re looking for? What a divine service that would be!

Of course, the temptation would be the same: to sell out to an enterprise company.

