Way back in 2009, Steven Pemberton, a programmer and researcher who had contributed to many of the bedrocks of the modern Web, including XHTML and CSS, gave a presentation on his vision of the next evolution of the Web. While the first iteration of the Web, what could be called “Web 1.0”, was based around individual, isolated websites, mostly operated by brands and professionals, the “Web 2.0” paradigm that sprouted over the course of the preceding decade was based around centralized platforms designed to make it as easy as possible for anyone to contribute, allowing countless people to congregate in a single place and creating hubs for content created by thousands or millions of users. Social media services such as MySpace, Facebook, and Twitter represented the pinnacle of the Web 2.0 experience, but the paradigm also covered blogging sites like Blogger, WordPress, and the later Tumblr, more specialized services such as YouTube and LinkedIn, and even sites like Wikipedia, where the sum total of everyone’s contributions is more important than any one person’s individual contribution.
Then as now, the concern was that this threatened to concentrate too much data and power in the hands of a handful of sites owned by increasingly powerful corporations, and there was a certain amount of wistfulness for the free-wheeling, decentralized days of the early Web. What made it so insidious, though, was that each social media site’s success would feed on itself. As each site got bigger, people would flock to whatever site their friends were hanging out on, and brands and celebrities would flock to the sites with the most people on them, bringing their fans along for the ride. Network effects meant that larger social networks were disproportionately more valuable and powerful than smaller ones, with the result that even the personalities most concerned about centralization needed a presence on Facebook and Twitter. It was certainly possible for a social network to be supplanted, as MySpace and Digg gave way to Facebook and Reddit, but probably only by doing something to actively alienate its userbase and chase users off to another platform. As Google would find with Google+, even an immensely large, powerful company in its own right would find it difficult to muscle its way in next to the social media behemoths, though Google+ was marred by a number of mistakes in its launch and implementation that may have crippled its ability to compete. At its peak, Facebook was the Internet for many people, and anything they didn’t see on Facebook might as well not have existed. Regardless of how the products Facebook and Twitter offered compared to those of would-be competitors, the sheer size of their userbases was their biggest asset. Leaving for another site would mean leaving behind all your established friends and followers, not to mention all the content you had contributed, in favor of potentially yelling at nobody; yet staying would leave you at the mercy of that site’s features, its policies, and, in some cases, its continued existence.
Pemberton’s solution was to bring the best elements of Web 2.0 to the decentralized environment of Web 1.0, proposing that, rather than commit themselves to the walled gardens of proprietary, privately-owned networks, people instead create their own personal websites and fill them with machine-readable data that could be picked up by aggregators. A single website could host a resume that could be read by job-search sites and the equivalent of LinkedIn, genealogical information that could be read by family-tree tracing sites, photos and other images that could be read by the equivalent of Instagram, videos that could be searched for and viewed on the equivalent of YouTube, text updates that could range from short (picked up by neo-Twitter) to medium (picked up by the equivalent of Facebook or Tumblr) to long-form (as in the longest-winded blogs), and more. Just as email is a single protocol running on a wide variety of servers that can all talk to each other, with no one server in complete control, so could the largest of the Web 2.0 behemoths be turned into similarly decentralized protocols. Such sites could connect to any other site on the Web with similar data, creating a single massive network with all the power of a Web 2.0 site but without falling under the control of any one corporation, rediscovering the promise of the early Web that no one could control. Pemberton wasn’t the first to propose this decentralized model of the Internet; no less a figure than Tim Berners-Lee, inventor of the Web itself, had proposed the notion of a “semantic Web” a decade earlier. But it was Pemberton’s presentation that would introduce me to the concept and move me to write a post about it and how the advent of HTML5 could help make it a reality.
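To make the aggregator half of that vision concrete, here’s a minimal sketch of what such a crawler might do, under the purely illustrative assumption that each personal site embeds its machine-readable data as JSON-LD, one of the structured-data formats that grew out of the semantic-Web effort. The site URLs and the object types being filtered for are hypothetical placeholders, not any real service.

```python
# Minimal sketch of a "Web 3.0" aggregator: crawl personal sites and pull out
# whatever machine-readable JSON-LD they embed. URLs and types are placeholders.
import json
import re
import urllib.request

# Hypothetical personal sites that publish their own structured data.
PERSONAL_SITES = [
    "https://example.com/alice/",
    "https://example.org/bob/",
]

JSON_LD_RE = re.compile(
    r'<script[^>]+type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_json_ld(url: str) -> list[dict]:
    """Download a page and return every JSON-LD object embedded in it."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    objects = []
    for block in JSON_LD_RE.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed markup: skip it rather than abort the crawl
        items = data if isinstance(data, list) else [data]
        objects.extend(item for item in items if isinstance(item, dict))
    return objects

if __name__ == "__main__":
    # A job-search aggregator would keep only resumes, a photo aggregator only
    # images, and so on: the same published data, viewed through different lenses.
    for site in PERSONAL_SITES:
        for obj in extract_json_ld(site):
            if obj.get("@type") in ("Person", "ImageObject", "BlogPosting"):
                print(site, obj.get("@type"), obj.get("name", ""))
```

The point of the sketch is that the equivalents of LinkedIn, Instagram, and Tumblr could each run essentially this loop over the same set of personal sites, keeping only the objects they care about: one copy of your data, many views of it, none of them owning it.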
A decade and change later, circa 2021, Web 2.0 was stronger than ever, while the “Web 3.0” model of decentralized websites crawled by aggregators had so far failed to catch on; my prediction in that 2009 post that YouTube would cease to exist within a year seems laughable in retrospect, and in fact content creators who had set up shop on alternative video sites like Blip, Vimeo, and Dailymotion would trudge back to YouTube and submit themselves to the whims of its algorithm and Content ID rules by the end of the following decade. The advent of the iPhone would prove to reinforce the power of the walled gardens by giving them their own apps and threatening to make the Web itself all but invisible to the end user, spurring Chris Anderson to write his infamous Wired column “The Web is Dead (Long Live the Internet)”, and seemingly putting the notion of decentralization on the sidelines for good. But like all good ideas, it has never fully gone away, and it now seems tantalizingly close to reality, with a growing network of decentralized social platforms arising over the past several years to offer their own alternatives to the centrally-owned behemoths. With governments increasingly focusing their antitrust eyes on Facebook’s unparalleled power, Facebook coming under fire more specifically for its contribution to the rise of far-right movements the world over, and concern over Elon Musk’s takeover of Twitter, his stated plans for the service, and what he’s done with it since, it looks like Web 3.0 may finally have its moment, as the Internet and everyone on it, at long last, transitions toward a set of decentralized social networks accessible to anyone and owned by no one.
There’s just one problem: we’ve been here before.
Before Snapchat and TikTok, before Instagram and Tumblr, before Facebook and Twitter, before MySpace and LiveJournal, before Amazon, Google, and eBay, before Yahoo and GeoCities, before there was even a World Wide Web, before even the online services like America Online and Prodigy that were the general public’s first exposure to the Internet, before the Internet as we know it today even existed, there was Usenet.
Usenet was created by a pair of Duke University students, Tom Truscott and Jim Ellis, in 1979, and made available to interested Unix administrators the following year. It was written partly to replace the outdated announcement software that had been used by Duke’s computer science department, but it was also meant to take advantage of dropping modem prices and the new Unix-to-Unix Copy Program (UUCP), introduced as part of Unix earlier that year, to create a standard for communication between computers at different universities. Opportunities for inter-university collaboration had been limited, especially for schools that weren’t on ARPANET, the direct technical forerunner of the modern Internet, which was open only to schools receiving funding from the United States Department of Defense. The initial idea was a dynamically-updating newsletter: computers running the software would periodically check other computers for updates and download the most recent version of the newsletter, so anyone with access to a computer on the network could upload their own articles and comments and have them propagate to every computer running the software.
Although there was immediate interest, Usenet didn’t really take off until a graduate student at the University of California, Berkeley named Mark Horton (now Mary Ann Horton) joined the network, creating a connection between Usenet and ARPANET, and began reposting the content of ARPANET mailing lists on Usenet. As Usenet grew in popularity, the limitations of the existing “A News” program, with its assumption that users would want, or be able, to read the entire newsletter in one sitting, became apparent, and Horton teamed up with a high school student named Matt Glickman to create a new, more robust Usenet client, releasing the result, “B News”, in 1982. “B News” introduced the ability to follow specific newsgroups, read only certain articles within a newsgroup, keep track of which articles had been read, and retain articles only for a limited time, many features that would become standard elements of modern-day Internet forums and the websites patterned after them. “B News” facilitated the splintering of Usenet into a multitude of interest-based subcommunities, allowing people to find a community for virtually any topic.
Usenet’s growth still created headaches from an administrative perspective, and the way it worked, with hosts having to contact each individual computer in the network, was inherently slow, especially as the network expanded, so Horton compiled a list of the largest nodes in the network in hopes of facilitating communication between them. Gene Spafford, who took over Georgia Tech’s node in 1983, took that list, identified about ten especially heavy-duty machines, and convinced their administrators to form stronger connections between them, creating a “core” that would further reinforce Usenet’s growth and form the basis for what would become known as the “backbone cabal”, the closest thing Usenet had to a controlling authority, which would hold sway over the creation of new newsgroups and other administrative tasks. 1983 also saw Rick Adams take over development of B News, and his tenure would further streamline the administration of Usenet with the introduction of rudimentary support for community control and moderation.
Usenet started to reach its most familiar form in 1986, when Berkeley student Phil Lapsley, with help from Erik Fair and Brian Kantor, introduced the Network News Transfer Protocol, or NNTP, which made Usenet more scalable by allowing clients to use TCP/IP to read news from a Usenet server over the Internet, similar to how email works. The following year, Adams founded one of the first large-scale Internet providers, UUNET, to further reduce the cost of distributing Usenet feeds.
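For a sense of how lightweight NNTP is at the wire level, here’s a toy client speaking a couple of the protocol’s commands (GROUP and QUIT, from the NNTP RFCs) directly over a socket. The server hostname below is a placeholder, since open public NNTP servers are scarce these days; treat this as a sketch of the protocol rather than a usable newsreader.

```python
# Toy NNTP session over a raw socket, showing the protocol's simple
# command/status-line style. The hostname is a placeholder.
import socket

HOST = "news.example.com"   # hypothetical NNTP server
PORT = 119                  # standard NNTP port
GROUP = "comp.lang.python"  # an example Big Eight newsgroup

def command(sock: socket.socket, rfile, line: str) -> str:
    """Send one NNTP command and return the server's status line."""
    sock.sendall(line.encode("ascii") + b"\r\n")
    return rfile.readline().decode("latin-1").strip()

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    rfile = sock.makefile("rb")                        # buffered view of the server's replies
    print(rfile.readline().decode("latin-1").strip())  # greeting, e.g. "200 ... ready"
    print(command(sock, rfile, f"GROUP {GROUP}"))      # "211 <count> <first> <last> <name>"
    print(command(sock, rfile, "QUIT"))                # "205 closing connection"
```

That pattern of short ASCII commands answered by numeric status lines is the same family of design as SMTP for email, which is part of why it was natural to think of Usenet, like email, as a protocol rather than a place.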
The bigger development in Usenet’s history in 1987, though, came when Adams, Spafford, and other members of the backbone cabal introduced the “Great Renaming”, which reorganized newsgroups into the familiar “Big Seven” top-level hierarchies of misc.*, comp.*, sci.*, soc.*, talk.*, rec.*, and news.*. Part of the impetus for the Great Renaming was that certain European networks were refusing to deliver content related to sensitive issues such as religion or racism; after the Great Renaming, sensitive or controversial topics would be quarantined under the talk.* hierarchy, which could then be blanket-blacklisted by any servers on networks refusing to carry that content. In addition, the creation of new newsgroups would be subject to an automated voting process, which would ultimately serve to undermine the cabal’s authority. Even then, some Usenet users, including cabal member Brian Reid, were dissatisfied with the Renaming and the cabal’s continued influence on the network, resulting in the creation of the alt.* hierarchy, where any user with sufficient technical know-how could create their own newsgroup free of the cabal’s influence. Other hierarchies outside the Big Seven would appear, and the Big Seven itself would be expanded to a Big Eight in 1995 with the addition of humanities.*, but the alt.* hierarchy would become by far the most populous on the network, and its existence and content would have a profound influence on Usenet’s future direction. (Quick note that most of the history to this point comes from here.)
In those days before the World Wide Web, Usenet was the Internet as far as most people who knew it existed were concerned (indeed, Tim Berners-Lee would announce the launch of the Web itself on Usenet), and many elements of modern-day Internet culture can trace their origins to Usenet. Terms like “flame”, “sockpuppet”, and, perhaps most enduringly, “spam” were coined or popularized to describe behavior on the newsgroups. Besides the Web, Usenet also saw the announcement of the launch of Linux and the Mosaic web browser.
For its first decade-plus, Usenet was primarily the domain of those with the know-how and resources to run their own server, or those with access to the network through another party, usually a university. As a result, even as it grew, Usenet was a largely closed system, with a bounded group of users that was largely self-policing, each of whom quickly learned the norms and customs of the community. Usenet grew its userbase mostly each September, as a new crop of college freshmen got their first connection to the Internet, and with it Usenet, from their school, resulting in a deluge of inexperienced users with no knowledge of Usenet’s existing norms and customs, who would eventually either be weeded out, usually by a wall of flame, or acclimate to the community. (The concept of a “FAQ” originated as a community-maintained file newcomers to a newsgroup would be asked to read in lieu of asking a question that had been asked and answered ad nauseam.) This changed around 1993, when a growing crop of general-purpose ISPs began offering access to their own Usenet servers, resulting in an influx of users who could join at any time, culminating in America Online opening up Usenet access to its userbase of nearly a million in 1994 and producing a wave of newcomers that outnumbered the old guard and swamped their ability to deal with them. Even before AOL got involved, though, Dave Fischer had already declared 1993 to be “the year September never ended”, helping give a name to the “Eternal September” that, indeed, has continued ever since, even as Usenet itself has faded in relevance.
As a whole, the Eternal September was probably a good thing for the long-term development of the Internet, democratizing access and incentivizing more user-friendly interfaces. As has been seen in numerous other fields since, a close-knit community that doesn’t suffer outsiders gladly, especially one dominated by straight white males, can get ugly; Fischer himself acknowledged that the Gamergate fiasco helped put the concerns of 1994 into perspective. For Usenet, though, it would be the beginning of the end.
Beyond the Eternal September, several factors would lead to the decline of Usenet over the course of the 90s and 2000s. Not the least of them was the growth of the Web, and of forums there that could duplicate much of the functionality of Usenet while offering moderation and other capabilities Usenet couldn’t boast, but probably the biggest was the growth of the alt.binaries.* hierarchy. Usenet was built for text-only communication, but it was possible to encode any binary file as a string of text that could be decoded back into the original file at the other end. alt.binaries.* became, in effect, one of the earliest peer-to-peer file sharing services, sharing everything from images to videos to entire programs, and soon made up a significant chunk of Usenet traffic, causing the cost of hosting a Usenet server to rise significantly. The Internet being what it was in the 90s, a lot of what was shared on alt.binaries.* was pornography and pirated material, and while Usenet’s decentralized nature made it difficult for the authorities to eliminate illegal content, ISPs started to wonder why they were maintaining servers to host unseemly and possibly illegal content accessed by a dwindling minority. Banning alt.binaries.*, though, opened a provider up to outrage from the substantial number of people who used the hierarchy, and to watching them flee to providers that still carried it; increasingly, that meant all of Usenet was at risk.
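That binary-to-text trick was classically done with uuencode (and later, denser schemes like yEnc). As a rough illustration of the principle, not of any particular newsreader’s behavior, here’s how arbitrary bytes can be round-tripped through plain printable text using Python’s binascii module, which still implements the old uuencode line format.

```python
# Rough illustration of how alt.binaries.* worked: any binary file can be turned
# into lines of printable text (here, the old uuencode format) that survive a
# text-only medium, then decoded back into the original bytes on the other end.
import binascii

def uuencode_lines(data: bytes) -> list[str]:
    """Encode raw bytes as uuencoded text lines (45 payload bytes per line)."""
    return [
        binascii.b2a_uu(data[i:i + 45]).decode("ascii").rstrip("\n")
        for i in range(0, len(data), 45)
    ]

def uudecode_lines(lines: list[str]) -> bytes:
    """Reverse the encoding, recovering the original bytes."""
    return b"".join(binascii.a2b_uu(line) for line in lines)

if __name__ == "__main__":
    original = bytes(range(256)) * 4        # stand-in for an image or a program
    as_text = uuencode_lines(original)      # what would actually be posted to a newsgroup
    assert uudecode_lines(as_text) == original
    print(f"{len(original)} bytes became {len(as_text)} lines of plain text")
```

The catch, as the alt.binaries.* story shows, is that the encoding inflates the data by roughly a third, and every server carrying the group had to store all of it whether or not anyone there ever asked for it.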
Acclaimed science-fiction author Harlan Ellison sued AOL in 2000 over his works being pirated over Usenet in an early court test of the Digital Millennium Copyright Act; the sides settled out of court in 2004, but AOL shut down its Usenet servers the following year. The final straw, though, came in 2008, when then-New York State Attorney General Andrew Cuomo claimed to have found child porn in 88 groups in the alt.* hierarchy, resulting in Sprint and Verizon cutting off access to alt.* and AT&T’s dial-up service shutting down its Usenet servers entirely. Duke University would shut down its own Usenet servers in 2010, a moment of great symbolic significance as the place where Usenet was born turned out the lights.
Usenet is, technically, still around, but to call it a shell of its former self would still be overestimating things. Google Groups continues to maintain a searchable archive of discussions past, and numerous dedicated providers continue to offer access to the network, but all of them charge for access these days and are primarily oriented towards downloading large files; venture outside of alt.binaries.* and actual discussions are few and far between amongst all the spam. Meanwhile, discussion boards on the Internet have come full circle in many ways: Reddit’s panoply of themed subreddits mirrors Usenet’s structure. But it’s hard to imagine any sort of revival of Usenet coming as a Web 3.0 alternative to Reddit.
Many of the flaws that doomed Usenet were the result of the limited state of technology at the time it was developed. With no realistic way for a third party to delete messages after the fact from every server they might have propagated to, “moderation” on Usenet, to the extent it existed, consisted of a moderator approving every single post to a newsgroup, so it’s not surprising that most groups didn’t bother, even (perhaps especially) in the face of the Eternal September. The result was an ad hoc form of moderation through incessant trolling, with its tendency not to suffer newbies gladly and to create relatively closed-off communities, and spam and other abuse going pretty much uncontrolled. Because Usenet was built on a hodgepodge of technologies, diverse interests, ranging from end users to the coders of clients to server hosts to ISPs, with only limited overlap between them, had input on the direction of the network, and something as large-scale as the Great Renaming likely wouldn’t have been possible without the existence of the backbone cabal. Even with the advent of NNTP, Usenet servers were expected to maintain copies of every single post on the network, so as Usenet kept splintering into smaller communities, servers housing massive amounts of content, most of which might never be requested on that particular server, became wasteful and inefficient, especially as alt.binaries.* grew. At the same time, those servers provided a small number of relatively centralized, corporate points from which authorities could request the removal of illegal or taboo content, unlike today’s peer-to-peer file sharing services.
On the other hand, the problems facing Usenet wouldn’t entirely go away for any other decentralized service. Most obviously, a single central website can set the direction of the service, add features, and fix problems far more easily than a network spread out over a multitude of nodes with no central authority (although such changes might not necessarily be to the benefit of the end user). More fundamentally, even if a Web-based network doesn’t require every hub to host every post, posts are still spread out over a wide variety of servers, and it’s not realistic to expect a single person to moderate them all; it’s hard to imagine forcing an independently-owned server to take down a post. Copyright holders looking to take down piracy, and authorities trying to take down other illegal content, might be able to manage it, though it would be enough of a game of whack-a-mole that the temptation to shut the service down entirely would still be there; for volunteer moderators, it’s probably a bridge too far. This problem would be alleviated if each topic, the equivalent of a newsgroup or subreddit, had its own server; then the people in charge would have complete control and be able to moderate in the sense we’re familiar with on the Web. This would, effectively, be an old-fashioned Internet forum, only with each poster’s presence able to be linked with other forums and their presence on other services, similar to what Automattic tried to build with Gravatar.
That then raises the question of why we abandoned forums for Reddit, and why a service like this hasn’t replaced or even competed with it, even as Reddit has run into its own controversies regarding the way it’s run. Reddit’s ability to connect diverse communities can’t be overstated as part of its appeal, but that would presumably come with the territory of being able to connect a single identity across many forums. The upvoting system, and the way it facilitates algorithmic content discovery, is also important for appealing to people who aren’t so obsessive over a handful of topics as to read every single thing people are saying about them, as Usenet seemingly required and forums were designed to facilitate. (It’s probably not coincidental that services like Slashdot, Fark, and Digg, which were built primarily around sharing links and voting on them, started catching on around the time Usenet declined and ended up bridging the gap to Reddit.)
But if you create a centralized hub for activity across an Internet-wide network, you run into a version of the same problem that bedeviled Usenet servers. Keep in mind that over the last decade-plus, the trend has been to store less and less functionality on the computer itself and instead rely on the cloud; indeed the Chromebook, originally built to be essentially a web browser and nothing else, is 12 years old at this point. So while a locally stored app could check just the forums you personally are subscribed to, what’s more likely is that a centralized service is going to have to check every forum on the network. Merely checking various forums is less taxing than actually hosting most of them, but on the other hand, it’s easier to check every forum when they’re all stored in your own database than when you have to reach out to countless servers, with more coming online that your site may not even know about. It’s certainly doable; cloud-based RSS readers like Feedly have continued to hold down the fort since Google Reader’s shutdown. But it’s not a trivial task.
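To put a rough shape on that “check every forum” job, here’s a miniature polling pass over a list of feeds, using HTTP conditional requests so that re-checking an unchanged forum costs almost nothing. The feed URLs are placeholders; a real aggregator would also need to discover new servers, schedule and retry polls, and store what it finds, which is where the non-trivial part lives.

```python
# Miniature version of a centralized hub polling many independently hosted forums.
# Conditional requests (ETag / Last-Modified) keep unchanged forums cheap to re-check.
import urllib.error
import urllib.request

# Hypothetical independently hosted forums, each exposing a feed of recent posts.
FEEDS = [
    "https://forum.example.com/feed.xml",
    "https://other.example.org/feed.xml",
]

# Validators remembered from the previous polling pass: url -> (etag, last_modified)
validators = {}

def poll(url: str) -> bytes | None:
    """Fetch a feed, returning its body only if it has changed since the last pass."""
    etag, last_modified = validators.get(url, (None, None))
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            validators[url] = (resp.headers.get("ETag"),
                               resp.headers.get("Last-Modified"))
            return resp.read()
    except urllib.error.HTTPError as err:
        if err.code == 304:   # Not Modified: nothing new on this forum
            return None
        raise
    except urllib.error.URLError:
        return None           # unreachable (these placeholder hosts won't resolve)

for url in FEEDS:
    body = poll(url)
    print(url, "updated" if body else "unchanged or unreachable")
```

Multiply that loop by every community on the network, add the ones your hub hasn’t heard of yet, and the difference between “doable” and “trivial” becomes clear.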
To go back to the question of illegal content, Web 3.0 evangelists who suggest that decentralized services can’t be controlled by governments and can therefore be used to share information the powers that be don’t want shared would do well to learn from the fate of Usenet: if the authorities can’t shut down just the information they don’t want, they might just be able to ban use of the service altogether, if not by shutting it down directly then by pressuring Internet providers to block it. Oppressive and aspiring-oppressive regimes have been known to shut down the whole Internet in their countries during times of upheaval to ensure their control over the flow of information. In freer societies, a service that can be used to share anything can be used to share anything, and if child porn, piracy, and other taboo or illegal content are placed beyond the reach of the authorities, the end result may be to discredit the entire service at best or render it terminally crippled at worst.
Over the next few weeks, I’m going to go through the history of decentralized services, how they’ve dealt with these problems and many others besides, and look at the challenges facing any attempt to build a Web 3.0 service to topple today’s social media behemoths. What approaches have been tried, what are/were their benefits and drawbacks, and what can we learn from them going forward? My hope is to put together a framework for what Web 3.0 should look like, how present-day efforts to create it measure up by that standard, and what could be done to get closer to that standard. Although the vision for Web 3.0 espoused by Pemberton and others sought to transform and replace all of the most popular websites, the way that social media has seemingly subsumed the entire Web in the intervening years, and the acuteness of the problem posed by social media’s network effects, mean that this series will primarily focus on the potential for decentralized services to fill the role of the most popular social media services. Based on this list, that means primarily Facebook, YouTube, WhatsApp, and Instagram, secondarily TikTok, Facebook Messenger, and LinkedIn, and to a lesser extent Snapchat, Telegram, Reddit, Pinterest, and Twitter. (I’m ignoring services primarily used in China, as the government’s control over the Internet in that country affects which services are popular there and would confound my analysis.) I’ll also touch on other services with fewer than 400 million active users that might not have direct equivalents in the above list or that are particularly historically significant, such as Discord and Tumblr.
Of course, the reasons I haven’t done a long-form series like this in years haven’t gone away. I conceived of this series before the idea of Elon Musk buying Twitter was even on anyone’s radar, held off on writing it in hopes that he’d be able to get out of the deal, wrote this post very slowly and in fits and starts over a couple of weeks-plus, only substantially finishing it on Veterans’ Day, and only really got started on another post before putting the whole thing on the back burner until recently. All the while, Musk ran Twitter into the ground faster than anyone thought possible and sent people running for whatever safe harbor they could find, resulting in two different services built on the principle of decentralization seeing booms in popularity, and in developments quickly outpacing the context in which I conceived of the structure of the series (I originally didn’t intend to even bring up those services until the end). It’s entirely possible that there will be a large gap between now and the next post in this series, if I ever get around to it at all.
But I’m forcing myself to work on it regardless, because at the start of the month we may have crossed a tipping point, with a series of developments that might seem trivial to describe individually. First there were Reddit’s long-awaited changes to API access, which would make third-party Reddit clients prohibitively expensive to operate, even though those apps often provide features absent from Reddit’s official apps that adherents, especially moderators, consider essential. Then Musk abruptly imposed limits on the number of tweets a person could access each day, limits that numbered in the hundreds but proved restrictive enough to threaten to make Twitter completely unusable. Finally, last week the parent company of Facebook and Instagram released a Twitter clone called Threads, which quickly amassed over 100 million sign-ups and could ultimately leverage Instagram’s existing user base to create a service far eclipsing anything Twitter ever achieved, one built on a protocol that will eventually allow it to interoperate with the first Twitter alternative to catch on in the wake of Musk’s takeover.
In short, the transition to a Web 3.0 future may well have already started, and is progressing far faster than even the biggest evangelists may have thought possible. So as much as I would have liked to complete this series last year, maybe there’s no better time to look back on how we got here and lay out the challenges – and opportunities – facing this new paradigm.