An Open Letter to FCC Chairman Tom Wheeler

To: Federal Communications Commission Chair Tom Wheeler
CC: Other FCC commissioners, the United States Senate Commerce Subcommittee on Communications, Technology, and the Internet, the House Energy Subcommittee on Communications and Technology (and any other interested members of the House of Representatives), the National Association of Broadcasters, and all concerned citizens reading on


The Problem With Internet Companies Getting Major Sports Rights

I have a much longer series of posts planned on the broader issues surrounding the current era of sports on television, but I wanted to make this particular point because I think it’s particularly important.

The NFL is reportedly still considering an expansion and splitting of its Thursday night package to sell to another partner, and is reportedly interested in potentially selling games to a tech company like Google or Netflix. This comes as the NBA, still in the process of negotiating its next TV package, has been speculated to potentially also sell games to a tech company. And that comes amidst years of speculation that tech companies like Google, Apple, Microsoft, Facebook, or Netflix, could be the best candidates to challenge ESPN and completely upend the sports TV wars.

But I’m still unconvinced that Internet companies are really the threat they’re made out to be. In my opinion, the speculation surrounding them is mostly superficial and based on only a few factors, without seriously considering the circumstances and what their entry into the market would actually mean. I don’t believe they’re a realistic candidate to score sports rights; that if they did score them, it would turn out to be a good idea; or that if it did, they would really be as revolutionary as they’re cracked up to be.

For one thing, I’m having a hard time seeing exactly how tech companies would distribute games and make money off them. I can’t imagine Google would simply slap games on YouTube, as that would mean they would need to collect money through advertising alone, when the great advantage of sports networks like ESPN is their dual revenue stream of advertising and subscriber fees. That means tech companies would need to restrict access to the games in some way, and most of the options don’t sound very promising. Would Apple restrict games to users of iOS devices and Apple TV, or Google restrict them to users of Android devices and Google TV? That seems like it would have the potential for disaster as people would be shut out for choosing the wrong product, especially if we’re talking about being the equivalent of a national television partner as opposed to getting a piece of the out-of-market package. A company like Netflix could distribute games to its subscribers, but that would be the equivalent of a premium channel at best. The best-case scenario probably involves Facebook or Google effectively blackmailing people into signing up for their services in order to view the games, but even then I’m not seeing how that would help them raise enough money to be competitive with sports networks.

And none of these approaches would avoid the other issues, certainly not the issue of being a middleman. The nature of TV is such that sports benefit from distributing their games through middlemen, which is why none of the sports leagues that own their own networks have abandoned their relationships with other partners; from its humble beginnings as the Outdoor Life Network, the entity now known as the NBC Sports Network has acquired more and more properties to obtain more distribution than any sport-specific network other than Golf Channel and, until this past August, Speed – and those two had a multiple-year head start on gaining distribution before the full effect of the sports TV wars set in. In theory at least, fans of any of its properties can drop in on coverage of any other property, thus broadening the exposure to that property. But the open nature of the Internet already provides exposure to anyone who wants to drop in, so I’m not sure what sports leagues would gain from selling games to Google when they could cut out the middleman and distribute games themselves. In this sense, Major League Baseball has already entered this territory; its service regularly offers one game for free each day to non-customers.

But none of that begins to approach the most fundamental issue, the basic distinction between the Internet and television, which I laid out before: the Internet is good at distributing many programs to a few people, but television is good at distributing a few programs to many people. The Internet effectively consists of one “channel” for each of its customers, meaning you have a channel that you can program yourself, allowing you to watch whatever you want whenever you want. But if many people want to watch the same thing all at once, i.e., some sort of live event (say, a live sporting event), they all have to watch it on their own individual “channels” – the server has to serve the event to each individual computer that asks for it. We saw the result with the massive issues NBC had with streaming of events at the London 2012 Olympics, and those didn’t reach more than a million or so people at a time. Things haven’t improved that much since then.
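To put numbers on that structural difference, here’s a back-of-envelope sketch; the 5 Mbps bitrate and the audience sizes are assumed figures for illustration, not measurements:

```python
# Back-of-envelope comparison of unicast streaming vs. a single broadcast
# signal. The 5 Mbps HD bitrate and the audience sizes are assumptions
# for illustration, not measurements.

def unicast_bandwidth_mbps(viewers: int, bitrate_mbps: float) -> float:
    """Each viewer gets their own stream, so cost scales with the audience."""
    return viewers * bitrate_mbps

def broadcast_bandwidth_mbps(viewers: int, bitrate_mbps: float) -> float:
    """One signal serves everyone; cost is flat no matter how many tune in."""
    return bitrate_mbps

BITRATE = 5.0  # Mbps, assumed HD stream

for audience in (1_000_000, 10_000_000, 100_000_000):
    uni = unicast_bandwidth_mbps(audience, BITRATE)
    bro = broadcast_bandwidth_mbps(audience, BITRATE)
    print(f"{audience:>11,} viewers: unicast {uni / 1e6:,.1f} Tbps, "
          f"broadcast {bro} Mbps")
```

A million viewers already means five terabits per second of unicast traffic, while the broadcast cost never moves, which is the whole argument in two functions.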

Perhaps the issues surrounding large-scale Internet streaming can be fixed with bigger pipes and more investment in servers and the like, but this structural issue will remain: why distribute the same event many times to each individual customer if you could find a way to distribute the event once and allow anyone, at least with the proper credentials, to hop on the stream with no additional strain on your end? On this front, it’s instructive to see how the mobile world, which (at least at the moment) already lives in the world where all television is over the Internet, is dealing with this issue, and it’s clear that they at least recognize it: AT&T has begun work on a network that will do precisely that, pushing video out to many different devices at once. One thing strikes me about this project: it is a completely separate service that requires use of completely separate spectrum from AT&T’s normal 3G/4G network (indeed, spectrum that had most recently been used for a similar service). In other words, once you begin broadcasting the same signal for any device to hop on to, it is no longer the Internet, at least not as we know it. In this particular case, it becomes something fundamentally not that different from over-the-air broadcast television – indeed, the spectrum in question may well have once been TV spectrum.

Once the distinction between and relative strengths of TV and the Internet are recognized, it’s clear that at least on a large scale, showing a single live event for everyone to view at once is something the Internet simply is not suited for. The great advantages of the Internet for viewing video are the ability to view it anywhere you want and to watch whatever you want whenever you want, but only the former applies to live events like sports, and even that goes away if the technology is developed to deliver content to many devices at once. Broadcast television is already halfway there, but is currently only reaching mobile devices through optional kludges attached to the existing broadcast standard, rather than having one standard suited to reaching all devices whether stationary or on the go. If the television industry recognizes its place in a future where Internet distribution of video reaches maturity (a place where its purpose becomes refocused specifically on the broadcasting of live events), adopts a standard that maximizes its investment in its existing infrastructure, and reorganizes its business accordingly, it can survive and effectively compete in that future for years to come, even if that future is substantially different from what exists now.

How Windows 8 Changes Everything, Part V: The Reinvention of E-Mail (And How Another Blast from the Past Could Be Your Google Reader Replacement)

If you follow my Twitter, you know that I finally got on board the smartphone bandwagon a few months ago, shortly after completing (or so I thought) this series. I’d lost my cell phone back in February and for all her reticence, Mom wanted me to have a cell phone while she took a vacation in Phoenix over my spring break, so she gave me her old iPhone. As you might expect, it has proceeded to become a massive time-suck, not helped by my laptop being unusable during the break and falling apart now (I honestly fully expected to have a Windows 8 tablet by now, but Mom actually seems to be holding out for the more expensive tablet with cellular access).

Confession time: the e-mail address I’ve given on Da Blog in the past, the mwmailsea at yahoo dot com one? I’ve actually checked it very seldom for years. For the most part, it’s filled up with a bunch of newsletters I signed up for many years ago, some not even intentionally, most of them before I got IE7 and its accompanying RSS reader, that I never really intended to even read, so the signal-to-noise ratio has been low and I’ve generally used another e-mail address to actually communicate with my family, therapists, and school personnel. Even that address I’ve never checked as obsessively as some people check their e-mail.

Now, however, I’ve hooked up both e-mail accounts to the iPhone’s e-mail app, meaning I now find myself checking both accounts regularly throughout the day. In the process, a funny thing has happened. Those newsletters that I signed up for lo those many years ago, that I’ve never given a second thought to in years? I’ve actually bothered to look at some of them, and some have managed to link me to rather interesting articles, some of which I’ve even gone on to link elsewhere.

For years, e-mail has sort of been the quiet, unsung backing of Internet communication. As Google, Facebook, Twitter, and more have continued to seize the headlines, e-mail has remained the same, quietly plugging away and serving as the backbone of everything else. Almost every time you’ve set up an account on a new site, or submitted a blog comment, you’ve had to provide a valid e-mail address, but e-mail itself has remained under the radar, with most people using it either for one-on-one communication or as a dummy to throw at those sites asking for one. But with e-mail now taking a newly central role on smartphones and tablets, it’s possible it could be the key to understanding the future of the Internet.

Earlier in this series, I mentioned that there may soon be a new syndication mechanism geared towards blogs, one that doesn’t simply collect text the way RSS does but allows blog creators to optimally place ads and other content. Could the e-mail newsletter be that mechanism? E-mail allows for the addition of images to such an extent that you can make it look like your actual website in a way RSS doesn’t allow, and most blogs already have the ability to subscribe via e-mail tucked away somewhere. Even the structure is more in your control; many big sites offer a daily roundup of relevant stories in one complete package. It does have a number of drawbacks: besides its susceptibility to spam and viruses, which leads many e-mail providers to put up filters that break images, signing up for too many newsletters could overwhelm you unless you set up filters to move them into folders, and those don’t always work. (This is the case with RSS as well, but folders are easier and more reliable there.)

Webcomics tend not to support e-mail delivery. There seems to be a philosophy around the webcomics community these days that says that the design of your site is as much a part of your comic as the comic itself. There’s something to be said for that, but only insofar as the design of your site serves to define your site. As Part IV should have made clear, site design becomes less important in a mobile world, unless you’re talking about the design of your app, which is pretty much the same thing. Besides the ability to customize e-mail to look more like your site, two elements are really the only ones important enough to be included in an e-mail, assuming you don’t just ape what you’re putting in RSS feeds already (for comics that put their comic images in their RSS feeds): an ad and perhaps a link to the store. This could be another place where “comics page” services could come in handy, if not with delivering comic images alongside ads whose revenue gets passed on to creators, then at least with links to comics that have updated since the last e-mail.

Perhaps the revival of e-mail could be the key to bringing everything together into the decentralized social network I put forward at the end of Part III. It won’t be able to do everything, since e-mail is still geared more towards one-to-one communication, and other things will need to take the role currently filled by the social networks of today – although Tumblr and Twitter might cover most of what’s needed, especially since most e-mail clients allow you to sort your contacts into groups that you can then contact all of with the push of a button, serving a similar function to Google+’s circles. Regardless of anything else, it seems clear to me that e-mail is a critical cog in understanding the Internet of the future.

This wouldn’t be so bad if the Internet Archive had more of their content from the old site.

Here’s what the proprietors of the Superman through the Ages site apparently decided back in April: “We’re too lazy to perform basic, common-sense steps and research to figure out how to keep our site safe, so we’re going to make it as difficult as possible for us to restore our site by uploading everything manually onto static pages, risking losing all the following we’ve spent nearly two decades building by taking forever to get back the content that was visitors’ reason for coming to the site in the first place, if ever! It’s not like the Internet is going to blow up when the new Superman movie comes out in such a way that our collection of classic Superman stories might be a key source of research for certain bloggers looking to weigh in on the controversy!”

Seriously, 99.9% of sites on the Internet, including sites with forums, WordPress, wikis, etc., work just fine with nary a slice of malware or other hackery, but no, you get hit with malware that renders your site mostly offline for over a year before you do anything about it and when you do, you just decide to give up on any and all web technology that wasn’t around when Netscape was big and Geocities reigned (with the possible exception of CSS). I’m sure there are plenty of people who would be willing to pitch in and help you get everything back up faster, but no, that’s just another “security hole” you’re opening up. Never mind the multitude of sites like Wikia or (possibly) that would effectively keep your site safe with their own security upgrades without you having to do anything…

(It also doesn’t help that the only source of updates other than the main page is on Facebook, which I continue to avoid signing up for like the plague, but which I need to sign up for in order to see more than bits and pieces of five comments, which means I can’t see any of what other people have said about the situation or the proprietor’s responses to same…)

How Windows 8 Changes Everything, Part IV: The Triumph of Scott McCloud (Or: “Webcomics” Are Dead. Long Live Digital Comics.)

But for all of hypertext’s advantages, the basic ideas behind hypertext and comics are diametrically opposed! Hypertext relies on the principle that nothing exists in space. Everything is either here, not here, or connected to here, while in the temporal map of comics, every element of the work has a spatial relationship to every other element at all times.
-Scott McCloud, Reinventing Comics

In an app-based future, one where social media becomes most people’s gateway to the Internet if not defining it, it’s easy to fear, as John Allison did a few weeks ago, that those who have taken advantage of the openness of the Web may find themselves increasingly abandoned and unable to gain traction. But as I said in Part II and in my response to Allison, the tools of the new Internet paradigm are open to anyone, with nothing stopping it from being as open as, if not more open than, the old Web-based paradigm.

Four years ago, I wrote my Webcomics’ Identity Crisis series, the core of which (in Parts III and IV) explored the obstacles to the future of comics Scott McCloud outlined in Reinventing Comics. I felt that the one revolution McCloud advocated – the infinite canvas – was wholly dependent on the other – micropayments – in order to truly catch on, because any other revenue model (where the form the online version took was relevant, that is) depended on the breaking up of the story into parts, defeating much of the point of the infinite canvas and often even rendering it counterproductive. Micropayments, for their part, were doomed to fail, at least as far as webcomics were concerned, because of the psychological barrier against paying anything for anything. Perhaps they might have become the norm if they were ready when the Internet started catching on, but so long as enough of the Internet’s content was available for free, it would be extremely difficult to produce something with enough value that a substantial number of people would be willing to pay even a cent or two for it. That would be especially true if it was possible, even easy, for someone to repost it elsewhere for free; if they had to buy it sight-unseen from someone whose content they didn’t already know they wanted; and if they had to pay for something that had previously been free.

Ironically, one of the more famous proponents of the “psychological barrier” theory for the failure of micropayments was… Chris Anderson, in his book Free: The Future of a Radical Price. Though he never directly mentioned it, perhaps hoping people wouldn’t notice the contradiction and accuse him of holding whatever position attracted the most attention, “The Web is Dead” could be seen as an implicit admission of how wrong he was then. The thesis of “The Web is Dead” was that people would pay for the same content they could get for free, simply because it came in a form that worked better and was easier for them. If we are moving to a future where consumers are increasingly willing to pay to receive content on their smartphones once available on the Internet for free, it may well be only a matter of time before micropayments take hold in this far more fertile soil.

Already most of the apps in the Windows store are available for less than half of the magic $10 price most online retailers need to hit to justify the cost of a single credit card transaction. I’ve long felt that the fees people pay to their Internet service provider for Internet access were low-hanging fruit for micropayments, similar to how charges for pay-per-view content appear on your cable bill, if it weren’t for the numerous ways to access the Internet that other people pay for. The advent of cloud computing and the single login, including devices like those that run Windows 8 that are tightly associated with a single online account, makes it far easier to charge your credit card on the fly without introducing extra steps and at virtually any price. While producers of “fungible” content that can easily be spread elsewhere will probably continue to need to offer their wares for free (or for just enough to render piracy inconvenient), we may yet see the day when producers of other types of content, to take just one example, allow anyone to access their content for a small charge, or for free if you buy their app once (and possibly pay a regular subscription fee thereafter).
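To illustrate why that magic $10 threshold exists and how billing everything to a single account could get around it, here’s a rough sketch; the $0.30-plus-2.9% fee structure is an illustrative assumption, not any real processor’s actual rate:

```python
# Why tiny charges don't work with ordinary card processing, and why
# aggregating them onto one account-level bill does. The $0.30 + 2.9%
# fee structure is an illustrative assumption, not a real processor's rate.

FIXED_FEE = 0.30    # assumed per-transaction fee, dollars
PERCENT_FEE = 0.029  # assumed percentage fee

def fee_share(price: float) -> float:
    """Fraction of the sale price eaten by transaction fees."""
    return (FIXED_FEE + PERCENT_FEE * price) / price

# A two-cent micropayment is swamped by the fixed fee; a $10 sale is not.
print(f"$0.02 sale: {fee_share(0.02):.0%} of the price goes to fees")
print(f"$10.00 sale: {fee_share(10.00):.1%} of the price goes to fees")

# Aggregating: bill 500 two-cent charges as one $10 account transaction.
micro_total = 500 * 0.02
print(f"Aggregated bill of ${micro_total:.2f}: fees are {fee_share(micro_total):.1%}")
```

The fixed fee on a two-cent sale is many times the sale itself, but once hundreds of micro-charges settle as one transaction against a registered account, the overhead shrinks to the same few percent as any ordinary sale.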

It’s highly unlikely that a single comic, even a full-size comic book or graphic novel, would justify its own app, but the point is the technology exists to offer it at any price, regardless of the mechanism. We’ve already seen the development of an “iTunes” for comics, in the form of Comixology and its associated formats, and Marvel and DC have already embraced the online, digital distribution of their wares for new mobile devices, with Marvel even going so far as to produce what I call “digital stage comics” for their Avengers v. X-Men event. As Allison’s attitude shows, however, the webcomic community has been surprisingly slow to adapt to this new world order. Many webcomics have developed apps for the distribution of their content, but like webcomics in general, most of them are comic strips easily suited to distribution on a periodical basis (though Least I Could Do offers access to its archives through its app for just 99 cents).

If the web starts to be pushed to the background, you could see webcomics, as we know them today, pushed to the background as well. Even comic-strip-type webcomics may soon find their main means of distribution through “comic page” apps that aggregate them together. (One wonders if this was one of the ideas Scott Kurtz planned to hawk to syndicates with last year’s consulting offer.) But the real impact will be felt in “long-form” comic-book-like webcomics, which could jump at the chance to exploit the exposure advantages of the Internet without any of the drawbacks. It was, after all, the comic book model McCloud had in mind with his advocacy of micropayments and the infinite canvas. While the problem of spending money on unproven content hasn’t gone away entirely, some workarounds have sprung up; recently my dad published a prose novel that he promoted in part by making a short snippet available free for people considering the book on Amazon, a tactic that has apparently helped many novels achieve success through online sales, including some you may have heard of.

Beyond micropayments making the infinite canvas far easier to monetize, the advent of touchscreen-enabled devices eliminates the main interface-based constraint on the infinite canvas as well. Maintaining an “unbroken reading line” would seem to imply the horizontal infinite canvas, where the row of panels scrolls off to infinity to the right, but most applications of the infinite canvas have been of the vertical variety, due to the nature of mouse wheels, the most hassle-free way to scroll on the computer. But the touchscreen does away with the need to scroll entirely; all it takes is a swipe of the finger across the screen to move to a different part of the canvas. It’s even possible to zoom in with the double-tap. This isn’t limited to comics; I really don’t like how the Kindle and other e-readers feel the need to stick to the norms of print by chopping up books into discrete pages. I don’t know whether they do, but I hope Comixology’s formats and others allow people to make their “page” whatever size they wish if they so choose; we could see an explosion in long-form stories told in forms unthinkable not too long ago. I can’t help but wonder if, when McCloud semi-unintentionally anticipated the iPad in Reinventing, he was giving a look at the sort of device that he had in mind when talking about the infinite canvas, without explicitly stating so.
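As a rough illustration of how little machinery swipe-and-tap navigation of an infinite canvas actually requires, here’s a minimal viewport sketch; the class and method names are hypothetical, not any real comics app’s API:

```python
# Minimal sketch of an infinite-canvas viewport: panning by a swipe delta
# and zooming about a tap point. The names are hypothetical, not any real
# comics app's API.

class Viewport:
    def __init__(self, x=0.0, y=0.0, zoom=1.0):
        # (x, y) is the canvas coordinate at the screen's top-left corner
        self.x, self.y, self.zoom = x, y, zoom

    def pan(self, dx_screen, dy_screen):
        """A swipe drags the canvas with the finger, so the viewport
        moves the opposite way, scaled by the current zoom."""
        self.x -= dx_screen / self.zoom
        self.y -= dy_screen / self.zoom

    def zoom_about(self, px_screen, py_screen, factor):
        """Double-tap zoom: keep the tapped canvas point under the finger."""
        cx = self.x + px_screen / self.zoom
        cy = self.y + py_screen / self.zoom
        self.zoom *= factor
        self.x = cx - px_screen / self.zoom
        self.y = cy - py_screen / self.zoom

v = Viewport()
v.pan(-100, 0)             # swipe left: viewport moves 100 canvas units right
v.zoom_about(50, 50, 2.0)  # double-tap at screen point (50, 50)
print(v.x, v.y, v.zoom)
```

The point is that the canvas itself never needs page breaks; two small coordinate transforms are enough to read a story of any shape or length.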

Many of the applications of the infinite canvas McCloud proposed will probably always be too gimmicky to catch on, but there’s nothing stopping those applications with real storytelling potential from changing the way you look at comics. It’s possible the digital comic of the future will look a lot like Homestuck – essentially, a variant on the digital stage comic, only told in many thousands of tiny chunks, highlighting another failing of hypertext: the way advertising on the Web rewards breaking stories up into as many tiny units as possible so as to score more pageviews to drive up the price of advertising. With alternate business models, it would no longer be necessary to exploit perverse incentives like this, because the reader could be charged directly in a way that makes sense.

This is only a hint of how the move to an app-based future can be a boon to independent producers of content prepared for it, despite the decline of the open, free-wheeling web they have taken advantage of to this point. We could be on the verge of an explosion in content of all shapes and sizes, a golden age of artists flocking to the most rewarding environment the arts have ever seen, creating content that takes forms never before possible, and potentially achieving the long-deferred vindication of Scott McCloud’s original vision. The rise of devices like the iPad and Surface doesn’t mark the end or a decline of the great revolution impelled by the rise of the Internet over the course of the last decade. Rather, it’s just the beginning.

How Windows 8 Changes Everything, Part III: The Nature of Social Media (And What a Blast from the Past Means for the Future of Facebook)

For a lot of people, social media ARE the internet.
-John Allison, creator, Scary Go Round and Bad Machinery

It’s becoming apparent to me that most people do not use the Internet the way I do.

I am not a social media fiend. The only social network I’m on is Twitter, and I’m not even sure I use Twitter the same way most people do; I only follow 15 or so people on Twitter and I can’t even imagine following much more than that. I get the sense that for many people, social media completely defines their online life, serving as their gateway to the rest of the Internet, to the point that any attempt to understand the workings of the Internet, from the failure of RSS to catch on to what a post-Web future of the sort Chris Anderson describes might look like even now, has to start with social networking first and foremost.

When Google Plus launched nearly two years ago, it made a big deal about its “Circles” feature, which recognized that people don’t have just one type of friend. Circles allowed you to sort your contacts into groups, such as friends, family, coworkers, more distant relatives, college buddies, and so on. It struck me that this model was the opposite of Twitter’s: when I first discovered Twitter, I applauded it for recognizing that “following” someone isn’t necessarily reciprocal the way “friendship” has to be on Facebook, letting you follow anyone’s tweets without requiring them to follow you back. Google+, by contrast, was effectively allowing you to determine who received whatever messages you sent, without their input.

What would a social network be like that combined the two? Well, anyone could choose to “follow” any of the public postings of anyone else. A person could then organize the people who follow them into groups, like Google+’s circles; perhaps they’d receive a notification whenever someone they followed followed them or vice versa, asking if they’d like to place that person in any of their circles, or perhaps someone could ask to join any of their circles, similar to how Facebook’s “friendship” works now. Some of their posts would continue to be public, while others would be restricted to certain circles. You’d effectively have two different levels of “following”: a basic level allowing you to follow anyone and anything, like how many people use social media now, and a deeper level for your actual friends, indeed as many “deeper levels” as you want. This would serve as a curb on the proliferation of “friends” that plagues Facebook, and it could also allow the social network to be more open; many if not most Facebook profiles are closed to nonmembers, and often even to people who aren’t friends. With this system, anyone could still have a public timeline anyone could view like on Twitter, but they could still restrict some of their postings to people they’re closer to, which Twitter can’t do except in the form of “direct messages” (which no one uses) and restricting the whole account to followers only.
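A minimal sketch of how such a hybrid model might be structured, with one-way follows as the basic level and circles gating the deeper levels; all of the names here are hypothetical, not any real network’s API:

```python
# Sketch of the hybrid model described above: Twitter-style one-way follows
# plus Google+-style circles that gate who sees a restricted post. All
# names are hypothetical.

class User:
    def __init__(self, name):
        self.name = name
        self.following = set()  # basic level: one-way, no permission needed
        self.circles = {}       # deeper levels: circle name -> set of Users

    def follow(self, other):
        self.following.add(other)

    def add_to_circle(self, circle, other):
        self.circles.setdefault(circle, set()).add(other)

    def can_see(self, post):
        if post["circles"] is None:
            return True  # public post: visible to anyone, member or not
        # restricted post: reader must be in one of the named circles
        return any(self in post["author"].circles.get(c, set())
                   for c in post["circles"])

def make_post(author, text, circles=None):
    return {"author": author, "text": text, "circles": circles}

alice, bob, carol = User("alice"), User("bob"), User("carol")
bob.follow(alice)                      # anyone may follow, like Twitter
alice.add_to_circle("friends", carol)  # but circles are alice's choice

public = make_post(alice, "hello world")
private = make_post(alice, "friends only", circles=["friends"])
print(bob.can_see(public), bob.can_see(private))  # True False
print(carol.can_see(private))                     # True
```

Notice that following and circle membership are entirely independent sets, which is exactly what gives you a public Twitter-style timeline and Facebook-style restricted posts in one account.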

Perhaps we could take this further, and somehow recognize when an actual group of people all (or almost all) count one another as friends, or in analogous circles. The social network could recognize this group as a self-contained group in its own right, enabling them to better organize and converse with one another as a group. This doesn’t have to be limited to an actual circle of friends; in fact, the great shortcoming of most social networks is their inability to recognize groups of people with a common interest and serve as a place for them to discover one another and talk about that interest with one another. As such, people with common interests end up fractured among many different sites, often blogs that become a hub for the community even though they may not work well for this purpose. When I launched the forum, I said that forums still had a place in an era of blogs and social media, as a place for a community to gather and talk about common interests, but why have a forum and a collection of whatever other sites are out there for this purpose when anyone interested in a topic can connect with everything everyone else is doing and saying in that topic in a single place, perhaps one that can accommodate blogging as well?

Being able not just to be all things to all people, but to specifically connect people with common interests, might be the one great advantage that someone might yet be able to topple Facebook with. The social network that can best Facebook is one that can leverage the network advantages of having everyone on there, yet also cater to specific interests. In that sense, it may be a flashback to the original social network, Usenet, but adapted for the modern web. Were that to happen, it could be the last break from the Web as we know it now and the ultimate realization of Chris Anderson’s vision. Farhad Manjoo thinks this is impossible, that no social network that claims to be all things to all people can also serve as a social network for a particular interest. I think it can if it opens up the toolbox so that the community surrounding a particular topic can customize their own corner of the network with all the functionality they could possibly want. That probably means the social network of the future will have to be open source – and without the ability to monetize it, that will make it very difficult to run.

Perhaps the social network of the future is already under construction, in the form of WordPress’ BuddyPress plugin. This plugin allows any WordPress site to set up its own social network within it, something that seems kind of odd to me; the network effects of social networks are such that any social network for a particular site would seem to have limited utility. But if someone were to set up a competitor to Facebook and run it on BuddyPress, it could catch on like wildfire, if only among people concerned about Facebook becoming just another evil company with little regard for privacy – but that might be enough to attract everyone else in the long term, if it truly embraces the open-source ethos. One thing I know for sure: I’ve finally closed up shop on the Morgan Wick Forum, which had become little more than a wretched hive of spam and villainy, and if and when I relaunch it, it’s probably going to be with BuddyPress installed (if only because that might be the only way to get some of the functionality, like high-level mod tools and private messages, I’m looking for).

Or perhaps the social network of the future won’t be a single site at all, but rather new technologies and protocols to link people together without the need of a central site. This is the dream behind the notion of the “semantic web”, the idea that all you need to do is put all your relevant information in a single place in a common format and it will follow you anywhere, capable of being read and understood by anything – a concept that could be key to a truly post-HTML future. It’s hard to imagine what such a decentralized social network might look like, but that hasn’t stopped some people from trying. The growth of devices like the iPad and Surface that are so tightly connected to the Internet may help bring the semantic web into reality, or at least make it more possible, and in that sense, perhaps the real clue to the social network of the future may lie in the “People” tile in Windows 8 and Windows Phone. As more and more people move to the cloud, and to devices like the Surface that are constantly registered with an account that connects them to that cloud, it’s only a matter of time before the accounts their devices are registered with are used to help form a new kind of social network – one that might not have a single identity at all, and one that might truly define the Internet for its users. All it would take is a way for iOS, Android, and Windows users to communicate with each other seamlessly.
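As a toy illustration of the “put your information in one place in a common format” idea, here’s a hypothetical machine-readable profile; the vocabulary loosely imitates conventions like FOAF and JSON-LD but is a made-up example, not any real standard’s schema:

```python
import json

# Toy illustration of the semantic-web idea: a profile published as
# structured data that any client, on any site, could fetch and interpret.
# The keys loosely imitate FOAF/JSON-LD conventions but are a made-up
# example, not a real standard's schema; all URLs are hypothetical.

profile = {
    "@id": "https://example.com/people/morgan",
    "name": "Morgan",
    "knows": [
        "https://example.com/people/alice",
        "https://other-site.org/users/bob",  # contacts can live on any site
    ],
    "feed": "https://example.com/people/morgan/posts.json",
}

def contacts(p):
    """A client could walk the 'knows' links to assemble a social graph
    spanning many sites, with no central server involved."""
    return p.get("knows", [])

print(json.dumps(profile, indent=2))
print(len(contacts(profile)), "contacts")
```

The decentralization lives in those cross-site links: each profile points at others wherever they happen to be hosted, so the “network” is just the sum of everyone’s published data.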

So in retrospect, why did we end up abandoning Usenet, anyway?

How Windows 8 Changes Everything, Part II: The Triumph of Chris Anderson

You wake up and check your email on your bedside iPad — that’s one app. During breakfast you browse Facebook, Twitter, and The New York Times — three more apps. On the way to the office, you listen to a podcast on your smartphone. Another app. At work, you scroll through RSS feeds in a reader and have Skype and IM conversations. More apps. At the end of the day, you come home, make dinner while listening to Pandora, play some games on Xbox Live, and watch a movie on Netflix’s streaming service. You’ve spent the day on the Internet — but not on the Web. And you are not alone.
-Chris Anderson, “The Web is Dead. Long Live the Internet”, Wired magazine, September 2010

If push came to shove, one reason I might still be willing to get a Windows RT device and accept being limited to Internet Explorer as my web browser, despite its limitations, is the ability of apps to fill those holes.

IE has never had the broad-based add-on support of its competitors, but Windows 8 suggests it may not need to; many of the same add-ons I’ve added to Firefox over the years are available in some form as Windows 8 apps. Most obviously, I might be able to do without an RSS reader in IE and instead install an RSS reader app; indeed, this would have the advantage of flashing new items from my RSS feeds on my Start screen. Couple this with Twitter and e-mail apps, and I can keep up with everything in real time right from my Start screen. In fact, several of my RSS feeds might provide their own apps with their own live tiles, to the point that the very concept of an RSS reader might become unnecessary; Microsoft may one day make it possible to pin RSS feeds directly to the Start screen as live tiles. The Start screen is so useful that even making it easier to reach with one touch or swipe may not be enough to do it justice. It’s the one element of Windows 8 that can’t be “snapped” to one side of the screen or the other, yet it may be the most useful thing to snap (to the point that without it, live tiles seem at best a gimmick and at worst a distraction rather than actually useful); I’d like the option to keep my Start screen constantly on screen so I can see my live tiles at a glance and pull up a new app on the fly, not unlike a traditional Start menu.

Indeed, with Windows 8 coming with a separate Bing app for searching the Internet, and many other sites creating their own apps to access their content, perhaps the light-duty nature of the Metro version of IE, and its discouragement of the proliferation of tabs, is easily explicable. Perhaps IE itself is transitioning into a “miscellaneous” app, to be used only for visiting those sites that actually require it, rather than an all-purpose must-have for exploring any part of the Internet. Perhaps it’s the traditional web browser that’s entering the twilight of its relevance as our concept of what the Internet is gets turned completely upside down.

A few years ago, Wired editor-in-chief Chris Anderson published a controversial feature proclaiming, as the issue’s eye-catching cover put it, “The Web is dead. (Long live the Internet.)” The idea was that with the advent of devices such as the iPhone and iPad, people would increasingly shun the Web – the sites that require a browser to access – in favor of more specialized apps that met their needs better and could be more easily monetized, especially when it came to digital content for sale. The openness of the Internet favors innovation, and for decades that openness led to unprecedented innovation in all corners of the Web, dooming the “walled gardens” of old; but once that innovation started to settle down, the Web’s tendency to be all things to all people would become a drawback, and people would flock to dedicated apps that did just one thing and did it as well as anything could, needing only an Internet connection to access the data. APIs would be the new walled gardens, delivering their data with superior performance, and people would be willing to pay the price, in money and control, for it.

Most observers pooh-poohed Anderson’s conclusions, for a wide variety of reasons – many of them having to do with the graph at the start of the article, which only showed web traffic declining as a percentage of overall Internet traffic, crowded out by peer-to-peer file sharing and video, the latter of which uses far more bandwidth than anything else and is usually accessed via the web anyway (to say nothing of the utter lack of any measurement of “apps”). Many critics also took issue with Anderson’s definition of “app”, noting that almost all the services in the opening paragraph above at least have accompanying web sites, and that in fact their “apps” are really just clients for accessing those web sites.
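The statistical objection can be made concrete with some invented numbers: a category’s *share* of traffic can collapse even while its *absolute* volume grows tenfold, and a graph of percentages alone, like Anderson’s, can’t distinguish the two.

```python
# Invented figures purely to illustrate the critics' point: Anderson's
# graph plotted *shares* of total traffic, which can fall even as
# absolute web traffic keeps growing.
traffic = {
    # year: petabytes per month, by category
    2000: {"web": 20, "video": 1, "p2p": 4},
    2010: {"web": 200, "video": 500, "p2p": 300},
}

def web_share(year: int) -> float:
    """Web traffic as a percentage of all traffic in a given year."""
    cats = traffic[year]
    return 100 * cats["web"] / sum(cats.values())

# The web's share collapses from 80% to 20%...
print(web_share(2000), web_share(2010))
# ...even though absolute web traffic grew tenfold.
print(traffic[2010]["web"] / traffic[2000]["web"])
```

With these made-up numbers the web has “died” on the percentage chart while growing 10x in reality – which is exactly the critics’ complaint.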

I think the latter group largely missed Anderson’s point. Sure, you can use your web browser to access Facebook and Twitter, but even then they seem to stand apart from the rest of the Web, almost as their own entities, to the point that when you access another page within Twitter, Twitter loads it within its own interface, without the browser seeming to have done much of anything at all – as though Twitter is just sitting in the browser doing its own thing. Moreover, as time has passed, Twitter and several other web sites have made their interfaces look increasingly like the iPhone’s – in order to match their apps. Facebook and Twitter are the most obvious examples of “web sites” that don’t really need to be web sites; they could exist just as well as their own specialized applications. And the same goes for all the other services Anderson lists in the paragraph above.

Before all this started, those web-watchers who saw something similar coming warned that it would close off the Web’s innovative nature, that entrenched interests would erect as many barriers to entry around the Web as traditional media had. Anderson, by contrast, predicted that this revolution would remain mainly limited to the monetization of digital content; his hyperbolic headline aside, e-commerce and things created for non-monetary reasons would continue to find a home on the Web. I get the sense both sides are making a faulty assumption: in fact, the new world of apps is just as open to anyone with the requisite knowledge of the coding languages involved as the Web is, making it actually a boon to independent producers of content, as evidenced by the nonprofit Wikipedia having its own apps. I mentioned in Part I how difficult it is to read Homestuck on the iPad or Surface, but there isn’t really anything stopping Andrew Hussie from making his own Homestuck app, which might in fact be a better way to experience it than relying on the Web site, especially given its “adventure game” motif. (This is especially the case on Windows 8, where it’s possible to make an app with rudimentary knowledge of HTML, CSS, JavaScript, and other fairly common web technologies.)

That means it’s very possible the Web really is dead, or at least dying. Anything that can be a Web site can also be an app that delivers the same content in a more focused way. Even e-commerce can be carried out within the confines of an app, as evidenced by the existence of an Amazon app in Windows 8. As the Twitter interface indicates, the Web itself is starting to be bent to look more and more like an app, but for sites to offer their own isolated, comprehensive experiences on a platform designed to accommodate individual pages, without bias between different sites, is somewhat contradictory, and not enough to resist the trend. Trying to wring as much as possible out of a single Web page, as opposed to the many separate pages linked by hypertext that are the Web’s defining feature, strains against the Web’s very nature. Why hack a Web site to make it look like its own program when it can actually be its own program? HTML has long been a clunky language, and there have been many admirable efforts over the years to expand its capabilities to match what people have been using the Web for, including HTML5, and to expand the capabilities of plugins like Java and Flash that supplement it, but we may be straining its limits, or reaching the point where the effort of continually expanding it no longer outweighs the desire to cut out the bloat and start over.

Internet Explorer 9 introduced the ability to pin web sites to the taskbar, some of which could accommodate special functions specific to that site, as though the site were a program in itself. Windows 8 may be the final nail in the coffin of the Web, because its live tiles both obviate the need for such fake apps and provide the ultimate motivation for sites to transition to an app-based future, by offering something the Web can’t easily provide no matter how much it expands. Well before the iPhone and iPad came along, Firefox add-ons provided added functionality that was part of the browser but didn’t involve opening pages in the browser. They showed that no matter how flexible the Web page could be, it was still trapped within the confines of the Web browser – and that there were still ways to escape those confines and deliver content from within the browser’s own interface, or even outside it entirely. Simply put, some sites are just bigger than the browser, and shouldn’t be restricted to its confines. Windows 8 has given online content providers the tools they need to fully escape the Web browser and explore their potential in ways it could never offer.

Indeed, Windows 8 may take Anderson’s future further than even he predicted – specifically the RSS reader he mentions checking, which live tiles could replace. I generally keep up with what’s going on on most web sites each day by checking two sites: Twitter and Google Reader. Were I to get a Surface, I would have three tiles lined up in a row for me to check: e-mail, Twitter, and an RSS reader. It quickly becomes apparent that the RSS reader is a “miscellaneous” app just like IE, one jumbling together all the sites that don’t have their own apps. If any site can be an app, any site with an RSS feed can become an app with a live tile. Most providers of news either already have or will eventually have their own apps offering their content in new and better ways, and I’ll talk later about what all this means for webcomics. I predict that by 2015, there will be a new syndication mechanism aimed specifically at blogs, one that doesn’t simply collect text and render it the way the reader specifies but instead allows blogs to format their posts however they like, letting them more easily place ads and optimally organize content – a sort of “uber-app” allowing blogs to take advantage of the freedom and flexibility of apps. I’ve never really gotten the point of Tumblr, but perhaps it provides a hint at what the future of blogs might look like: a standardized mechanism streamlining many of their purposes and presenting them in unified fashion.
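As a sketch of the “any site with an RSS feed can become an app with a live tile” idea: a tile would just need to poll the feed and surface the newest item’s title. The feed below is a made-up stand-in, parsed with Python’s standard library.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 2.0 feed; a live tile would fetch the real
# thing over HTTP and flash the newest headline on the Start screen.
feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Da Blog</title>
    <item><title>How Windows 8 Changes Everything, Part II</title></item>
    <item><title>How Windows 8 Changes Everything, Part I</title></item>
  </channel>
</rss>"""

def latest_headline(xml: str) -> str:
    """Return the first (newest) item's title, as a tile might display it."""
    channel = ET.fromstring(xml).find("channel")
    return channel.find("item/title").text

print(latest_headline(feed_xml))
```

Since RSS items carry only text and links, a feed-driven tile is exactly the generic “miscellaneous” presentation described above – which is why sites wanting control over formatting and ads would build dedicated apps instead.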

Google’s Chromebook, along with its predecessor ideas, subscribes to the ethos that the browser can be the OS. Microsoft has been inspired by Apple to flip that script around: the OS is the browser. Technically, the OS has been the browser since Windows 98, but only recently has Microsoft really subscribed to it as more than a way to raise the hackles of antitrust agencies – and my hunch is they’re closer to the future of computing than Google is. The rest of this series will explore the implications of all this, great and small.

How Windows 8 Changes Everything, Part I: The Rebirth of Microsoft

What you have seen and heard should leave no doubt that Windows 8 shatters perceptions of what a PC now really is. We’ve truly reimagined Windows and kicked off a new era for Microsoft and a new era for our customers…With a glance, you will always know what’s going on in the world and with the people who count in your life…The experience is really magical. You log in just once and you see your device light up with your life. Buy a new computer, it lights up with your life.
-Microsoft CEO Steve Ballmer, Windows 8 launch event, October 25, 2012

I don’t like Windows Phone.

Oh, I liked it in theory – the notion of simplifying and bringing together so many ways of communicating with the same people, of simplifying the very concept of a smartphone, made me put it at the top of my list of desired operating systems should I ever get one. I haven’t jumped on the smartphone bandwagon yet, though not for lack of wanting to – I may have said in my very first post that I tend to shy away from the stuff everyone else finds popular, but when the likes of Instagram and Angry Birds become household names almost entirely off the back of smartphones, they’ve clearly become more than a passing fad – but I suspect Mom quite rightly doesn’t trust me with something that leaves me connected to the Internet no matter what she does.

So like I said, I liked the concept of Windows Phone – until I actually tried it and found the interface clunky, not for me, and missing its greatest opportunity. The most promising unification was that of social networks, yet they seem to be consigned to the “People” tile, which just flashes images of the people therein without reporting new messages like the other tiles; calling people not in your address book is a major pain, and the phone seems to assume you have a personal connection to everyone in your address book. Maybe that’s how some people work, but it wasn’t for me. With the iPhone being tiny (and thus hard to be precise with), coming off as very basic, and being simplified to the point of a seemingly inconsistent interface, my preferred smartphone OS shifted to Android.

I’ve also never understood the appeal of tablets – they’re basically bigger, heavier smartphones that can’t call anyone and don’t fit in your pocket. This one I understood more after trying out smartphones: while a bigger screen for watching video wouldn’t move my needle much, being able to type on a keyboard where I don’t accidentally hit the wrong key every other letter would. But I was very surprised when Windows 8 was announced. As much as I liked the idea of Windows Phone, it seemed damn near unthinkable that Microsoft would make the most radical change to the Windows interface since at least Windows 95, over 15 years earlier, bending the bedrock of the company’s success to match its johnny-come-lately product sitting a distant third in the smartphone wars – a product that only adopted its “live tiles” gimmick because neither Apple nor Google would. I couldn’t even imagine how it could possibly work on an actual computer. As it happened, Windows 8 seems to have gone over so poorly that it’s started to elicit comparisons to the infamous Windows Vista – which it’s actually doing worse than. Now that I’ve put in more time on the Surface than I reasonably should have, are the critics right? Is Windows 8 the New Coke of computing, or does it represent a true revolution?

When the Surface first came out, many people wondered whether Microsoft was really putting its best foot forward, and what the point was of the “lite” version of Windows 8 the Surface shipped with, Windows RT. Ostensibly it was the “tablet” version of Windows 8, but Intel has been making chips that save enough on battery life that the full version can be and has been used to power tablets as well, with no evident disadvantages (but, in my experience, at higher prices than the equivalent RT devices), so many tech commenters saw it as essentially Windows without the ability to run old-style desktop apps.

Let me state upfront that I consider this a red herring. I felt it was inevitable that makers of most old-style desktop programs would quickly rush to fill up the Windows Store with new app versions of their programs if enough people bought Surfaces, meaning almost any desktop application you might miss would have a version fit for Windows RT sooner or later. Admittedly this might not include the programs commenters usually chose as specific examples, iTunes and Adobe Photoshop, in the former case because Apple doesn’t want to support Microsoft’s attempt to compete with the iPad (and Microsoft would rather you use their Music app anyway), in the latter case because Photoshop is too heavy-duty for touchscreen use and might seem to require a desktop, as with most PC games, though I could be proven wrong on this (more on this later). More to the point, I felt that regardless of its other problems, Windows RT came with a killer app that would help make sure people would buy Surfaces: free Microsoft Office. And if that doesn’t sound impressive to you, you’ve never had to pay over $100 for Office.

Admittedly the version of Office that comes with the Surface is licensed only for individuals, meaning companies – the main purchasers of Office – would have to buy a group license anyway, and if they do that they might as well shell out for the Surface Pro. So what sort of person would benefit from free Office on an individual basis, perhaps a group chronically challenged for money and with a tendency for early adoption of technology? College students. I imagined college campuses littered with people carrying around Surfaces to do their work, connect to the Internet, and whatever else they wanted to do.

But I said earlier that almost any desktop program would soon have a version for Windows RT, and there is one big exception: web browsers. Microsoft has effectively blocked browser makers from important resources they would need to make their own browsers, so you won’t be able to install Firefox or Chrome on a Windows RT machine (though you can on Windows 8). The EU, which has long hounded Microsoft over bundling Internet Explorer with Windows, allowed this anticompetitive move on the grounds that, with a tablet OS, Microsoft is entering a field already contested by Apple and Google rather than protecting its traditional-PC hegemony.

For the record, I agree with the EU; in fact, limiting browser competition on Windows RT might actually backfire on Microsoft and may already be hurting sales of RT machines (sales of Windows 8 machines in general haven’t been as strong as I would otherwise have predicted), as I would be more willing to buy a Windows RT machine and be limited to IE if the Metro version weren’t so clunky and bare-bones. For example, I have to swipe down from the top (or up from the bottom) to show the tab bar (which contains unnecessary thumbnails of each tab), when it’s always visible in the iPad browser and most Android tablet browsers; combine this with the inability to open multiple windows and it’s almost like a return to the pre-IE7 days before Microsoft embraced tabbed browsing, when you had to open multiple windows to have multiple pages open at the same time. (And near as I can tell, I can’t even use Ctrl+PgUp/PgDn to switch tabs – I have to pull down the tab bar every single time.)

Further, I’m not sure it’s possible to search from the address bar at all, let alone change search providers on the fly from, say, Google to Wikipedia – which seems odd when most browser makers are moving towards address bars patterned after Chrome’s “omnibox”, including Microsoft itself in IE9 (while the iPad browser still has a separate Search box). I can’t access Favorites unless they’ve been pinned to (and thus clutter) the Start screen, meaning among other things I can’t pull up an entire folder of favorites at once as a ready-made tab set; between this and the aforementioned loss of multiple windows (not to mention the unchanging, bulky thumbnail-tabs), it’s clear IE makes it damn near impossible to have hundreds of tabs open like I’m used to with Chrome. The RSS reader seems to have gone out the window as well, and while I understand why they did it, limiting Flash to pre-approved sites raises concerns for me about privileging content producers with resources at the expense of independent producers; already impossible on the iPad, reading Homestuck is damn near impossible on the Surface as well (though thankfully Microsoft seems to have done an about-face on this). Admittedly most if not all of these issues are irrelevant if I’m using the desktop version, but then, well, what’s the point?

On the other hand, some of these might be emblematic of larger issues with Windows 8 that fully Metro-optimized third-party browsers might not fix. As I worried, Microsoft may have made it harder to use with a keyboard and mouse, to the point that if your computer doesn’t have a touchscreen, don’t bother upgrading it to Windows 8, no matter how well it meets the other hardware requirements. Microsoft embraced a “minimalist” design aesthetic for Windows 8 in general, minimizing the constant presence of interface elements both in the OS and in most of its own apps, going against not only its own past habits but even iOS and Android precedents – but the end result pretty much just amounts to the interface feeling clunky even for touchscreen users and taking too many steps to do anything; the problem with the tab bar is only a specific case of the need to swipe from the top or bottom to reveal the “app bar”, which in some apps is necessary to do damn near anything. (You seriously couldn’t include a tappable “all apps” button on the Start screen without using the app bar, Microsoft?)

Similarly, I usually end up accessing the Start screen by swiping from the side and pressing the Start charm, which feels like one step too many. The Windows logo on the Surface accesses the Start screen, but its location just above where the cover snaps in means it’s awkward to reach for and easy to hit by accident when holding the device vertically; I would have placed it on the right side (when it’s resting on the kickstand), which just so happens to mean iPad users holding it vertically, assuming they rotate it the way I do (admittedly I’m left-handed), would find it in a familiar place. (On some other machines, the equivalent button is almost completely hidden when docked, making me wonder what the point of its placement is.) I’m tempted to do the same with the other “charms” (the Surface re-appropriates the function keys for them, but I never used the Windows button to open the Start menu, and the function keys are almost as awkward to reach for as the Windows logo on the Surface itself); the Search charm takes on the function of searching within every single app, including the searching in IE I’m looking for (though while I can change apps on the fly, I’m still not sure I can change search providers – the Wikipedia app feels almost like a beta, so it’s no replacement), making it too useful to be the two-step process it is.

That’s not all: the decision not to have actual folders on the Start screen, only “groups”, is completely mystifying regardless of what it means for Favorites – a backtrack from the hierarchical organization the Start menu has had since Windows 95 and that the iPad exhibits as well, forcing most users to scroll long distances to see many of their tiles and forcing less-than-ideal tile organization in many cases. Apparently Microsoft wants you to zoom out and then tap a group if this is a problem for you, but once again that’s one step too many; I should have the option to start with the screen zoomed out if that’s the case. And it means there’s only one level of “group”, and all tiles have to be within one of them, so if you want to look at or use any of the tiles, all the groups are displaying all their tiles as well. In this and other areas, I think the folks at Microsoft would have benefited from actually using the iPad, as opposed to apparently hearing about it secondhand.

The big innovation of the Surface is supposed to be its “Touch Cover”, but I actually prefer “typing on glass” on an iPad or Android tablet to the “typing on cloth” feel I get from the Touch Cover; at least on the iPad I’m typing on a hard surface that sounds and feels like a real thing. Presumably the Touch Cover was made as it is (and hyped much more than the only marginally more expensive Type Cover) because if you fold the Type Cover back behind the Surface, it feels weird having a back side of keyboard keys.

Yet despite all of this, I’ve fallen completely in love with Windows 8. Microsoft did not merely re-appropriate the Windows Phone interface for its regular Windows product. It’s completely overhauled our notion of what a computer is, merging the tablet and laptop, putting the final nail in the coffin of the desktop computer as we know it, and serving notice to Apple that it won’t wear its second-class yoke so easily. When it came out, many tech commenters compared the Surface unfavorably to the iPad, in price and in general experience – perhaps inevitable, but missing the point of what Microsoft was trying to do. In fact, I don’t consider even the Windows RT version of the Surface to be a tablet at all; to me, a tablet has to be in some way connected to a cell-phone network. The Surface is a laptop that happens to have some tablet-like features, and in that sense it’s an absolute game-changer – or at least, what Windows 8 represents is.

There’s a book I’ve heard of but not read called The Innovator’s Dilemma, which attempts to answer a question I’ve long wondered about: why companies facing the advent of an innovation that threatens to undermine their business model so often attempt to kill it rather than adapt into providers of the new innovation. The short answer seems to be that the new technology is rarely an actual improvement over the old one for customers used to the old one, at least at first, so any move to embrace the new technology will inevitably alienate existing customers. This seems like a false dichotomy to me; most of the time, there are opportunities for synergy between the two that make the product more enticing to new customers and help transition old customers to the new paradigm.

For example, I appreciated that Blockbuster at least tried to compete with Netflix with its “Total Access” service, which was advertised as the same DVD-by-mail service as Netflix, at the same price, but with the additional option of returning old DVDs to a Blockbuster store and getting the new one instantly without waiting for it in the mail; problem was, it didn’t offer streaming like Netflix already did, and I believe you had to pick store or mail ahead of time and stick to it. (In my opinion, Blockbuster probably could have done better competing with Redbox, but it was already dying by the time that became A Thing.) Netflix itself became the impetus for my learning of The Innovator’s Dilemma in the aftermath of its failed attempt to rebrand its DVD-by-mail service as Qwikster, because of a Slate article claiming it was Netflix’s attempt to better transition to a business based on streaming content; but the fact that a major reason the Qwikster rebrand went over so poorly was the loss of the ability to manage streaming and DVD deliveries together suggests that’s not the whole story.

It wasn’t obvious the iPad was such a threat to the entire PC paradigm, and thus to Microsoft’s hegemony. Microsoft could have continued merrily along its way, with old-style PCs remaining completely separate from tablets, and it wasn’t obvious that status quo couldn’t hold forever. Yet Microsoft apparently saw the iPad as such a threat that it decided to completely destroy the PC as we knew it, effectively undermining its own monopoly and threatening to alienate its hardware (and software) partners, by aiming its new operating system at a new kind of device that would effectively merge the tablet and the laptop. It embraced the touch-screen ethos so wholeheartedly that a lot of what makes Windows 8 harder to use with a keyboard and mouse might be unnecessary, much like the radical separation between Netflix and Qwikster; there isn’t really any reason, in and of itself, why the desktop couldn’t have been outfitted with a traditional Start button. It seems Microsoft is sending a message: this is what Windows is going to be like from now on, we’re only including a desktop at all for the sake of people used to it or without touchscreens, and before long calling the operating system “Windows” will seem a misnomer.

Perhaps Microsoft saw the trajectory the computer business was headed down; you may recall that last year I predicted the home desktop computer would become a thing of the past, with laptops becoming more and more popular and powerful and with the potential ramifications of computers hooked up to the TV, such as the Google TV and Apple TV. Perhaps Microsoft realized it was fast becoming a maker of operating systems for laptops, and that going forward it would need to optimize its OSes for them – that the computer of 2012 shouldn’t be running essentially the same operating system as in 1995, or even 2001. Perhaps, too, Microsoft saw the iPad as threatening many of the purposes people were still using laptops for; if the iPad couldn’t replace the personal computer now, it was only a matter of time before Apple revamped the Mac interface to be closer to the iPad’s. (Indeed, Microsoft was pushing the notion of a “tablet PC” as early as a decade ago, when Windows XP was fairly new.)

Microsoft gets a bad rap for stealing ideas from other companies (especially Apple) rather than actually innovating itself, but it would be more accurate to say that it’s made a business out of being Nintendo to other people’s Atari – of being the one to refine other companies’ raw ideas into the forms that allow them to take over the marketplace. Microsoft can regularly see some other company’s innovation in a bigger picture and fix niggling flaws you didn’t even know existed. Windows 95’s interface was jeered as a ripoff of the Mac’s, but it was substantially more user-friendly and included the taskbar, which made multitasking substantially easier than it had been on either Windows or the Mac. Windows Phone is an even more obvious example of this, and with Windows 8 Microsoft took the same notion of “live tiles” to the iPad’s form factor while also beefing it up to the capability of a full-size computer.

In effect, Microsoft has made a business out of taking technologies that people saw as threats to its business model and not only embracing them, but doing so so wholeheartedly that it often becomes the one to introduce them to the general public – embracing the challenge of The Innovator’s Dilemma like few other companies out there. Such may have been the case with the Internet itself when Microsoft introduced Internet Explorer and killed Netscape. Such was the case with cloud computing, about which I still see hyperbolic, spammy ads (from the Motley Fool, of all places) touting it as the downfall of Microsoft – yet for many people, it was Microsoft itself that introduced them to the notion of cloud computing with its Windows 7 “to the cloud!” ads, and between that, the introduction of Office 365, and how heavily Microsoft has pushed SkyDrive (there’s a reason Windows 8 ships with a SkyDrive tile on the Start screen but no first-party touch-based Explorer app), it’s clear Microsoft is alive and well, if not quite as strong as ever. And with Windows 8, such might also be the case not only with the iPad, but with the late-90s concept of the “network computer”, a computer with next to no hard drive that loaded all its software from the Internet – which Google finally brought to something approaching reality when it unveiled the Chromebook a few years back.

I don’t believe the touchscreen will ever entirely replace the precision of a mouse pointer, and all you have to do to figure out why is think about the basic ways to manipulate a cell in Excel. To select a cell, you click on it. To select a range of cells, you click on one cell and drag until the cells you want are covered. To select an entire column or row, you click the column letter or row number. To move a cell, you click on its border and drag it; to fill the information in a cell into adjoining cells, you click and drag the little control indicator in the corner of the cell border.

Okay, so how do we translate this to a touch environment? Well, obviously you tap where it says click, and tap and drag where it says click and drag. So you tap on a cell to select it, and move your finger from one cell to another to select a range. But wait, it’s kind of hard to select a border a few pixels wide with a finger. No problem; just tap and drag on an already-selected cell or group of cells. Sure, that makes it harder to start a new range from a currently-selected cell, but that’s an acceptable tradeoff. What’s that? You want tap and drag to move the display of the spreadsheet, not select a range of cells? What about the stuff you use the right mouse button for? And what about the stuff you hold the control keys (Ctrl, Alt, Shift) for? Ay-yi-yi-yi… The iPad’s Excel knockoff “Numbers” does an admirable job of trying to appropriate all these functions, but there are still some odd holes and quirks. Perhaps the stylus can finally come into its own as a replacement for this sort of precision, though it doesn’t provide the same sort of feedback as a mouse pointer – and I doubt it can replace being able to hover over a hyperlink to see where it leads without clicking it (something Microsoft didn’t try very hard to keep in its new browser).
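The dilemma above boils down to a routing problem: a touchscreen has only a couple of raw gestures (tap, drag), while the mouse version of Excel distinguishes many more inputs, so the same drag has to mean different things depending on where it starts. Here is a minimal sketch of one possible routing scheme – every name and rule in it is hypothetical, invented only to illustrate the tradeoffs described above, not how any real spreadsheet app behaves:

```python
# Hypothetical gesture router for a touch spreadsheet. Each branch "spends"
# the drag gesture on one action, taking it away from every other action --
# which is exactly the ambiguity discussed above.

def route_drag(on_fill_handle, on_selected_cell, two_fingers):
    """Decide what a drag means from the context where it starts."""
    if on_fill_handle:
        return "fill-adjacent-cells"   # the control indicator in the corner
    if on_selected_cell:
        return "move-cell"             # dragging an already-selected cell
    if two_fingers:
        return "select-range"          # reserve multi-touch for range select
    return "scroll-view"               # a plain drag pans the spreadsheet

def route_tap(on_column_header, on_row_header):
    """A tap is less ambiguous: column, row, or single cell."""
    if on_column_header:
        return "select-column"
    if on_row_header:
        return "select-row"
    return "select-cell"
```

Note what the sketch cannot express at all: right-click context menus, Ctrl/Shift-modified clicks, and hover – the inputs with no natural touch equivalent, which is where the holes and quirks come from.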

Still, with Windows 8 Microsoft is firmly leaving behind not only those who are too used to a keyboard and mouse to take to a touchscreen, but also those who have little use for touch because their work involves heavy typing and inherently needs the precision of the mouse pointer – the sort of hardcore computer user that has long disdained Windows and may still be too attached to the desktop to consider adopting a laptop, a group that could potentially include most Photoshop users. To these people, Microsoft is saying: Tough. We have always striven to make the computer more user-friendly – it was we who ultimately brought the computer to the masses. If we haven’t made it crystal clear before, we’re now making it explicit that not everyone has the same needs as uber-nerds, and with Windows 8 we’re making a conscious decision to focus wholly on the consumer market and completely leave the uber-nerds behind. If you can’t face the prospect of a fully touch-based Windows in the future, you might as well move on to Linux now if you haven’t already – unless you’re counting on Apple never deciding to iPad-ize the Mac.

In this, Windows 8 represents a milestone in the history of computing, one so momentous it could be considered the climax of the computer’s evolution from an expensive tool for highly academic settings to a device so simple anyone can use it. The advent of Windows and Mac OS was a revolution in user-friendliness for the computer, introducing a level of abstraction between the internal level of the computer’s programming and the user-interface level through the mediator of the mouse-controlled graphical user interface. Now, almost 30 years later, Windows 8 marks the turning point in a revolution just as massive, one started by the iPhone and iPad that makes user-friendliness the first consideration over any underlying assumptions of the hardware or even the most basic assumptions a programmer might have about a keyboard. Before, everyone was still using a “computer”, the same device a programmer used to create the software running on it, something an engineer might use to solve important problems. Now, computing technology has reached the point that for most people, it has transcended the notion of using a “computer” entirely.

The problem for Microsoft is, the more I play with the Surface, the more I find that Windows 8 itself kind of feels like a raw idea inviting refinement by some other company. Microsoft took a massive risk with Windows 8, and for it to pay off they had to leverage their existing advantages to create something extraordinary, and I’m not convinced they did. (Hopefully Microsoft won’t make us wait for Windows 9 to work out the kinks, because they might not have that long if Apple and Google are already hard at work on their own refinements – though apparently Microsoft is moving Windows to the same incremental-update schedule as Firefox and Chrome, with an update to Windows 8 potentially coming as soon as August.) But even if Microsoft is, for once, the Atari to someone else’s Nintendo, they’ve introduced or at least accelerated the most revolutionary change to our lives since the popularization of the personal computer and Internet themselves.

The Death of Google Reader: Not the End of the World

I have used Google Reader as my source for RSS feeds since adopting Chrome as my main browser four years ago. The reasoning was simple: Chrome, unlike IE and Firefox, didn’t have a built-in RSS reader, and it made sense to use Google’s own service for the purpose.

Today, Google announced that I won’t be able to use Google Reader after July 1. The Internet has reacted as though it were the coming of the Mayan apocalypse. But honestly, I’m not that broken up about it; if anything, I miss the ability to nest folders I had in IE and Firefox, and I’ve complained in the past about Reader’s idiosyncrasies, like letting you set the viewing order per feed but the viewing of read items only globally, when the two are inextricably linked.

Why are people so worked up about Reader? Part of it appears to be that Reader was a social RSS reader, one that allowed users to share feeds and articles and discover feeds they might enjoy based on what others were following. Google has since moved most of Reader’s social functions to Google+, and theoretically it could have integrated Reader into Google+ pretty easily, potentially gaining a big leg up over at least Facebook in the social media wars – though the lukewarm reaction to the social-function shutdown suggests it wouldn’t have been a sure thing. But Google may never have really known what it had with Reader, only that Reader’s team knew social and was thus a valuable resource to help build its more explicit social pursuits. Another reason is Reader’s sheer simplicity and no-frills approach, but it’s hard to get more no-frills than IE’s RSS reader.

On the other hand, Google’s stated reason for shutting down Reader – the service’s declining usage – has exposed me to the notion that RSS is obsolete in the age of Twitter. It’s true that I’ve removed several feeds from Reader when the same feeds became available in Twitter form, if the rest of what the relevant account tweeted was sufficiently interesting to me. But there are quite a few feeds where I did not and don’t expect to anytime soon. Twitter lacks nuance: if you follow someone, you’re subjected to every little thing they post, even if they’re the dreaded “what I’m having for breakfast” tweeters. I can’t imagine myself following more than 20 or so people on Twitter, and I’m very suspicious of people who claim to follow hundreds or thousands of accounts, or who feel the need to follow everyone who follows them as though Twitter were just like Facebook. (More on this later this year.) I’ve actually added RSS feeds that weren’t even linked to anywhere, simply by knowing the URL structure of popular blogging providers, because I only wanted to follow certain categories of a blog.
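That URL-structure trick works because the big blogging providers expose per-category feeds at predictable paths even when no page links to them. A quick sketch using the well-known conventions for WordPress (a feed at `/category/<slug>/feed/`) and Blogger (a per-label Atom feed under `/feeds/posts/default/-/<label>`); the `example` hostnames are placeholders:

```python
# Build category-level feed URLs from a blog's base URL, exploiting the
# predictable URL structure of popular blogging providers.

def wordpress_category_feed(base_url, category_slug):
    # WordPress serves an RSS feed for each category at /category/<slug>/feed/
    return f"{base_url.rstrip('/')}/category/{category_slug}/feed/"

def blogger_label_feed(base_url, label):
    # Blogger serves a per-label Atom feed at /feeds/posts/default/-/<label>
    return f"{base_url.rstrip('/')}/feeds/posts/default/-/{label}"

print(wordpress_category_feed("https://example.wordpress.com", "webcomics"))
# https://example.wordpress.com/category/webcomics/feed/
print(blogger_label_feed("https://example.blogspot.com", "reviews"))
# https://example.blogspot.com/feeds/posts/default/-/reviews
```

Paste the resulting URL into any reader and you’re subscribed to just that slice of the blog – exactly the kind of nuance Twitter can’t offer.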

Ironically, another big leg up RSS has over Twitter is something some people have cited as one of their favorite parts of Reader: the ability to read articles without clicking a link. Not all web sites put their full content in their feeds, which is understandable in an age where hits are king and advertising revenue must be maximized, but I find it incredibly useful to read content from several different web sites, one at a time, without leaving a given page, only clicking through if I feel the need to leave a comment or save a page for later. As it stands, this is impossible on Twitter, and I suspect it’s a big key to the popularity of Tumblr, which could best be described as a “social blogging” platform, or Twitter without the character limit.
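Mechanically, whether an article is readable inside the reader comes down to what the site ships in its feed: full-content feeds carry the whole article body (in RSS, typically in a `content:encoded` element), while excerpt-only feeds put just a teaser in `description` and force the click-through. A minimal stdlib sketch that tells the two apart – the sample feed here is invented for illustration:

```python
# Distinguish full-content RSS items (readable in the reader) from
# excerpt-only items (which require clicking through to the site).
import xml.etree.ElementTree as ET

RSS = """<rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Demo feed</title>
    <item>
      <title>Full post</title>
      <description>Short excerpt...</description>
      <content:encoded>&lt;p&gt;The entire article body.&lt;/p&gt;</content:encoded>
    </item>
    <item>
      <title>Excerpt only</title>
      <description>Click through to read more.</description>
    </item>
  </channel>
</rss>"""

NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def readable_in_place(item):
    # Full article text lives in content:encoded; a bare description
    # usually means the feed only ships an excerpt.
    return item.find("content:encoded", NS) is not None

items = ET.fromstring(RSS).findall("channel/item")
flags = [(item.findtext("title"), readable_in_place(item)) for item in items]
print(flags)  # [('Full post', True), ('Excerpt only', False)]
```

A reader like Google Reader is essentially doing this check for every item, rendering the full body inline when it exists – which is precisely what a stream of bare links on Twitter can never do.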

RSS might eventually become obsolete, but it’s going to take a number of advances in other areas to replace it entirely. On a note more related than you might think, I guess this means I’m setting July 1 as a target date to get a new computer (you wouldn’t believe the problems this one has developed) so I can move my RSS feeds to a Windows 8 app. More on both of these next week.

Some Quick Thoughts on the Future of Webcomics

Last week John Allison, of Scary Go Round and more recently Bad Machinery fame, wrote a blog post expressing his fear that, as more and more webcartoonists take to social networking sites like Tumblr, it will be harder for them to make money off their work, because even if their work goes viral, it will get lost in the shuffle of people’s Tumblr feeds and no one will make the connection to them as the creator of that work. As a result, he fears the decline of the sort of “community” that has so characterized webcomics up to this point.

Personally, I think his fears are overblown; for one thing, I find it hard to compare Tumblr cartoonists with other webcartoonists, in part because most blogging platforms that aren’t modified WordPress make poor homes for webcomics anyway, mostly due to archive management. As such, I suspect most Tumblr cartoonists aren’t very interested in fame and fortune anyway, and are more of the David Morgan-Mar frame of mind, of just wanting to share their creations with the world. In any case, the question is: would, say, Kate Beaton still have attracted a large following if she’d started out on Tumblr instead of LiveJournal? (After all, the former is essentially an evolved version of the latter.) Since most webcomics got their start through word of mouth, I find it hard to believe that the boom in social networking is anything but good for them (though whether it’s good for the quality of the content that becomes popular is another matter, if it means the most popular comics essentially become nothing but meme factories).

But Allison’s broader fear is the notion that, for many, “social media ARE the Internet”, making it harder for web sites like his to catch anyone’s notice. I think this too is overblown, but mostly because of a far larger force reshaping the Internet that’s both largely responsible for that notion and that could end up sweeping both visions of the Internet under its feet, one that does pose a tremendous challenge, but ultimately a tremendous opportunity, for webcomics. I’ll have more on that next week.