How Windows 8 Changes Everything, Part IV: The Triumph of Scott McCloud (Or: “Webcomics” Are Dead. Long Live Digital Comics.)

But for all of hypertext’s advantages, the basic ideas behind hypertext and comics are diametrically opposed! Hypertext relies on the principle that nothing exists in space. Everything is either here, not here, or connected to here, while in the temporal map of comics, every element of the work has a spatial relationship to every other element at all times.
-Scott McCloud, Reinventing Comics

In an app-based future, one where social media becomes most people’s gateway to the Internet if not defining it, it’s easy to fear, as John Allison did a few weeks ago, that those who have taken advantage of the openness of the Web may find themselves increasingly abandoned and unable to gain traction. But as I said in Part II and in my response to Allison, the tools of the new Internet paradigm are open to anyone, with nothing stopping the new paradigm from being as open as, if not more open than, the old Web-based one.

Four years ago, I wrote my Webcomics’ Identity Crisis series, the core of which (in Parts III and IV) explored the obstacles to the future of comics Scott McCloud outlined in Reinventing Comics. I felt that the one revolution McCloud advocated – the infinite canvas – was wholly dependent on the other – micropayments – in order to truly catch on, because any other revenue model (at least any where the form the online version took was relevant) depended on breaking the story up into parts, defeating much of the point of the infinite canvas and often even rendering it counterproductive. Micropayments, for their part, were doomed to fail, at least as far as webcomics were concerned, because of the psychological barrier against paying anything for anything. Perhaps they might have become the norm had they been ready when the Internet first caught on, but so long as enough of the Internet’s content was available for free, it would be extremely difficult to produce something valuable enough that a substantial number of people would pay even a cent or two for it – especially if it was possible, even easy, for someone to repost it elsewhere for free, especially if they had to buy it sight-unseen from someone whose content they didn’t already know they wanted, and especially if they had to pay for something that had previously been free.

Ironically, one of the more famous proponents of the “psychological barrier” theory for the failure of micropayments was… Chris Anderson, in his book Free: The Future of a Radical Price. Though he never directly mentioned it, perhaps hoping people wouldn’t notice the contradiction and accuse him of holding whatever position attracted the most attention, “The Web is Dead” could be seen as an implicit admission of how wrong he was then. The thesis of “The Web is Dead” was that people would pay for the same content they could get for free, simply because it came in a form that worked better and was easier for them. If we are moving to a future where consumers are increasingly willing to pay to receive content on their smartphones that was once available on the Internet for free, it may well be only a matter of time before micropayments take hold in this far more fertile soil.

Already most of the apps in the Windows store are available for less than half of the magic $10 price most online retailers need to hit to justify the cost of a single credit card transaction. I’ve long felt that the fees people pay to their Internet service provider for Internet access were low-hanging fruit for micropayments, similar to how charges for pay-per-view content appear on your cable bill, if it weren’t for the numerous ways to access the Internet that other people pay for. The advent of cloud computing and the single login, including devices like those that run Windows 8 that are tightly associated with a single online account, makes it far easier to charge your credit card on the fly without introducing extra steps and at virtually any price. While producers of “fungible” content that can easily be spread elsewhere will probably continue to need to offer their wares for free (or for just enough to render piracy inconvenient), we may yet see the day when producers of other types of content, to take just one example, allow anyone to access their content for a small charge, or for free if you buy their app once (and possibly pay a regular subscription fee thereafter).

It’s highly unlikely that a single comic, even a full-size comic book or graphic novel, would justify its own app, but the point is the technology exists to offer it at any price, regardless of the mechanism. We’ve already seen the development of an “iTunes” for comics, in the form of Comixology and its associated formats, and Marvel and DC have already embraced the online, digital distribution of their wares for new mobile devices, with Marvel even going so far as to produce what I call “digital stage comics” for their Avengers vs. X-Men event. As Allison’s attitude shows, however, the webcomic community has been surprisingly slow to adapt to this new world order. Many webcomics have developed apps for the distribution of their content, but like webcomics in general, most of them are comic strips easily suited to distribution on a periodical basis (though Least I Could Do offers access to its archives through its app for just 99 cents).

If the web starts to be pushed to the background, you could see webcomics, as we know them today, pushed to the background as well. Even comic-strip-type webcomics may soon find their main means of distribution through “comic page” apps that aggregate them together. (One wonders if this was one of the ideas Scott Kurtz planned to hawk to syndicates with last year’s consulting offer.) But the real impact will be felt in “long-form”, comic-book-like webcomics, whose creators could jump at the chance to exploit the exposure advantages of the Internet without any of the drawbacks. It was, after all, the comic book model McCloud had in mind with his advocacy of micropayments and the infinite canvas. While the problem of spending money on unproven content hasn’t gone away entirely, some workarounds have sprung up; recently my dad published a prose novel that he promoted in part by making a short snippet available free for people considering the book on Amazon, a tactic that has apparently helped many novels achieve success through online sales, including some you may have heard of.

Beyond micropayments making the infinite canvas far easier to monetize, the advent of touchscreen-enabled devices eliminates the main interface-based constraint on the infinite canvas as well. Maintaining an “unbroken reading line” would seem to imply the horizontal infinite canvas, where the row of panels scrolls off to infinity to the right, but most applications of the infinite canvas have been of the vertical variety, due to the nature of mouse wheels, the most hassle-free way to scroll on the computer. But the touchscreen does away with the need to scroll entirely; all it takes to move to a different part of the canvas is a swipe or a drag of the finger across the screen, and it’s even possible to zoom in with a double-tap. This isn’t limited to comics; I really don’t like how the Kindle and other e-readers feel the need to stick to the norms of print by chopping up books into discrete pages. I don’t know whether they do, but I hope Comixology’s formats and others allow people to make their “page” whatever size they wish; we could see an explosion in long-form stories told in forms unthinkable not too long ago. I can’t help but wonder if, when McCloud semi-unintentionally anticipated the iPad in Reinventing, he was giving a look at the sort of device he had in mind when talking about the infinite canvas, without explicitly saying so.

Many of the applications of the infinite canvas McCloud proposed will probably always be too gimmicky to catch on, but there’s nothing stopping those applications with real storytelling potential from changing the way you look at comics. It’s possible the digital comic of the future will look a lot like Homestuck – essentially, a variant on the digital stage comic, only told in many thousands of tiny chunks, highlighting another failing of hypertext: the way advertising on the Web rewards breaking stories up into as many tiny units as possible so as to score more pageviews to drive up the price of advertising. With alternate business models, it would no longer be necessary to exploit perverse incentives like this, because the reader could be charged directly in a way that makes sense.

This is only a hint of how the move to an app-based future can be a boon to independent producers of content prepared for it, despite the decline of the open, free-wheeling web they have taken advantage of to this point. We could be on the verge of an explosion in content of all shapes and sizes, a golden age of artists flocking to the most rewarding environment the arts have ever seen, creating content that takes forms never before possible, and potentially achieving the long-deferred vindication of Scott McCloud’s original vision. The rise of devices like the iPad and Surface doesn’t mark the end or a decline of the great revolution impelled by the rise of the Internet over the course of the last decade. Rather, it’s just the beginning.

How Windows 8 Changes Everything, Part III: The Nature of Social Media (And What a Blast from the Past Means for the Future of Facebook)

For a lot of people, social media ARE the internet.
-John Allison, creator, Scary Go Round and Bad Machinery

It’s becoming apparent to me that most people do not use the Internet the way I do.

I am not a social media fiend. The only social network I’m on is Twitter, and I’m not even sure I use Twitter the same way most people do; I only follow 15 or so people on Twitter and I can’t even imagine following many more than that. I get the sense that for many people, social media completely defines their online life, serving as their gateway to the rest of the Internet, to the point that any attempt to understand the workings of the Internet, from the failure of RSS to catch on to what a post-Web future of the sort Chris Anderson describes might look like, has to start with social networking first and foremost.

When Google Plus launched nearly two years ago, it made a big deal about its “Circles” feature, which recognized that people don’t have just one type of friend. Circles allowed you to sort your contacts into groups, such as friends, family, coworkers, more distant relatives, college buddies, and so on. It struck me that this model was the opposite of Twitter’s: when I first discovered Twitter, I applauded it for recognizing that “following” someone isn’t necessarily reciprocal the way “friendship” has to be on Facebook – Twitter lets you follow anyone’s tweets without requiring them to follow you back – while Google+ effectively let you determine who received whatever messages you sent, without their input.

What would a social network be like that combined the two? Well, anyone could choose to “follow” any of the public postings of anyone else. A person could then organize the people who follow them into groups, like Google+’s circles; perhaps they’d receive a notification whenever someone they followed followed them or vice versa, asking if they’d like to place that person in any of their circles, or perhaps someone could ask to join any of their circles, similar to how Facebook’s “friendship” works now. Some of their posts would continue to be public, while others would be restricted to certain circles. You’d effectively have two different levels of “following”: a basic level allowing you to follow anyone and anything, like how many people use social media now, and a deeper level for your actual friends, indeed as many “deeper levels” as you want. This would serve as a curb on the proliferation of “friends” that plagues Facebook, and it could also allow the social network to be more open; many if not most Facebook profiles are closed to nonmembers, and often even to people who aren’t friends. With this system, anyone could still have a public timeline anyone could view like on Twitter, but they could still restrict some of their postings to people they’re closer to, which Twitter can’t do except in the form of “direct messages” (which no one uses) and restricting the whole account to followers only.
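To make the hybrid concrete, here’s a minimal sketch in Python of how such a two-level model might be represented. This is my own hypothetical data model – names like `Member` and `audience_for` are invented for illustration, not taken from any existing network:

```python
# Sketch of a hybrid social graph: Twitter-style one-way follows at the
# basic level, Google+-style circles layered on top as the deeper level.

class Member:
    def __init__(self, name):
        self.name = name
        self.following = set()   # basic level: anyone, no permission needed
        self.circles = {}        # deeper levels: circle name -> set of members

    def __repr__(self):
        return self.name

    def follow(self, other):
        """One-way follow, like Twitter: no approval required."""
        self.following.add(other)

    def add_to_circle(self, circle, member):
        """Sort a contact into a named group, like Google+'s circles."""
        self.circles.setdefault(circle, set()).add(member)

    def audience_for(self, post_circles):
        """Who can see a post restricted to the given circles?
        An empty list means the post is fully public."""
        if not post_circles:
            return None          # public: visible to any follower or passerby
        audience = set()
        for c in post_circles:
            audience |= self.circles.get(c, set())
        return audience

alice, bob, carol = Member("alice"), Member("bob"), Member("carol")
bob.follow(alice)                    # anyone can follow alice's public posts
carol.follow(alice)
alice.add_to_circle("friends", bob)  # but alice decides who her "friends" are

print(alice.audience_for(["friends"]))  # {bob} - carol sees only public posts
print(alice.audience_for([]))           # None - public timeline, open to all
```

The point of the sketch is that the two levels don’t interfere: carol can follow alice’s public timeline without alice’s involvement, while friends-only posts reach only the people alice herself has sorted into a circle.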

Perhaps we could take this further, and somehow recognize when an actual group of people all (or almost all) count one another as friends, or in analogous circles. The social network could recognize this group as a self-contained group in its own right, enabling them to better organize and converse with one another as a group. This doesn’t have to be limited to an actual circle of friends; in fact, the great shortcoming of most social networks is their inability to recognize groups of people with a common interest and serve as a place for them to discover one another and talk about that interest with one another. As such, people with common interests end up fractured among many different sites, often blogs that become a hub for the community even though they may not work well for this purpose. When I launched the forum, I said that forums still had a place in an era of blogs and social media, as a place for a community to gather and talk about common interests, but why have a forum and a collection of whatever other sites are out there for this purpose when anyone interested in a topic can connect with everything everyone else is doing and saying in that topic in a single place, perhaps one that can accommodate blogging as well?
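A toy sketch of how a network might detect such a self-contained group: treat mutual follows as edges and search for sets of members in which every pair is mutual (cliques). The names, data, and brute-force approach here are purely illustrative; a real network would need a far more scalable algorithm:

```python
# Find groups of members who all mutually follow one another, as a stand-in
# for "a group of people who all count one another as friends".

from itertools import combinations

# Hypothetical follow graph: who follows whom.
follows = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob"},
    "dave":  {"alice"},
}

def mutual(a, b):
    """Two people are 'friends' here if each follows the other."""
    return b in follows.get(a, set()) and a in follows.get(b, set())

def mutual_groups(min_size=3):
    """Brute-force search for groups in which every pair is mutual.
    Fine for a sketch; hopeless at real-network scale."""
    people = sorted(follows)
    groups = []
    for size in range(min_size, len(people) + 1):
        for combo in combinations(people, size):
            if all(mutual(a, b) for a, b in combinations(combo, 2)):
                groups.append(set(combo))
    return groups

print(mutual_groups())  # one group: alice, bob, and carol
```

Here alice and dave follow each other too, but dave isn’t mutual with bob or carol, so he falls outside the detected group – exactly the “all or almost all” distinction the network would have to draw.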

Being able to serve not just as a site that can be all things to all people, but specifically to connect people with common interests, might be the one great advantage with which someone might yet be able to topple Facebook. The social network that can best Facebook is one that can leverage the network advantages of having everyone on there, yet also cater to specific interests. In that sense, it may be a flashback to the original social network, Usenet, but adapted for the modern web. Were that to happen, it could be the last break from the Web as we know it now and the ultimate realization of Chris Anderson’s vision. Farhad Manjoo thinks this is impossible, that no social network that claims to be all things to all people can also serve as a social network for a particular interest. I think it can if it opens up the toolbox so that the community surrounding a particular topic can customize their own corner of the network with all the functionality they could possibly want. That probably means the social network of the future will have to be open source – and without the ability to monetize it, that will make it very difficult to run.

Perhaps the social network of the future is already under construction, in the form of WordPress’ BuddyPress plugin. This plugin allows any WordPress site to set up its own social network within it, something that seems kind of odd to me; the network effects of social networks are such that any social network for a particular site would seem to have limited utility. But if someone were to set up a competitor to Facebook and run it on BuddyPress, it could catch on like wildfire, if only among people concerned about Facebook becoming just another evil company with little regard for privacy – but that might be enough to attract everyone else in the long term, if it truly embraces the open-source ethos. One thing I know for sure: I’ve finally closed up shop on the Morgan Wick Forum, which had become little more than a wretched hive of spam and villainy, and if and when I relaunch it, it’s probably going to be with BuddyPress installed (if only because that might be the only way to get some of the functionality, like high-level mod tools and private messages, I’m looking for).

Or perhaps the social network of the future won’t be a single site at all, but rather new technologies and protocols to link people together without the need for a central site. This is the dream behind the notion of the “semantic web”, the idea that all you need to do is put all your relevant information in a single place in a common format and it will follow you anywhere, capable of being read and understood by anything – a concept that could be key to a truly post-HTML future. It’s hard to imagine what such a decentralized social network might look like, but that hasn’t stopped some people from trying. The growth of devices like the iPad and Surface that are so tightly connected to the Internet may help bring the semantic web into reality, or at least make it more possible, and in that sense, perhaps the real clue to the social network of the future may lie in the “People” tile in Windows 8 and Windows Phone. As more and more people move to the cloud, and to devices like the Surface that are constantly registered with an account that connects them to that cloud, it’s only a matter of time before the accounts their devices are registered with are used to help form a new kind of social network – one that might not have a single identity at all, and one that might truly define the Internet for its users. All it would take is a way for iOS, Android, and Windows users to communicate with each other seamlessly.
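As a rough illustration of the idea, a profile under the semantic web would be a single machine-readable document that any site or device could fetch and merge into its own view of you. The field names below are loosely inspired by vocabularies like FOAF but invented for this sketch, and the URLs are placeholders:

```python
# A hypothetical "publish once, read anywhere" profile document.
# Everything here is illustrative: the field names and URLs are made up.

import json

profile = {
    "name": "Morgan Wick",
    "homepage": "http://example.com/",                # placeholder URL
    "accounts": [
        {"service": "twitter", "id": "example_handle"},  # hypothetical handle
    ],
    "knows": ["http://example.com/friend1"],          # links to other profiles
}

# Any client - a browser, a "People" tile, a rival network - could consume
# this document without a central site mediating the relationship.
print(json.dumps(profile, indent=2))
```

The decentralizing trick is in the `knows` list: each entry points at another profile document somewhere else on the web, so the social graph emerges from the links between documents rather than from any one company’s database.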

So in retrospect, why did we end up abandoning Usenet, anyway?

How Windows 8 Changes Everything, Part II: The Triumph of Chris Anderson

You wake up and check your email on your bedside iPad — that’s one app. During breakfast you browse Facebook, Twitter, and The New York Times — three more apps. On the way to the office, you listen to a podcast on your smartphone. Another app. At work, you scroll through RSS feeds in a reader and have Skype and IM conversations. More apps. At the end of the day, you come home, make dinner while listening to Pandora, play some games on Xbox Live, and watch a movie on Netflix’s streaming service. You’ve spent the day on the Internet — but not on the Web. And you are not alone.
-Chris Anderson, “The Web is Dead. Long Live the Internet”, Wired magazine, September 2010

If push came to shove, one reason I might still be willing to get a Windows RT device and accept being limited to Internet Explorer as my web browser, despite its limitations, is the ability of apps to fill those holes.

IE has never had the broad-based add-on support of its competitors, but Windows 8 may suggest it doesn’t need to; many of the same add-ons I’ve added to Firefox over the years are available in some form as Windows 8 apps. Most obviously, I might be able to do without an RSS reader in IE and instead install an RSS reader app; indeed, this would have the advantage of flashing new items from my RSS feeds on my Start screen. Couple this with Twitter and e-mail apps, and I can keep up with everything in real time right from my Start screen. In fact, several of my RSS feeds might provide their own apps with their own live tiles, to the point that the very concept of an RSS reader might be unnecessary; Microsoft may one day make it possible to pin RSS feeds directly to the Start screen as live tiles.

The Start screen is so useful that even making it easier to get to with one touch or swipe may not be enough to do it justice. It’s the one element of Windows 8 that can’t be “snapped” to one side of the screen or the other, yet it may be the most useful thing to snap (to the point that without it, live tiles seem like a gimmick at best and a distraction at worst, rather than actually useful); I’d like the option to keep my Start screen constantly on screen so I can see my live tiles at a glance and pull up a new app on the fly, not unlike a traditional Start menu.

Indeed, with Windows 8 coming with a separate Bing app for searching the Internet and many other sites creating their own apps to access their content, perhaps the Metro version of IE’s light-duty nature and discouragement of the proliferation of tabs is easily explicable. Perhaps IE itself is transitioning into a “miscellaneous” app, to be used only for visiting those sites that actually require it, rather than an all-purpose must-have for exploring any part of the Internet. Perhaps it’s the traditional web browser that’s entering the twilight of its relevance as our concept of what the Internet is gets turned completely upside down.

A few years ago, Wired editor-in-chief Chris Anderson published a controversial feature proclaiming, as the issue’s eye-catching cover put it, “The Web is dead. (Long live the Internet.)” The idea was that with the advent of devices such as the iPhone and iPad, people would increasingly shun the Web, as in the sites that require a browser to access, in favor of more specialized apps that met their needs better and could be more easily monetized, especially when it came to digital content for sale. The openness of the Internet favors innovation, and for decades that openness led to unprecedented innovation in all corners of the Web that doomed the “walled gardens” of old, but once that innovation started to settle down, the Web’s tendency to be all things to all people would become a drawback, and people would flock to dedicated apps that did just one thing and did it as well as anything could, needing only an Internet connection to access their data. APIs would be the new walled gardens, providing superior performance at delivering the data they provided, and people would be willing to pay the price, in money and control, for it.

Most observers pooh-poohed Anderson’s conclusions, for a wide variety of reasons – many of them having to do with the graph at the start of the article, which only showed web traffic declining as a percentage of overall Internet traffic, joined by peer-to-peer file sharing and video, the latter of which uses a lot more bandwidth than anything else and is usually accessed via the web anyway (to say nothing of the utter lack of any measurement of “apps”). Many critics also took issue with Anderson’s definition of “app”, noting that almost all the services in the opening paragraph above at least have accompanying web sites, and in fact their “apps” are really just clients for accessing their Web sites.

I think the latter group largely missed Anderson’s point. Sure, you can use your web browser to access Facebook and Twitter, but even then they seem to stand apart from the rest of the Web, almost as their own entities: when you access another page within Twitter, Twitter loads it within its own interface, the browser itself seeming to do hardly anything at all – Twitter just sits in the browser doing its own thing. Moreover, as time has passed, Twitter and several other web sites have made their interfaces look increasingly like the iPhone’s – in order to match their apps. Facebook and Twitter are the most obvious examples of “web sites” that don’t really need to be web sites; they could exist just as well as their own specialized applications. And the same goes for all the other services Anderson lists in the paragraph above.

Before all this started, those web-watchers who saw something similar coming warned that it would close off the Web’s innovative nature, that entrenched interests would erect as many barriers to entry around the Web as traditional media had. Anderson, by contrast, predicted that this revolution would remain mainly limited to the monetization of digital content; his hyperbolic headline aside, e-commerce and things created for non-monetary reasons would continue to find a home on the Web. I get the sense both sides are making a faulty assumption; in fact, the new world of apps is just as open to anyone with the requisite knowledge of the coding languages involved as the Web is, making it actually a boon to independent producers of content, as evidenced by the nonprofit Wikipedia having its own apps. I mentioned in Part I how difficult it is to read Homestuck on the iPad or Surface, but there isn’t really anything stopping Andrew Hussie from making his own Homestuck app, which in fact might be a better way to experience it than relying on the Web site, especially given its “adventure game” motif. (This is especially the case on Windows 8, where it’s possible to make an app with rudimentary knowledge of HTML, CSS, JavaScript, and other fairly common web technologies.)

That means it’s very possible the Web really is dead, or at least dying. Anything that can be a Web site can also be an app that delivers the same content in a more focused way. Even e-commerce can be carried out within the confines of an app, as evidenced by the existence of an Amazon app in Windows 8. As the Twitter interface indicates, the Web itself is being bent to look more and more like an app, but trying to wring a site’s own isolated, comprehensive experience out of a single Web page – on a platform designed around many separate pages linked by hypertext, without bias between sites – is an inherent contradiction, and not enough to resist the trend. Why hack a Web site to make it look like its own program when it can actually be its own program? HTML has long been a clunky language, and there have been many admirable efforts over the years to expand its capabilities to match what people have been using the Web for, including HTML5 and plugins like Java and Flash that extend it further, but we may be straining its limits, or reaching the point where the effort to keep expanding it doesn’t outweigh the desire to cut out the bloat and start over.

Internet Explorer 9 introduced the ability to pin web sites to the task bar, some of which could accommodate special functions specific to that web site, as though the site were a program in itself. Windows 8 may be the final nail in the coffin of the Web, because its live tiles both obviate the need for such fake apps and provide the ultimate motivation for sites to transition to an app-based future, by providing something the Web can’t easily offer no matter how much it expands. Well before the iPhone and iPad came along, Firefox add-ons provided added functionality that was part of the browser but didn’t involve opening pages in the browser. They showed that no matter how flexible the Web page could be, it was still trapped within the confines of the Web browser – and that there were still ways to deliver content from within the browser’s interface without opening a page at all, or even to escape the browser entirely. Simply put, some sites are just bigger than the browser, and shouldn’t be restricted to its confines. Windows 8 has provided online content providers the tools they need to fully escape the confines of the Web browser and explore their potential in ways the browser could never offer.

Indeed, Windows 8 may take Anderson’s future further than even he predicted – specifically the RSS reader he mentions checking, which live tiles could replace entirely. I generally keep up with what’s going on in most web sites each day by checking two sites: Twitter and Google Reader. Were I to get a Surface, I would have three tiles lined up all in a row for me to check: e-mail, Twitter, and an RSS reader. It quickly becomes apparent that the RSS reader is a “miscellaneous” app just like IE, one jumbling together all the sites that don’t have their own apps. If any site can be an app, any site with an RSS feed can become an app with a live tile. Most providers of news either already have or will eventually have their own apps that will offer their content in new and better ways, and I’ll talk later about what all this means for webcomics. I predict that by 2015, there will be a new syndication mechanism aimed specifically at blogs, one that doesn’t simply collect text and render it in the way the reader specifies but instead allows blogs to format posts however they like, allowing them to more easily place ads and optimally organize content – a sort of “uber-app” to let blogs take advantage of the freedom and flexibility of apps. I’ve never really gotten the point of Tumblr, but perhaps it provides a hint at what the future of blogs might look like: a standardized mechanism streamlining many of their purposes and presenting them in unified fashion.
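As a sketch of the “any RSS feed can become a live tile” idea, here’s how the latest item of a feed might be wrapped in Windows 8’s tile-notification XML, using only Python’s standard library. The `TileWideText03` template name comes from Microsoft’s tile schema as I understand it; treat the exact markup and the sample feed as assumptions rather than a verified end-to-end example:

```python
# Turn the newest item of an RSS feed into a Windows 8 tile notification.

import xml.etree.ElementTree as ET

# A minimal stand-in for a fetched feed (real feeds have more fields).
rss = """<rss><channel><title>Da Blog</title>
<item><title>How Windows 8 Changes Everything, Part II</title></item>
</channel></rss>"""

# Pull out the headline of the most recent item.
latest = ET.fromstring(rss).find("./channel/item/title").text

# Build the tile payload: one wide-text binding carrying the headline.
tile = ET.Element("tile")
visual = ET.SubElement(tile, "visual")
binding = ET.SubElement(visual, "binding", template="TileWideText03")
text = ET.SubElement(binding, "text", id="1")
text.text = latest

print(ET.tostring(tile, encoding="unicode"))
```

A tile-updater app would hand this XML to the system on a schedule, so the Start screen flashes the feed’s newest headline without the reader ever opening a browser or an RSS client.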

Google’s Chromebook, along with its predecessor ideas, subscribes to the ethos that the browser can be the OS. Microsoft has been inspired by Apple to flip that script around: the OS is the browser. Technically, the OS has been the browser since Windows 98, but only recently has Microsoft really subscribed to it as more than a way to raise the hackles of antitrust agencies – and my hunch is they’re closer to the future of computing than Google is. The rest of this series will explore the implications of all this, great and small.

How Windows 8 Changes Everything, Part I: The Rebirth of Microsoft

What you have seen and heard should leave no doubt that Windows 8 shatters perceptions of what a PC now really is. We’ve truly reimagined Windows and kicked off a new era for Microsoft and a new era for our customers…With a glance, you will always know what’s going on in the world and with the people who count in your life…The experience is really magical. You log in just once and you see your device light up with your life. Buy a new computer, it lights up with your life.
-Microsoft CEO Steve Ballmer, Windows 8 launch event, October 25, 2012

I don’t like Windows Phone.

Oh, I liked it in theory – the notion of simplifying and bringing together so many ways of communicating with the same people, of simplifying the very concept of a smart phone, made me put it at the top of my list of desired operating systems should I ever get a smart phone. I haven’t jumped on the smartphone bandwagon yet, though not for lack of wanting to – I may have said in my very first post that I tend to shy away from the stuff everyone else finds popular, but when the likes of Instagram and Angry Birds become household names almost entirely off the back of smartphones it’s clearly become more than a passing fad – but I suspect Mom quite rightly doesn’t trust me with something that leaves me connected to the Internet no matter what she does.

So like I said, I liked the concept of the Windows Phone, until I actually tried it and found the interface clunky, not for me, and missing its greatest opportunity. The most useful source of unification would have been the unification of social networks, yet those seem to be consigned to the “People” tile, which just flashes images of the people therein without reporting new messages like the other tiles; calling people not in your address book is a major pain, and the phone seems to assume you have a personal connection to everyone in your address book. Maybe that’s how some people work, but it wasn’t for me. With the iPhone being tiny (and thus hard to be precise with), coming off as very basic, and so simple as to have a seemingly inconsistent interface, my preferred smartphone OS shifted to Android.

I’ve also never understood the appeal of tablets – they’re basically bigger, heavier smartphones that can’t call anyone and don’t fit in your pocket. This one I understood more after trying out smartphones – while a bigger screen for watching video wouldn’t move my needle much, being able to type on a keyboard where I don’t accidentally hit the wrong key every other letter would. But I was very surprised when Windows 8 was announced. As much as I liked the idea of Windows Phone, it seemed damn near unthinkable that Microsoft would make the most radical change to the Windows interface since at least Windows 95 over 15 years ago – bending the bedrock of the company’s success to match a johnny-come-lately product sitting a distant third in the smartphone wars, a product that only adopted its “live tiles” gimmick because neither Apple nor Google would. I couldn’t even imagine how it could possibly work on an actual computer. As it happened, Windows 8 seems to have gone over so poorly that it’s started to elicit comparisons to the infamous Windows Vista – which it’s actually doing worse than. Now that I’ve put in more time on the Surface than I reasonably should have, are the critics right? Is Windows 8 the New Coke of computing, or does it represent a true revolution?

When the Surface first came out, many people wondered whether Microsoft was really putting its best foot forward, and what the point was of Windows RT, the “lite” version of Windows 8 the Surface shipped with. Ostensibly it was the “tablet” version of Windows 8, but Intel has been making chips that save enough power that the full version can be and has been used to power tablets as well, with no evident disadvantages (though, in my experience, at higher prices than the equivalent RT devices), so many tech commenters saw it as essentially Windows without the ability to run old-style desktop apps.

Let me state upfront that I consider this a red herring. I felt it was inevitable that makers of most old-style desktop programs would quickly rush to fill up the Windows Store with new app versions of their programs if enough people bought Surfaces, meaning almost any desktop application you might miss would have a version fit for Windows RT sooner or later. Admittedly this might not include the two programs commenters usually chose as specific examples: iTunes, because Apple doesn’t want to support Microsoft’s attempt to compete with the iPad (and Microsoft would rather you use their Music app anyway), and Adobe Photoshop, because it’s too heavy-duty for touchscreen use and might seem to require a desktop, as with most PC games – though I could be proven wrong on this (more on this later). More to the point, I felt that regardless of its other problems, Windows RT came with a killer app that would help make sure people would buy Surfaces: free Microsoft Office. And if that doesn’t sound impressive to you, you’ve never had to pay over $100 for Office.

Admittedly the version of Office that comes with the Surface is licensed only for individuals, meaning companies – the main purchasers of Office – would have to buy a group license anyway, and if they do that they might as well shell out for the Surface Pro. So what sort of person would benefit from free Office on an individual basis, perhaps a group chronically challenged for money and with a tendency for early adoption of technology? College students. I imagined college campuses littered with people carrying around Surfaces to do their work, connect to the Internet, and whatever else they wanted to do.

But I said earlier that almost any desktop program would soon have a version for Windows RT, and there is one big exception to that group: web browsers. Microsoft has effectively blocked browser makers from the resources they would need to make their own browsers for Windows RT, so you won’t be able to install Firefox or Chrome on a Windows RT machine (though you can on Windows 8). The EU, which has long hounded Microsoft over bundling Internet Explorer with Windows, allowed this anticompetitive move on the grounds that with a tablet OS, Microsoft is entering a field already contested by Apple and Google, not protecting its traditional-PC OS hegemony.

For the record, I agree with them; in fact, limiting browser competition for Windows RT might actually backfire on Microsoft and may already be hurting the sales of RT machines (sales of Windows 8 machines in general haven’t been as strong as I would have otherwise predicted), as I would be more willing to buy a Windows RT machine and be limited to IE as my only option if the Metro version weren’t so clunky and bare-bones. For example, I have to swipe down from the top (or up from the bottom) to show the tab bar (which contains unnecessary thumbnails of each tab), when it’s always visible on the iPad browser and most Android tablet browsers; combine this with the inability to open multiple windows and it’s almost like a return to the pre-IE7 days before Microsoft embraced tabbed browsing, when you had to open multiple windows to have multiple pages open at the same time. (And near as I can tell, I can’t even use Ctrl+PgUp/PgDn to switch tabs – I have to pull down the tab bar every single time.)

Further, I’m not sure if it’s possible to search from the address bar at all, let alone change search providers on the fly from, say, Google to Wikipedia, which seems odd when most browser makers seem to be moving toward address bars patterned after Chrome’s “omnibox”, including Microsoft itself in IE9 (while the iPad browser still has a separate Search box). I can’t access Favorites unless they’ve been pinned to (and thus clutter) the Start screen, meaning among other things I can’t pull up an entire folder of favorites at once as a ready-made tab set; between this and the aforementioned loss of multiple windows (not to mention the unchanging, bulky thumbnail-tabs), it’s clear IE makes it damn near impossible to have hundreds of tabs open like I’m used to with Chrome. The RSS reader seems to have gone out the window as well, and while I understand why they did it, limiting Flash to pre-approved sites raises concerns for me about privileging content producers with resources at the expense of independent producers; already impossible on the iPad, reading Homestuck is damn near impossible on the Surface as well (though thankfully Microsoft seems to have done an about-face on this). Admittedly most if not all of these issues are irrelevant if I’m using the desktop version, but then, well, what’s the point?

On the other hand, some of these might be emblematic of larger issues with Windows 8 that fully Metro-optimized third-party browsers might not fix. As I worried, Microsoft may have made Windows 8 harder to use with a keyboard and mouse, to the point that if your computer doesn’t have a touchscreen, don’t bother upgrading it to Windows 8, no matter how well it meets the other hardware requirements. Microsoft embraced a “minimalist” design aesthetic for Windows 8, minimizing the constant presence of interface elements both in the OS itself and in most of its own apps, going against not only its own past habits but even iOS and Android precedents; the end result pretty much just amounts to the interface feeling clunky even for touchscreen users and taking too many steps to do anything. The problem with the tab bar is only a specific case of the necessity of swiping from the top or bottom to reveal the “app bar”, which in some apps is necessary to do damn near anything. (You seriously couldn’t include a tappable “all apps” button on the Start screen without using the app bar, Microsoft?)

Similarly, I usually end up accessing the Start screen by swiping from the side and pressing the Start charm, which feels like one step too many. The Windows logo on the Surface accesses the Start screen, but its location just above where the cover snaps in means it’s both awkward to reach for and easy to hit by accident when holding the device vertically; I would have placed it on the right side (when it’s resting on the kickstand), which just so happens to mean iPad users holding it vertically, assuming they rotate it the way I do (admittedly I’m left-handed), would find it in a familiar place. (On some other machines, the equivalent button is almost completely hidden when docked, making me wonder what the point of its placement is.) I’m tempted to access the other “charms” the same way (the Surface re-appropriates the function keys for them, but I never used the Windows button to open the Start menu, and the function keys are almost as awkward to reach for as the Windows logo on the Surface itself); the Search charm takes on the function of searching in every single app, including the searching in IE I’m looking for (though while I can change apps on the fly, I’m still not sure I can change search providers – the Wikipedia app feels almost like a beta, so it’s no replacement), making it too useful to be the two-step process it is.

That’s not all: the decision not to have actual folders on the Start screen, only “groups”, is completely mystifying regardless of what it means for Favorites – a backtrack from the hierarchical organization the Start menu has had since Windows 95 and that the iPad exhibits as well, forcing most users to scroll long distances to see many of their tiles and forcing less-than-ideal tile organization in many cases. Apparently Microsoft wants you to zoom out and then tap a group if that’s a problem for you, but once again that’s one step too many; I should at least have the option to start with the screen zoomed out. It also means there’s only one level of “group” and every tile has to be inside one of them, so if you want to look at or use any of the tiles, all the groups are displaying all their tiles as well. In this and other areas, I think the folks at Microsoft would have benefitted from actually using the iPad, as opposed to apparently hearing about it secondhand.

The big innovation of the Surface is supposed to be its “Touch Cover”, but I actually prefer “typing on glass” on an iPad or Android tablet to the “typing on cloth” feel I get from the Touch Cover; at least on the iPad I’m typing on a hard surface that sounds and feels like an actual thing. Presumably the Touch Cover was made as it is (and hyped much more than the only marginally more expensive Type Cover) because folding the Type Cover back behind the Surface leaves you holding a back side full of keyboard keys, which feels weird.

Yet despite all of this, I’ve fallen completely in love with Windows 8. Microsoft did not merely re-appropriate the Windows Phone interface for its regular Windows product. It completely overhauled our notion of what a computer is, merging the tablet and laptop, putting the final nail in the coffin of the desktop computer as we know it, and serving notice to Apple that Microsoft isn’t going to accept a second-class yoke so easily. When it came out, many tech commenters compared the Surface unfavorably to the iPad, in price and in general experience, which was perhaps inevitable but missed the point of what Microsoft was trying to do. In fact, I don’t consider even the Windows RT version of the Surface to be a tablet at all; to me, a tablet has to be in some way connected to a cell-phone network. The Surface is a laptop that happens to have some tablet-like features, and in that sense it’s an absolute game-changer – or at least, what Windows 8 represents is.

There’s a book I’ve heard of but not read called The Innovator’s Dilemma, which attempts to answer a question I’ve long wondered about: why companies facing the advent of an innovation that threatens to undermine their business model so often attempt to kill it rather than adapt into a provider of the new innovation. The short answer seems to be that the new technology is rarely an actual improvement over the old one for customers used to the old one, at least at first, and so any move to embrace the new technology will inevitably alienate existing customers. This seems like a false dichotomy to me; most of the time, there are opportunities for synergy between the two that make the product more enticing to new customers and help transition the old customers to the new paradigm.

For example, I appreciated that Blockbuster at least tried to compete with Netflix with its “Total Access” service, which was advertised as the same DVD-by-mail service as Netflix, at the same price, but with the additional option of returning old DVDs to a Blockbuster store and getting the new one instantly without having to wait for it in the mail; problem was, it didn’t offer streaming like Netflix was already offering, and I believe you had to pick store or mail ahead of time and stick to it. (In my opinion, Blockbuster probably could have done better in competing with Redbox, but it was already dying by the time that became A Thing.) Netflix itself became the impetus for my learning of The Innovator’s Dilemma in the aftermath of its failed attempt to rebrand its DVD-by-mail service as Quikster, because of a Slate article claiming it was Netflix’s attempt to better transition to a business based on streaming of content, but the fact a major reason the Quikster rebrand went over so poorly was the loss of the ability to manage streaming and DVD deliveries together suggests that’s not the whole story.

It wasn’t obvious the iPad was such a threat to the entire PC paradigm, and thus to Microsoft’s hegemony. Microsoft could have continued merrily along its way, with old-style PCs remaining completely separate from tablets, and it wasn’t obvious that status quo couldn’t hold forever. Yet Microsoft apparently saw the iPad as such a threat that it decided to completely destroy the PC as we knew it, effectively undermining its own monopoly and threatening to alienate its hardware (and software) partners, by aiming its new operating system at a new kind of device that would attempt to merge the tablet and the laptop. It embraced the touch-screen ethos so wholeheartedly that a lot of what makes Windows 8 harder to use with a keyboard and mouse seems gratuitous, much like the radical separation between Netflix and Quikster; there isn’t really any reason, in and of itself, why the desktop couldn’t have been outfitted with a traditional Start button. It seems Microsoft is sending a message: this is what Windows is going to be like from now on, we’re only including a desktop at all for the sake of people used to it or without touchscreens, and before long calling the operating system “Windows” will seem a misnomer.

Perhaps Microsoft saw the trajectory the computer business was headed down; you may recall that last year I predicted that the home desktop computer would become a thing of the past, with laptops becoming more and more popular and powerful and with the potential ramifications of computers hooked up to the TV such as the Google TV and Apple TV. Perhaps Microsoft realized that it was fast becoming a maker of operating systems for laptops and that going forward, it would need to optimize its OSes for them – that the computer of 2012 shouldn’t be running essentially the same operating system as in 1995 or even 2001. Perhaps, too, Microsoft saw the iPad as threatening many of the purposes people were still using laptops for: if the iPad couldn’t replace the personal computer now, it was only a matter of time before Apple revamped the Mac interface to be closer to that of the iPad. (Indeed, Microsoft was pushing the notion of a “tablet PC” as early as a decade ago, when Windows XP was fairly new.)

Microsoft gets a bad rap for stealing ideas from other companies (especially Apple) rather than innovating itself, but it would be more accurate to say that it’s made a business out of being Nintendo to other people’s Atari – of being the one to refine other companies’ raw ideas into the forms that allow them to take over the marketplace. Time and again, Microsoft sees some other company’s innovation in a bigger picture and fixes some of the niggling flaws you didn’t even know existed. Windows 95’s interface was jeered as a ripoff of the Mac’s, but it was substantially more user-friendly and included the taskbar, which made multitasking substantially easier than it had been in either Windows or Mac OS. Windows Phone is an even more obvious example of this, and with Windows 8 Microsoft took the same notion of “live tiles” to the iPad’s turf while also beefing it up to the capability of a full-size computer.

In effect, Microsoft has made a business out of taking technologies that people saw as a threat to its business model and not only embracing them, but doing so so wholeheartedly that it often becomes the one to introduce them to the general public – embracing the challenge of The Innovator’s Dilemma like few other companies out there. Such may have been the case with the Internet itself when it introduced Internet Explorer and killed Netscape. Such was the case with cloud computing, which hyperbolic spammy ads (from the Motley Fool, of all places) still tout as the downfall of Microsoft – yet for many people, it was Microsoft itself that introduced them to the notion of cloud computing with its Windows 7 “to the cloud!” ads, and between that, the introduction of Office 365, and how heavily Microsoft has pushed SkyDrive (there’s a reason Windows 8 ships with a SkyDrive tile on the Start screen but no first-party touch-based Explorer app), it’s clear Microsoft is alive and well, if not quite as strong as ever. And with Windows 8, such might also be the case not only with the iPad, but with the late-90s concept of the “network computer“, a computer with next to no hard drive that loaded all its software from the Internet, which Google finally brought to something approaching reality when it unveiled the Chromebook a few years back.

I don’t believe the touchscreen will ever replace the precision of a mouse pointer entirely, and all you have to do to figure out why is to think about the basic ways to manipulate a cell in Excel. To select a cell, you click on it. To select a range of cells, you click on one cell and drag until the cells you want are covered. To select an entire column or row, you click the column letter or row number. To move a cell, click on the border and drag it; to fill information in the cell into adjoining cells, you click on the little control indicator in the corner of the cell border.

Okay, so how do we translate this to a touch environment? Well, obviously you tap where it says click, and tap and drag where it says click and drag. So you tap on a cell to select it, and move your finger from one cell to the other to select a range. But wait, it’s kind of hard to select a border a few pixels wide with a finger. No problem; just tap and drag on an already-selected cell or group of cells. Sure, that makes it harder to start a new range from a currently-selected cell, but that’s an acceptable tradeoff. What’s that? You want tap and drag to move the display of the spreadsheet, not select a range of cells? What about the stuff you use the right mouse button for? And what about the stuff you hold modifier keys (Ctrl, Alt, Shift) for? Ay-yi-yi-yi… The iPad’s Excel knockoff “Numbers” does an admirable job of trying to appropriate all these functions, but there are still some odd holes and quirks. Perhaps the stylus can finally come into its own as a replacement for this sort of precision, though it doesn’t provide the same sort of feedback as a mouse pointer – and I doubt it can replace being able to hover over a hyperlink to see where it leads without clicking it (something Microsoft didn’t try very hard to keep in its new browser).
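The core of the problem above is that one physical gesture – tap and drag – has to stand in for half a dozen mouse actions, so every behavior beyond the default needs some distinguishing context. Here’s a hypothetical sketch of that disambiguation (all the names are illustrative, not any real spreadsheet’s API):

```python
# Hypothetical sketch of touch-gesture disambiguation in a spreadsheet.
# None of these names come from Excel, Numbers, or any real product's API.

def resolve_drag(start_target, modifier=None):
    """Decide what a tap-and-drag should do, given where it started."""
    if start_target == "fill_handle":
        return "fill_adjacent_cells"   # the little corner control indicator
    if start_target == "selected_cell":
        return "move_selection"        # drag an existing selection to move it
    if start_target == "column_header":
        return "select_columns"
    if modifier == "select":
        return "select_range"          # needs an explicit mode switch...
    return "scroll_sheet"              # ...because a bare drag must scroll

# The same physical gesture means three different things:
print(resolve_drag("empty_cell"))             # scroll_sheet
print(resolve_drag("selected_cell"))          # move_selection
print(resolve_drag("empty_cell", "select"))   # select_range
```

Note how the mouse never has this problem: pointing and scrolling are separate hardware (pointer vs. wheel), so no mode switch is needed.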

Still, with Windows 8 Microsoft is firmly leaving behind not only those who are too used to a keyboard and mouse to take to a touchscreen, but also those who have little use for touch because their work involves a large proportion of typing and an inherent need for the precision of the mouse pointer – the sort of hardcore computer user who has long disdained Windows and may still be too attached to their desktop to consider adopting a laptop, a group that could potentially include most Photoshop users. To these people, Microsoft is saying: Tough. We have always striven to make the computer more user-friendly – we were the ones who ultimately brought the computer to the masses. If we haven’t made it crystal clear before, we’re now making it explicit that not everyone has the same needs as uber-nerds, and with Windows 8 we’re making a conscious decision to focus wholly on the consumer market and leave the uber-nerds behind completely. If you can’t face the prospect of a fully touch-based Windows in the future, you might as well move on to Linux now if you haven’t already – unless you’re counting on Apple never deciding to iPad-ize the Mac.

In this, Windows 8 represents a milestone in the history of computing, one so momentous it could be considered the climax of the computer’s evolution from an expensive tool for highly academic settings to a device so simple anyone can use it. The advent of Windows and Mac OS was a revolution in user-friendliness for the computer, introducing a level of abstraction between the internal level of the computer’s programming and the user-interface level through the mediator of the mouse-controlled graphical user interface. Now, almost 30 years later, Windows 8 marks the turning point in a revolution just as massive, one started by the iPhone and iPad that makes user-friendliness the first consideration over any underlying assumptions of the hardware or even the most basic assumptions a programmer might have about a keyboard. Before, everyone was still using a “computer”, the same device a programmer used to create the software running on it, something an engineer might use to solve important problems. Now, computing technology has reached the point that for most people, it has transcended the notion of using a “computer” entirely.

The problem for Microsoft is, the more I play with the Surface, the more I find that Windows 8 itself kind of feels like a raw idea inviting refinement by some other company. Microsoft took a massive risk with Windows 8, and for it to pay off they had to leverage their existing advantages to create something extraordinary, and I’m not convinced they did. (Hopefully Microsoft won’t make us wait for Windows 9 to work out the kinks, because they might not have that long if Apple and Google are already hard at work on their own refinements – though apparently Microsoft is moving Windows to the same incremental-update schedule as Firefox and Chrome, with an update to Windows 8 potentially coming as soon as August.) But even if Microsoft is, for once, the Atari to someone else’s Nintendo, they’ve introduced or at least accelerated the most revolutionary change to our lives since the popularization of the personal computer and Internet themselves.

The Death of Google Reader: Not the End of the World

I have used Google Reader as my source for RSS feeds since adopting Chrome as my main browser four years ago. The reasoning was simple: Chrome, unlike IE and Firefox, didn’t have a built-in RSS reader, and it made sense to use Google’s own service for the purpose.

Today, Google announced that I won’t be able to use Google Reader after July 1. The Internet has reacted as though it were the coming of the Mayan apocalypse. But honestly, I’m not that broken up about it; if anything, I miss the ability to nest folders I had in IE and Firefox, and I’ve complained in the past about Reader’s idiosyncrasies like setting viewing order by feed but viewing of read items globally, when the two are inextricably linked.

Why are people so worked up about Reader? Part of it appears to be that Reader was a social RSS reader, one that allowed users to share feeds and articles, and discover feeds they might enjoy based on what others are following. Google has since moved most of Reader’s social functions to Google+, and theoretically they could have integrated Reader into Google+ pretty easily, potentially gaining a big leg up over at least Facebook in the social media wars – though the lukewarm reaction to the social-function shutdown suggests it wouldn’t have been a sure thing. But Google may not have ever really known what they had with Reader, only that its team knew social and thus were a valuable resource to help them build their more explicit social pursuits. Another reason is Reader’s sheer simplicity and no-frills approach, but it’s hard to get more no-frills than IE’s RSS reader.

On the other hand, Google’s stated reason for shutting down Reader – the declining usage of the service – has introduced me to the notion that RSS is obsolete in the age of Twitter. It’s true that I’ve removed several feeds from Reader when the same feeds became available in Twitter form, if the rest of what the relevant accounts tweeted was sufficiently interesting to me. But there are quite a few where I did not and don’t expect to anytime soon. Twitter lacks nuance: if you follow someone, you’re subjected to every little thing they post, even if they’re the dreaded “what I’m having for breakfast” tweeters. I can’t imagine myself following more than 20 or so people on Twitter, and I’m very suspicious of people who claim to follow hundreds or thousands of accounts, or who feel the need to follow everyone who follows them as though Twitter were just like Facebook. (More on this later this year.) I’ve actually added RSS feeds that weren’t even linked to anywhere, just by knowing the URL structure of popular blogging providers, because I only wanted to follow certain categories of a blog.
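Those URL structures are quite regular, which is what makes this trick possible. A sketch of the two conventions I have in mind (the blog addresses below are made up; the path patterns are the ones WordPress and Blogger actually use):

```python
# Constructing per-category feed URLs from blog-platform URL conventions.
# The blog addresses are made up; the path patterns are the real conventions.

def wordpress_category_feed(blog_url, category_slug):
    # WordPress serves a per-category RSS feed at /category/<slug>/feed/
    return f"{blog_url.rstrip('/')}/category/{category_slug}/feed/"

def blogger_label_feed(blog_url, label):
    # Blogger serves per-label Atom feeds at /feeds/posts/default/-/<label>
    return f"{blog_url.rstrip('/')}/feeds/posts/default/-/{label}"

print(wordpress_category_feed("http://example.wordpress.com", "webcomics"))
# http://example.wordpress.com/category/webcomics/feed/
print(blogger_label_feed("http://example.blogspot.com", "reviews"))
# http://example.blogspot.com/feeds/posts/default/-/reviews
```

Nothing on the blog has to link to these URLs for them to work; the platform generates them automatically for every category or label.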

Ironically, another big leg up RSS has over Twitter is something some people have cited as one of their favorite parts of Reader: the ability to read articles without clicking a link. Not all web sites offer this, which is understandable in an age where hits are king and advertising revenue must be maximized, but I find it incredibly useful to read content from several different web sites, one at a time, without leaving a given page, only clicking through if I feel the need to leave a comment or save a page for later. As it stands, this is impossible on Twitter, and I suspect it’s a big key to the popularity of Tumblr, which could best be described as a “social blogging” platform, or Twitter without the character limit.
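What makes in-reader reading possible is that a full-content feed ships the article body inside the feed entry itself, so a reader never has to fetch the linked page. A minimal illustration with a made-up feed, using only the standard library:

```python
# Why reading-without-clicking works: the article body travels in the feed.
import xml.etree.ElementTree as ET

feed_xml = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item>
    <title>A Post</title>
    <link>http://example.com/a-post</link>
    <description>The entire article text travels in the feed itself,
so an RSS reader can render it without a click-through.</description>
  </item>
</channel></rss>"""

root = ET.fromstring(feed_xml)
for item in root.iter("item"):
    # A full-content feed puts the whole article in <description> (or in a
    # <content:encoded> extension); headline-only feeds put just a teaser here.
    print(item.findtext("title"), "->", item.findtext("description"))
```

A site that wants the ad impression simply truncates `<description>` to a teaser, forcing the click-through – which is exactly the tradeoff described above.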

RSS might eventually become obsolete, but it’s going to take a number of advances in other areas to replace it entirely. On a note more related than you might think, I guess this means I’m setting July 1 as a target date to get a new computer (you wouldn’t believe the problems this one has developed) so I can move my RSS feeds to a Windows 8 app. More on both of these next week.

Some Quick Thoughts on the Future of Webcomics

Last week John Allison, of Scary Go Round and more recently Bad Machinery fame, wrote a blog post expressing his fear that, as more and more webcartoonists took to social networking sites like Tumblr, it would be harder for them to make money off their work, because even if their work went viral, it would get lost in the shuffle of people’s Tumblr feeds and no one would make the connection to them as the creator of that work. As a result, he fears the decline of the sort of “community” that has so characterized webcomics up to this point.

Personally, I think his fears are overblown; for one thing, I find it hard to compare Tumblr cartoonists with other webcartoonists, in part because most blogging platforms that aren’t modified WordPress make poor places to put up webcomics anyway, mostly due to archive management. As such, I suspect most Tumblr cartoonists aren’t very interested in fame and fortune anyway, and are more of the David Morgan-Mar frame of mind, of just wanting to share their creations with the world. In any case, the question is, would, say, Kate Beaton still have attracted a large following if she’d started out on Tumblr instead of LiveJournal? (After all, the former is essentially an evolved version of the latter.) Since most webcomics got their start through word of mouth, I find it hard to believe that the boom in social networking is anything but good for them (though whether it’s good for the quality of content that becomes popular is another matter, if it means the most popular comics essentially become nothing but meme factories).

But Allison’s broader fear is the notion that, for many, “social media ARE the Internet”, making it harder for web sites like his to catch anyone’s notice. I think this too is overblown, but mostly because of a far larger force reshaping the Internet that’s both largely responsible for that notion and that could end up sweeping both visions of the Internet under its feet, one that does pose a tremendous challenge, but ultimately a tremendous opportunity, for webcomics. I’ll have more on that next week.

A very, VERY belated sports graphics roundup.

I probably should have done this before the baseball playoffs started. I definitely should have done this before basketball season started. I certainly should have done this before the NHL lockout ended, and I sure as hell should have done this before the Daytona 500. At this point, I should at least do this before spring training really gets going.

It was only a few months after going logo-only on its baseball graphic that Fox pulled a surprising and disappointing about-face, debuting a football graphic that hearkens back to the very earliest incarnation of the Fox Box. Fox went from being a pioneer of the logo-only approach to the only NFL partner not to use logos at all, at least on the constant version of the graphic, and from having perhaps the best integration of timeout indicators to the worst.

I guess this is part of Fox’s preparation for the launch of Fox Sports 1 – the same graphic also appeared on FX’s and FSN’s college football coverage, introducing more NFL-college consistency than existed last year, and a similar graphic debuted during Fox’s coverage of the NLCS, complete with pitch count (once a pitcher has thrown about 40 or so, that is, which leaves an odd space below the diamond before that). But if that’s the case, it surprises me that FSN’s basketball and especially its hockey coverage continue to use the old graphic. Hockey in particular seems perfectly suited to the new graphic.

Fox’s move looks especially bad in the wake of what CBS trotted out during the Super Bowl. In the past, I might have thought this graphic was a one-time deal because of the Super Bowl, but not only do I expect it to start taking over CBS’ other sports full-time, I’d actually prefer if this was the basis for the graphic used during the NCAA Tournament, instead of the abomination CBS and Turner trotted out last year (and was still present on truTV during the Coaches v. Cancer Classic).

CBS adopts the same font ESPN and several other outlets have been using, and this should make Fox the only major sports entity not to use the two-line box for player information in some sport. Furthermore, CBS’ use of timeout indicators goes from worst in the league to at least on par with the primetime partners.

I trust Turner to improve on last year regardless, judging by their new NBA graphics. It’s a bit bulky (especially in SD widescreen), and I could do without the massive tab showing whether a team is in the bonus, but even that is miles better than what Turner graced us with during the Tournament.

Although maybe Time Warner Cable SportsNet managed to come up with what TNT’s graphics should really look like, with one of the best implementations of not only timeout indicators, but even the bonus indicator, I’ve yet seen.

Now let’s take a quick trip through the league-owned networks, shall we? Sticking with basketball, NBATV seems desperate to suggest their games aren’t just ripped from local broadcast partners with their own graphics slapped on, but the end result plays distracting animations a bit too often, though it is otherwise a more professional graphic. Though I do have to ask what those extensions below the team names are for; unlike NBC with Sunday Night Football, NBATV doesn’t have the excuse that timeout indicators haven’t come along yet.

It’s not quite as professional, though, as the NHL Network, which manages to almost completely hide its ripped-from-the-RSN nature. Unfortunately, I can’t find a video of it…

NFL Network, meanwhile, moved to a more conventional banner, and as always there’s not much I have to say about it.

And of course, we have a new player in the sports network landscape, the Pac-12 Networks, and with it a new graphics package. It’s a serviceable package that you can tell really stresses the Pac-12 logo shape. It’s very good considering the Pac-12 was launching a new network from scratch without a partner.

The basketball version, though, is oddly asymmetric, with the team names always on the left side of the score despite the graphic itself being centered. The result is that the bonus indicator is under the team name for the road team but under the score for the home team. The fact that it’s italicized for the double bonus only adds to the distraction.

I’d be remiss if I didn’t mention NBC’s Olympics graphics, as well as the new graphics introduced for the world feed, which I don’t particularly like. I see what they’re trying to do, but the slant on the flags seems too cutesy for something that’s going to be seen all over the world, the font is surprisingly generic, and graphics for showing scores for head-to-head sports just look ugly:

Meanwhile, NBC decided to make only minor changes to its new post-NBCSN graphics package for the Olympics.

Comcast SportsNet has been updating its own graphics packages to match that of the rest of the NBC Sports family… but the actual score graphics are basically straight template swaps of the old ones, with a slight exception for baseball I’ll get to next time (hint: CSN has adopted pitch count).

Of course, I’d much prefer NBC itself adopt these instead of the bulky numbers they have now, but I’m not feeling how it looks for a box on basketball.

Root Sports has added logos to its Penguins hockey coverage.

Finally, in the middle of last year NESN changed graphics again, to something not entirely unlike ESPN’s baseball graphics. It might be the best graphics package in baseball right now.

Hopefully the next roundup will come in less than half a year’s time!

Also, I could have made an obvious Monty Python reference instead of a forced comic book reference.

(From The Order of the Stick. Click for full-sized Black Lantern creation.)

After teasing us with the resolution of one death prophecy, perhaps it shouldn’t be surprising to see Rich actually resolve the other one.

While it’s a bit earlier than when I thought it might happen, I have less of a problem with Durkon’s death coming here than with Belkar’s. As I mentioned in my last post, what really hammers this home is the fact that Durkon and Malack were so chummy earlier, only to see them turned against one another and for Malack to ultimately be responsible for Durkon’s demise – as well as the fact that this sequence represents Durkon’s biggest time in the spotlight in the entire comic. (On the other hand, it loses some impact because we don’t really know what caused Durkon to leave the rest of the group, considering he was still with them when last we saw them… and rereading that strip now could cause you to tear up all over again.)

I kind of wish I’d been able to post on the previous comic, the one with Durkon’s actual death, which I could have done if I’d started writing just a few hours earlier – not just because that’s the important moment in the sequence, but because Durkon’s eventual resignation and ultimate acceptance effectively harkens back not only to Durkon’s initial reaction to the prophecy, but also to the death of the previous most important character to remain dead at the moment (not counting Xykon), Miko. Malack isn’t really much of a threat to the gates, and things still aren’t looking up for Belkar, but it’s still, all things considered, a rather fitting way for Durkon to go out.

At least, temporarily. Because the comic I’m actually posting on completes the other half of what I thought might happen to Belkar, with Malack raising Durkon as a vampire, and suggesting a rather disturbing origin for Malack’s former “children”. While Malack clearly has some respect for Durkon that goes above and beyond what he had for almost anyone else, and we’ve already seen an image of a vampified OOTS member, even in black, that keeps some semblance of their former personality, the consensus on the forums seems to be that the Durkon we knew isn’t there anymore, and given his initial reaction to being raised it’s hard to disagree… not to mention the other prophecy surrounding Durkon from one of the prequel books.

It’s hard for me to take this as the fulfillment of that prophecy, though; this would seem to be a short-term threat for the OOTS to have to deal with, and it’s hard for me to see Durkon going very far in his current state. At the very least, I’d imagine we’d have to reach the next gate as soon as the next book for this to tie in. Regardless, the OOTS is now finding itself in very dangerous territory… and given the circumstances, Xykon and company would seem to be overdue to show up.

Now THAT’S what I call Lawful Evil.

(From The Order of the Stick. Click for full-sized planning ahead.)

Well, this is hardly the first time I’ve jumped the gun on a comic – after Rich teased us with the prospect of Belkar’s death, along came Durkon to save the day, at least temporarily (and potentially setting up his own demise). The result has perhaps been Durkon’s biggest spotlight moment in the entire comic; he had enough of a story arc in the first prequel book to get the cover, had a side-plot in the first book, and had a chance to shine in battle in the third, but none of those have been as effective at pulling Durkon out of his status as the OOTS’ “forgotten” member as this sequence.

To be fair, the groundwork for this was laid much earlier in the book, with how chummy Durkon and Malack were, but I may have missed the other important development (ultimately the only one) from the last comic I posted on: Malack’s status as a vampire. After Durkon saves Belkar, the two of them have a heart-to-heart discussion on how this revelation changes their relationship and Durkon’s resulting feelings of betrayal, in a brief sequence more than a little reminiscent of Enor and Gannji, before ultimately deciding their differences are now irreconcilable and turning their spells on each other.

This allows Durkon to show off his combat skills for an extended period against a real threat in a sense we’ve rarely if ever seen in the comic before, forcing Malack to retreat and use more stealthy tactics. That leads to this strip, where Durkon, low on options, begins taunting Malack verbally in an attempt to sniff out where he is, at which point Malack starts going on about his long-term plan to outlive his former adventuring cohorts, hoping to inherit a unified empire from the three empires they control.

Ultimately, it stands as a marked contrast to Redcloak’s stance on the status of undead. 45 comics ago, Redcloak told Tsukiko that all undead, no matter how powerful or seemingly free-willed, are ultimately tools for the living, claiming that as much as Xykon may appear to control Redcloak, it is really Redcloak who controls Xykon, however subtly. If that were the case, Durkon’s original query would be more spot-on than he thinks; but when Durkon hears Malack’s plan, he sees it as, effectively, the relationship between Redcloak and Xykon with the roles of undead and living reversed, which would make Malack a towering counterargument to Redcloak’s conviction. Instead, Malack and Tarquin’s relationship is contrasted with Redcloak and Xykon’s only by the genuine friendship between them and how open they are with their planning, miscellaneous disputes on tactics (or Tarquin’s own vision for the end of his reign) aside.

But Malack’s final answer is ultimately a somewhat sublime response to Redcloak’s position: “Living or dead, we are all of us marching to our orders – you no less than I, Durkon. It does not matter whence these orders come, be it man or god. Our place is an obedient slave to those who command us. Through service, we are rewarded. That is the true natural order.” Considering Redcloak’s own personal story arc of loyalty to the Dark One, those words must hit especially hard for him were he to hear them. Of course, they take on a different meaning in a comic where the gods are known quantities that interfere directly in the lives of mortals, but even then Malack’s words are an interesting lens to view the whole comic through.

To take some of the candidates for the “nine sides” I haven’t covered already: The OOTS marches to the beat of Roy’s drum, who initially put together an adventuring party to fulfill his father’s Blood Oath, which Eugene put him up to because the powers that be won’t let him into his ultimate reward. Malack cites Nale as a “fool” who “resists” this “natural order”, but he might not even be successful at it, ultimately controlled without his suspicion by Sabine as the IFCC’s representative. There’s quite a bit of evidence that the Order of the Scribble were duped, willingly or unwillingly, into doing the gods’ bidding, and the Sapphire Guard was so hamstrung by their oath that it ultimately hindered the planet’s fate (though Shojo’s attempt to “resist” ended with Miko’s sword through his body, as Rich points out in the commentary for that book). The whole comic could be seen as a great drama staged by the gods through their creation of the rifts (and possibly other interference in the lives of mortals); indeed, Malack’s words might hint at future comic developments, such as the real reason the Order of the Scribble broke up and the nature of the “planet within a planet”. (Considering the comic seems most sympathetic to its Chaotic Good characters, I doubt Rich actually agrees with Malack, but whatever.)

Ultimately, that one penultimate panel may be one of the more critical ones in the comic. I’ve spoken before about OOTS‘ literary merit, and it’s possible that this comic may be critical to a literary appreciation of it, at a time when I’ve doubted Rich’s continuing storytelling ability given the ups and downs of this book. (And how long it’s been running; do you realize that the previous books were 120, 180, 184, and 188 online pages long… and this one has crossed the 200 mark with no end in sight?) That it would come between two clerics, whose entire job revolves around service to their god, and would serve as such a strong contrast to the position of another cleric, only makes it all the more fitting.