A Gastric Band Approach to Desktop Clutter

Matt Birchler recently did a nice YouTube video praising the open source tile manager AeroSpace. There is a lot to love about AeroSpace right from the get-go. While I definitely wouldn’t call it Mac-assed, since AeroSpace is for very advanced users and developers who are comfortable with text configuration files, my sense is that AeroSpace is made by people who care deeply about the Mac. Part of this is how deeply Nikita Bobko and others supporting the project have considered how AeroSpace works. Unlike Stage Manager, AeroSpace has a clearly defined lexicon that is used consistently across copious documentation. Just as Stage Manager tries to ease window management with more automatic windowing, AeroSpace tries to ease tile management with more automatic tiling. Users don’t need to manually arrange windows into a grid with AeroSpace. It just happens. From there, users can configure and tweak how windows are automatically tiled to their heart’s content.

The idea of composing sets of apps into custom workspaces is particularly appealing to me. I find super apps (apps that are effectively multiple apps in one) mildly off-putting. Most IDEs are super apps for everything related to software development. They contain version control managers, terminals, text editors, and so on. While many IDEs do all of these things reasonably well, their super app paradigm is effectively a usability barrier to using other apps that might otherwise be better. Instead of using Visual Studio Code, for example, I could imagine a world where I have a coding workspace consisting of BBEdit, Terminal, and Git Tower. The added benefit of this sort of multi-app workspace is that I could still use the individual apps a la carte or mix in other apps as needed.

While I’m sure many people started using tile managers to build custom workspaces, I suspect many more turned to them as a way to address desktop clutter. I’ve written a couple of times already about the modern day problem of desktop clutter. Thanks to an abundance of resources (memory, processor speed, etc…), people can open more and more stuff (apps, windows, tabs) without slowing their computer. Because their computer never slows, there is no intrinsic mechanism that forces users to ever close anything, and so all that stuff stays open until they get overwhelmed. Tile managers reduce desktop clutter by discouraging or preventing overlapping windows, which physically constrains the number of windows that can be visibly open at a given moment.

Maximally constrained user interfaces are impossible to clutter. Lots of people were drawn to the original iPad specifically because it really was just a big iPhone and they loved their iPhone in part because it was too simple to mess up. I get it. I prefer small hotel rooms when traveling solo because larger ones just give me more space to lose track of my stuff, but small hotel rooms are not without trade-offs. The tiny room at the boutique hotel I stay at when visiting my employer’s New York office isn’t really suitable for anything beyond sleeping and hygiene. Even working on a laptop for any extended period of time would be a challenge. A hotel room that is too small to do work is great until you want to do work. An iPad that doesn’t feel like a computer is also great until you want to do computer-y things with it.

Desktop tile managers are definitely not maximally constrained like the original iPad nor are they even anywhere near as constrained as previous versions of iPadOS with split view and slide over1, but they are, by their very nature, constrained. Beyond physically constraining the number of windows visible at a given time, tile managers also constrain the shapes of those windows. I wrote2 about this when reviewing Stage Manager on macOS Ventura:

Personally, I’ve found gridded UIs to be lacking ever since I first saw Windows 1.0. When using grids, apps’ sizes, and more importantly their aspect ratios, are dictated by other windows that are on screen. Say you want to work across a spreadsheet, word processor, and a browser. Not only do all of these apps benefit from a large amount of screen real estate, both the word processor and browser need a minimum amount of size just to be entirely visible. In a gridded UI, some or all apps would have to compromise on space and you would have to scroll constantly within each app to see the entirety of what’s being worked on.

People who use tile managers undoubtedly have strategies for mitigating this inelegance. Tiles don’t have to be even subdivisions of a given display so you can, for example, adjust the width of a word processing tile to that of its document. AeroSpace in particular seems to offer lots of tools for technical users to hone their tiled workspaces. That said, the very nature of tiling according to the size of the display limits what adjustments are even possible.

Part of me feels bad that I used AeroSpace as the jumping off point to argue against tile managers. Its makers clearly have put a praiseworthy amount of thought and care into how it works, but it was seeing such a well considered tile manager that solidified my thinking. AeroSpace is the most appealing tile manager I’ve seen on the Mac, and while I’m certain there are plenty of workflows where AeroSpace shines, being physically constrained by an always-on tile manager that dictates the number and shape of open windows would feel like a gastric band to me. Rather than wholly automatic systems like AeroSpace or Stage Manager, the best solution to desktop clutter for me remains regularly closing the stuff I open, with only a little automation to help.


  1. Some have lamented that iPadOS 26’s new windows-based multitasking is too computer-y and while maybe Apple could have somehow continued to support the old-style split-screen and slide over alongside it, I don’t see how anyone could make iPadOS meaningfully less constrained using only split-screen and slide over. 
  2. I used the term “gridded UIs” in my Stage Manager review to encompass not just tile managers, but also iPadOS style split screens. In hindsight, “tile manager” is a better term that would have worked just as well. 
Introducing MacMoji Stickers

Disguised Face MacMoji

In olden days computers had just two emotions. They either happily worked as expected or were too sad to boot. Computers today have a range of emotions, but tragically have no way to express them. That’s why our scientists developed MacMoji using the latest in sticker technology, so your favorite computers can finally convey exactly how they feel.

As a parent working a full-time job, I regularly seek out creative outlets that I can manage in my limited spare time. MacMoji started out as one such outlet. The idea of combining more modern emoji with the classic startup icon was too fun to resist. I could gradually illustrate one or two, share them on the Relay Member Discord, and iterate as needed based on feedback. At some point, a Relay member suggested I turn these illustrations into an Apple Messages sticker pack. The idea was such a no-brainer that I did just that…eventually. You can now buy the sticker pack for just $0.99 over at the App Store. You’ll find the F.A.Q. over here, which addresses why these stickers aren’t available in the EU. My thanks to the Relay member community for their feedback and encouragement in creating these stickers.

Thank Fucking God Steve Jobs Took Over the Macintosh Project

There are two arguments some use to try and diminish Steve Jobs’ contribution to the Macintosh, and by extension all of desktop computing. The first and by far most common is to say that Jobs merely copied what he saw at Xerox PARC. While there is absolutely no doubt both the Macintosh and NeXT grew out of what Steve saw (he even said as much), the system at PARC was akin to an automobile before the Model T. It was unrefined, complicated, and not user friendly. This is why Microsoft copied mostly from the Macintosh (and later NeXTSTEP) rather than anything from Xerox to make Windows.

The second and more obscure argument is that Jobs merely took over the Macintosh project from Jef Raskin, the suggestion being that Raskin invented the computer and that Jobs swooped in to take credit at the last second. What that argument omits is that Raskin’s vision for the Macintosh was very different than what shipped. How different? Raskin didn’t want a mouse. Here’s Andy Hertzfeld over at the venerable Folklore.org:

He was dead set against the mouse as well, preferring dedicated meta-keys to do the pointing. He became increasingly alienated from the team, eventually leaving entirely in the summer of 1981, when we were still just getting started, and the final product [utilized] very few of the ideas in the Book of Macintosh.

We know this is true not just because of Hertzfeld’s own account, but also because Raskin did eventually get to release his computer in 1987, the Canon Cat. Sure enough, it indeed didn’t use a mouse and instead relied on what were called “leap keys”. Cameron Kaiser recently went into detail about how the Cat worked.

Getting around with the Cat requires knowing which keys do what, though once you’ve learned that, they never change. To enter text, just type. There are no cursor keys and no mouse; all motion is by leaping—that is, holding down either LEAP key and typing something to search for. Single taps of either LEAP key “creep” you forward or back by a single character.

Special control sequences are executed by holding down USE FRONT and pressing one of the keys marked with a blue function (like we did for the setup menu). The most important of these is USE FRONT-HELP (the N key), which explains errors when the Cat “beeps” (here, flashes its screen), or if you release the N key but keep USE FRONT down, you can press another key to find out what it does.

Needless to say, the Cat wasn’t the huge success Raskin hoped it would be.

Eschewing The Default of Desktop Clutter

The default of any physical space is clutter, in that keeping things tidy requires persistent concerted effort. People who succeed at sustained tidiness rely on systems, habits, and routines to reduce that effort. Disposing of a single delivery box, for example, is much easier when a single process is defined for all delivery boxes. Even if the physical effort of breaking down and moving the box is largely the same, the mental effort is reduced to nothing because the decision of what to do with the box has already been made. In that sense, reducing cognitive effort ties directly to reducing physical clutter, which in turn reduces cognitive clutter.

Digital spaces are no different than physical ones. Their default is also clutter. Just look at most people’s photo and music libraries. The difference is that digital clutter is much easier to ignore. You can try to ignore the delivery boxes stacking up around the foyer, but their growing hindrance to day-to-day tasks is obvious. Digital clutter doesn’t take up physical space so most of it can remain out of sight and out of mind. You only deal with a cluttered music library on the occasion you make a playlist. There is however digital clutter that does hinder people’s day-to-day — their desktops. Windows (and tabs) can very easily stack up like empty boxes in the foyer to the point where they constantly get in the way. I wrote about this when reviewing Stage Manager in macOS Ventura.

Windowed interfaces, like those found in macOS and Microsoft Windows have historically been manual. The user opens, arranges, closes, minimizes and hides windows in whatever manner that suits their needs. When Mac OS and Windows came of age in the 80s and 90s, computers were only powerful enough to do a few things at once. These limited resources meant a given task typically involved launching a few apps, manually managing their small number of windows, then closing everything before starting the next task… I find managing a small number of windows more satisfying than burdensome. Users on today’s computers can easily amass dozens of windows from a variety of apps. Furthermore, these apps and windows persist, even between reboots. There is no intrinsic impetus that forces users to quit latent apps or close latent windows. Manual windowed interfaces became cognitively burdensome when faced with unlimited persistent windows found in modern desktop computers. While some still find them delightful, more and more people find desktop computers harder and more annoying.

Stage Manager on macOS tries to solve the problem by automating which windows are visible at a given moment. Even though my review of Stage Manager was on the positive side, it was ultimately too finicky for me. I love the concept of sets, just not enough to manually maintain them. It’s the same problem I have with Spaces. Lots of people use Stage Manager and Spaces as tools to organize and streamline their workspaces, but for me, these sorts of virtual desktops simply become mechanisms to have more windows. They facilitate clutter by hiding it rather than reducing it.

As it turns out, the best solution to window clutter for me is not some extra layer of window management. It’s fewer windows. I even said as much in that very quote from a review I wrote three years ago.

I find managing a small number of windows more satisfying than burdensome.

And yet it wasn’t until this summer that I actually changed my habits, so what took so long?

As a middle-aged man who works a full-time job and is actively involved with parenting… well, let’s just say I am less adept at identifying when and how I should change my habits. After all, a lot of my habits at this point are exactly the kind that help me minimize effort. Beyond that though, the only option built into macOS for quickly quitting out of apps is to log off with “Reopen windows when logging back in” unchecked, which doesn’t quite work the way I want it to. There are a handful of apps I always want running and don’t want to have to re-open whenever I resume using the computer. These apps could be added to login items, but I also dislike windowed apps launching automatically. They can be slow, demand extra attention through various prompts, and steal focus. Yuck. What I really wanted was to quit out of all but a handful of apps before locking the screen so that I could start instantly and with a clean slate the next time I use the Mac.

Once again, AppleScript to the rescue1. Using AppleScript, I could set a whitelist of apps to keep open, and then quit out of everything else2. Shortcuts then let me chain this script with other actions to confirm my intentions before locking the screen. Finally, I was able to add the shortcut to my Stream Deck, so now at the end of my work day, I push the “Off Duty” button. Even when I have to manually address apps with unsaved documents, quitting apps in one fell swoop still greatly reduces decision-making because I no longer have to individually consider whether to quit a given app. Every app is going to be quit the same as the rest, so all I have to decide is where to save the open documents, which in itself compels a good end-of-workday habit that I should have been doing already. When I start work the next day, the previous day’s work is saved and my Mac is effectively reset with just a handful of apps and windows open.
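For the curious, here is a stripped-down sketch of the whitelist-and-quit idea rather than my full script. The app names in the whitelist are placeholders, and each app is asked to quit politely, so anything with unsaved documents can still prompt.

set keepOpen to {"Finder", "Mail", "Messages", "Music"} -- placeholder whitelist; swap in your own always-on apps

tell application "System Events"
	-- Only consider regular, windowed apps; background-only processes are left alone.
	set runningApps to name of every application process whose background only is false
end tell

repeat with appName in runningApps
	set theName to contents of appName
	if theName is not in keepOpen then
		try
			-- Ask nicely; apps with unsaved changes will still show their save dialogs.
			tell application theName to quit
		end try
	end if
end repeat

Wrapped in a Run AppleScript action, a script along these lines can then be chained in Shortcuts with the confirmation prompt and lock-screen step described above.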

Having used this automation throughout the summer, I can now say with confidence that managing windows and tabs in macOS is once again truly satisfying. Navigating between apps doesn’t feel like work anymore and features that never appealed to me with dozens of windows and tabs now make sense. I can find that one window using Mission Control. I actually use command-[number] to jump to a specific tab in Safari! By reducing the cognitive effort involved with quitting apps, I have reduced desktop clutter, which in turn has reduced cognitive clutter to the point where my Mac is once again a tool that helps me focus because it’s no longer like a foyer full of boxes I have to carefully sift through, but an extension of what is currently on my mind.


  1. This is ostensibly also doable in Shortcuts using the Find Apps and Quit actions, but as with so many other things related to Shortcuts, I never could get it to work. 
  2. In the first version of the script, “everything else” included the Finder because it had not occurred to me that was something I could quit. 
Hollywood UI

Many who have been following the rollout of Apple’s new Liquid Glass theme accuse Alan Dye and his team of designing user interfaces that look good in marketing materials at the expense of usability. That’s a fair criticism, but I don’t think “marketing” is the right way to frame it. In my mind, marketing interfaces are a separate issue. They are designed to push users to do something they wouldn’t otherwise. Liquid Glass just ham-fistedly tries to look cool.

Looking cool isn’t a bad priority for an interface and it’s certainly a way better priority than marketing. Interfaces built for marketing necessarily come at the expense of usability because their priorities typically come in conflict with those of users. Streaming services are the best example of this, where promoted shows are given priority over those already in progress. Cool-looking user interfaces, on the other hand, aren’t inherently at odds with users. iPhone OS looked cool and was immensely usable, and I would argue Aqua was still very usable even before the transparency and pinstripes were rightfully toned down.

“Marketing UI” is an unfair term for something like Liquid Glass. Trying to look cool at the expense of usability is bad, but it’s way less egregious than actively interfering with users. A better term, in my mind, is “Hollywood UI”. Hollywood has long given computers made-up user interfaces, some of them very cool, others not so much. Regardless of their coolness, Hollywood UIs can look like anything because they are ultimately just another prop or set piece. They don’t actually have to work.

That Liquid Glass looks cool in marketing and elsewhere isn’t really the problem. iPhone OS and Aqua looked good too. The problem is that Alan Dye and his team seem more interested in making interfaces that merely look good rather than those that can survive contact with the real world, probably because designing props is a helluva lot easier and more fun than designing tools that actually work.

Windows 11’s Ongoing Effort to Modernize Windows

Seems like Microsoft is still migrating features from the old Windows Control Panel to its newer Settings app. Here’s Sean Hollister, at The Verge:

But the Control Panel still can’t die. The latest features to migrate, as of today’s Technical Preview: clock settings; time servers; formatting for time, number, and currency; UTF-8 language support toggle, keyboard character repeat delay, and cursor blink rate.

While it is indeed hilarious that Microsoft is still migrating stuff out of Control Panel to Settings over a decade later, my gut sense is that Windows 11 has had to pay down technical debts the same way people have to gradually pay down their financial ones, in installments that span multiple years.

The Windows 11 PC that I use for gaming and Plex isn’t my daily driver. Because I don’t use Windows for either work or other personal needs, take what you’re about to read with a huge grain of salt. That said, and in my limited experience, Windows 11 has been gradually getting noticeably better, and dare I say, nicer1? Windows Settings is nicer than Control Panel. The new right-click menu is nicer than the old one. Part of me wonders why Microsoft has been so gradual with these rewrites rather than just releasing them in a more finished state, but then again modernizing decades-old components is never easy, especially when you’re trying to preserve software compatibility and satisfy entrenched IT practices.

Taking such a long time to revamp these components does merit some teasing and probably some criticism, but I think keeping at it for over a decade shows a resolve that is also worthy of praise. The effort gives me confidence that the people in charge in Redmond truly care about improving the user experience of their desktop OS. I wish I could say the same about the people in charge in Cupertino.


  1. Don’t get me wrong. I still strongly prefer macOS and have many complaints about Windows, the biggest and longest standing of which is that the OS remains completely plagued with crapware. Decluttering your computer shouldn’t be standard practice, and yet with Windows, it still is. 
The Illusion of Thought

New things tend to bring out extreme opinions and AI is no different. Some liken it to the second coming, while others damn it as the antichrist. It’s early days yet, but to me AI feels more like Web 2.0 than Web 3.0. Both were maximally hyped by press and marketing departments, but Web 3.0 always felt like what you’d get if a Ponzi scheme and vaporware had a baby. Web 2.0 was different. There was a there there. Google Maps, Flickr, and Facebook were all real things. Web 2.0 marked the very real and immensely tangible beginning of the web as a viable platform. While there has undoubtedly been an unrelenting torrent of heinous marketing with regard to AI, there is also very clearly a there there. Even without the time to truly delve into the plethora of tools and techniques currently available, the likes of ChatGPT and Cursor have already been helpful in my work. My very limited experience with LLMs and the like gives me optimism that AI will bring a new generation of computerized tools that will help people build, create, and think. What worries me though is when I see people use AI, not as a tool to help them do those things, but to do those things for them. The best example of this is how LLMs are already being used to write.

I have been fortunate enough to have a now decades-long career as a software engineer. As one might expect, my early success came from solving problems mostly through coding. What has really helped me thrive in my more senior roles of late, however, is writing. Writing regularly on this blog and elsewhere for over a decade has greatly improved my ability to distill vague ideas into cogent points. For me, practicing writing has been like practicing a strict form of thinking. John Gruber just recently talked about this connection between writing and thinking while guesting on Cortex:

But it’s that writing is thinking. And to truly have thought about something, I need to write it. I need to write it in full sentences, in a narrative sense, and only then have I truly known that I’ve thought about it.

Like John, I find that writing makes me truly think about a subject by leading me to consider its various aspects and then forcing me to organize all of those ideas into coherent prose. This process also forces me to organize these same ideas in my brain. While I agree with John that speaking extemporaneously can’t compare to the very thorough consideration involved with writing, I would argue that by making me a better thinker, the practice of writing has made me a better speaker1.

The idea that writing improves thinking isn’t unique to me. I suspect that’s why the liberal arts are filled with writing. It’s not so much about finding the next great academic, but about creating a whole class of better thinkers. That’s ostensibly why a college degree is required for the jobs that ultimately pay people to think.

It’s this connection between writing and thinking that makes me worried about people using LLMs to write. Now not all writing is the same and I would argue that most of the writing people do, even professionally, is functionally basic communication. I’m also not all that concerned with AI tools that assist writing. An LLM that autocompletes or even rewords sentences doesn’t eliminate the process of writing. Where I see problems is when LLMs are used to do the actual writing in a way that precludes users from having to think.

Let’s assume two scenarios involving someone being asked to provide requirements for a given project. In scenario one, the person writes the requirements in five bullets, but is worried about the optics of such a short response. In scenario two, the person doesn’t yet know the requirements, but still wants to provide a response just to not be empty-handed. In both scenarios, each person uses an LLM to generate a 1,000-word specification document that they send to their colleagues. Both not only wasted their colleagues’ time by having them read 1,000 words of AI slop2, they also created an illusion of thought that may not have been needed in the case of person one or didn’t even happen in the case of person two.

And then there’s a third scenario: the person who has no intention of ever really thinking about any project and uses AI solely to keep up appearances. You might think that’s cynical or absurd, but I’ll bet you dimes to dollars this is already happening. There are many, many situations in jobs that pay people to think where avoiding thinking can be a successful strategy. That’s because thinking through ideas is the kind of time-consuming, indeterminate, hard-to-measure, and even harder-to-justify task, even when it’s absolutely necessary. Being the one who takes the time to think through something can easily become a “heads I win/tails you lose” proposition. Ideas that can’t be worked out can end up with a stink of failure while the best and most thought-through ones can seem like common sense in hindsight3. Add to the risk/reward equation that the actual act of thinking is largely invisible. It’s the resulting documents that are seen at the end of the day, and how many bosses pay that close attention to their contents? Of those who do, how many could discern which were produced by an LLM? Many never had the time to really think about the subject, and why should they? That’s what they paid the third person who wrote the document to do, a person, by the way, they already believe is an ace for being able to produce such documents on short notice.

I am still optimistic about AI the same way I have been optimistic about other major developments in computers, but those other developments never gave anyone the impression that computers could actually think. Before AI, no one looked at an image or document and questioned whether a human was involved. No one looked at Photoshop and Google Docs as an alternative to thinking. LLMs of today can already give the illusion of human thought. The idea of our attention being flooded with AI-generated slop alone is worrisome, but what makes me way more worried is how often individuals will have the computer create an illusion of thought in lieu of actually thinking.


  1. And I would further argue that John’s decades of writing is precisely why he is such a damn good podcaster (even if he doesn’t like to admit being a podcaster.) 
  2. Undoubtedly, some of those meant to read that 1,000-word spec sheet will also just run it through an LLM to summarize it back into five bullet points. 
  3. Anyone who follows Apple should be very familiar with this particular phenomenon. 
Realizing When It’s Actually Not Fine

On the most recent The Talk Show with Jason Snell, the conversation naturally (and rightly) turned to keyboards. Tech nerds with mechanical keyboards have become a bit of a joke these days and while some aspects of the market certainly merit ribbing, I think that stereotype is mostly unfair because keyboards are tools. Here is how Jason Snell put it:

…Don’t feel bad about it, because this is what we do. These are like the tools of our trade. This is your axe, this is your electric guitar, this is your screwdriver, this is your RAM truck, whatever it is. As a writer the keyboard, is as silly as it seems, totally matters because that’s our tool of our trade, is the keyboard.

Jason and John are indeed professional writers, but I would argue vehemently that keyboards are a primary and daily tool for anyone who writes. That includes coders for sure, but it also includes practically every modern day desk job. Teasing an office worker for having a preference in keyboards is just as petty as teasing a contractor for their preference in trucks, even when that truck is as much a luxury vehicle as a work tool.

The idea that office workers should care about their tools just as much as any other professional isn’t limited to keyboards, a point which both John and Jason naturally transition to. Here’s Jason again:

Look, if you can’t afford it, that’s fine. It’s totally understandable, but I think a lot of people end up suffering with crappy things to do their jobs because they’re like “no, no, it’s fine”, and sometimes the trick is realizing when it’s actually not fine and this is your profession. When I set up my own business, one of the first things I learned is it’s not “it’s a business expense” means “it’s free”, but “it’s a business expense” means “this is a tool I use to do my job”. I should probably pay for it, and that’s okay.

I am fortunate enough to work remote and thus have been able to tailor my workspace to my taste and comfort. I may not need a Studio Display, an Aeron stool, or a standing desk, but they all make doing my job easier by making it more enjoyable, and it drives me a little nuts whenever I go into the office to see most of my colleagues settle for the tools that were chosen for them because that’s whatever the company could buy in bulk at a good price. That’s not to say I expect my employer to provide everyone with $1500 displays, but rather that I am disappointed that none of my colleagues eschew the $100 display provided in favor of something that would make their job easier. Sure, most of my colleagues probably don’t care about their display, but I bet at least some do and accept what they know is a crappy display simply because “it’s fine.”

There are very valid reasons for sticking with the status quo, namely “because I can’t afford anything better” or “I truly don’t value this enough to pay a premium for it”, but there are also plenty of excuses that don’t really hold up to scrutiny. Here are a few examples:

  • “It’s fine because only suckers buy their own equipment”, except you get to keep your equipment.
  • “It’s fine because I am tough enough to not need nice things”, but that doesn’t stop you from buying nice things outside of your job.
  • “I don’t want to be the guy with the weird keyboard”, which just makes you the even weirder guy who secretly wants a better keyboard.
  • “It’s fine because my job is not my life”, outside of that part of your life where you spend most of your waking hours doing your job.

The silliness of that last excuse ties into a precept that has been invaluable to my own spending habits. Where reasonable, allow yourself to spend more in the areas where you spend lots of time. For example, get the best set of knives you can reasonably afford if you cook regularly, and just get a discount set if you don’t1. “It’s fine” usually really is fine for the things that only get occasional use. Just don’t automatically settle for “it’s fine” for the stuff you are going to use all the time. Once you concede it’s reasonable to invest a little more in the kitchen where you spend a handful of hours each week, then investing even more in the office where you spend 40+ hours each week becomes a no-brainer… even if that means buying a better keyboard.


  1. Also avoid buying expensive stuff you know you’ll rarely use. This is why I don’t own an Apple Vision Pro. As much as I might want one, I know it’ll mostly sit in a drawer. 
iPadOS 95

We are coming up on the 30th anniversary of Windows 95. Windows 95 still didn’t make PCs as user friendly as Macintoshes of the day, but it was the first version of Windows that was good enough. Previous versions of the OS were terrible, and not just terrible in hindsight. Windows 1.0 all the way to Windows 3.1 were each terrible in their day. There is no better evidence of this than Windows 95 itself, an upgrade literally celebrated on the day of its release specifically because it heralded a clean break from a decade of mediocre releases. Here’s what I concluded on its 25th anniversary.

While modern day macOS has its roots in the original Macintosh System and NeXTSTEP, modern day Windows has its roots in Windows 95. Everything prior has largely been thrown away because even Microsoft knew it was garbage.

This may seem like mean spirited hyperbole to anyone who wasn’t around or aware during that era, but you don’t have to take my word for it. Go play around with Windows 3.1 right now on PCjs. It’s not good. Early versions of Windows are neat as a retro tech curiosity in large part because they are so unlike the modern versions used today. Now go check out Windows 95. While rudimentary, it’s still easily recognizable because every subsequent version of Windows was an iteration on what Windows 95 started.

iPad enthusiasts are excited by iPadOS 26 the same way PC enthusiasts were excited about Windows 95, and for mostly the same reason. While iPadOS itself has been fine, its multitasking has been terrible. Like Windows 3.1, iPadOS’s various mechanisms for multitasking were all manageable, even useful in some cases, but they were never good. Like Windows 95, multitasking in iPadOS 26 is celebrated because it’s a fundamental departure from its predecessors. The similarities don’t stop there. Both Windows 95’s much improved usability and iPadOS 26’s much improved multitasking largely come from copying the Mac. Windows 95 embraced the Mac’s desktop metaphor with Mac-like window management and a more Finder-like Windows Explorer. iPadOS 26 embraces the Mac’s desktop metaphor, with Mac-like window management, and a more Finder-like Files app… oh and also there’s a menu bar.

Why Microsoft took a decade to decide to just copy the Macintosh is obvious — severe technical limitations of 80s-era PC hardware and legal threats from Apple. Why Apple took a decade to just copy macOS for iPadOS multitasking is more of a mystery. iPads in 2015 were much more constrained hardware-wise, but I think that’s only part of the story. There is little doubt that Apple sees the iPad as an anti-computer of sorts: a device that does the stuff most people use computers for, but without the hassle and cruft. I wonder how much of Apple’s years-long consternation about adding macOS-style UX to iPadOS stems from a fear that the desktop metaphor itself was a cause of that undesired cruft. How can the iPad be the anti-computer if it looks just like a computer? Beyond embracing Mac-like features, I think the philosophical departure iPadOS 26 represents is a belief that traditional computers are complex for many reasons that have nothing to do with their interfaces. The iPad can have proper windowing, file management, a menu bar, etc… and still be the anti-computer so long as it continues to strictly manage or eliminate the undesired cruft of legacy computers. I love that large background tasks involve live activities and have to be user initiated. I’m sure some wish apps could run in the background a la macOS and Windows, but I would argue arbitrary backgrounding is exactly the sort of thing that makes those platforms complicated in ways that aren’t worth the squeeze for most users.

For the first time ever, multitasking in iPadOS 26 feels like something Apple can iterate on for decades, the same way they’ve done with macOS since System 1 and the same way Microsoft’s done since Windows 95. Just as nerds like me today look back at early versions of Windows as a weird era before Microsoft finally figured out desktop computing, future nerds will one day look back at early versions of Split-Screen, Slide Over, and Stage Manager as part of that weird era before Apple finally figured out iPad multitasking.

Productive, Portable, but Not Touchable

Tom Warren wrote his Microsoft Surface Pro 12-inch review for The Verge (emphasis added):

This Surface Pro redesign also presents some of the best bits of rival tablets into a piece of hardware that feels a lot more tablet-like than Microsoft has ever created. I’ve been using this Surface Pro in tablet mode more often as a result, simply because of the smaller size and the ease of switching between laptop and tablet modes. The operating system and apps still feel like more of a laptop, though. Microsoft also sells the Surface Pro without a keyboard, as if it’s a pure tablet, but as always, you really need to purchase that $150 keyboard to unlock the best experience.

The Surface Pro 12 sounds like the most compelling tablet-like device Microsoft has ever released, but I’ve long argued that a UX can’t simultaneously be touch-friendly and information-rich enough for multitasking on a tablet-sized screen. As much as iPadOS has favored touchability over information richness, Windows has done the opposite. There have been a plethora of touch-enabled devices that run Windows, but none have been great tablet experiences specifically because of Windows.

I am deeply interested to see the rumored iPadOS multitasking improvements Apple is expected to announce at WWDC next week. My gut tells me that we’ll get another dud unless iPadOS 26 somehow becomes much more information-rich, and I don’t see how that happens without sacrificing touchability by shrinking its user interface, presumably when a keyboard and touchpad are connected.