Eschewing The Default of Desktop Clutter

The default of any physical space is clutter, in that keeping things tidy requires persistent, concerted effort. People who succeed at sustained tidiness rely on systems, habits, and routines to reduce that effort. Disposing of a single delivery box, for example, is much easier when a single process is defined for all delivery boxes. Even if the physical effort of breaking down and moving the box is largely the same, the mental effort is reduced to nothing because the decision of what to do with the box has already been made. In that sense, reducing cognitive effort ties directly to reducing physical clutter, which in turn reduces cognitive clutter.

Digital spaces are no different from physical ones. Their default is also clutter. Just look at most people’s photo and music libraries. The difference is that digital clutter is much easier to ignore. You can try to ignore the delivery boxes stacking up around the foyer, but their growing hindrance to day-to-day tasks is obvious. Digital clutter doesn’t take up physical space, so most of it can remain out of sight and out of mind. You only deal with a cluttered music library on the occasion you make a playlist. There is, however, digital clutter that does hinder people’s day-to-day — their desktops. Windows (and tabs) can very easily stack up like empty boxes in the foyer to the point where they constantly get in the way. I wrote about this when reviewing Stage Manager in macOS Ventura.

Windowed interfaces, like those found in macOS and Microsoft Windows, have historically been manual. The user opens, arranges, closes, minimizes, and hides windows in whatever manner suits their needs. When Mac OS and Windows came of age in the 80s and 90s, computers were only powerful enough to do a few things at once. These limited resources meant a given task typically involved launching a few apps, manually managing their small number of windows, then closing everything before starting the next task… I find managing a small number of windows more satisfying than burdensome. Users on today’s computers can easily amass dozens of windows from a variety of apps. Furthermore, these apps and windows persist, even between reboots. There is no intrinsic impetus that forces users to quit latent apps or close latent windows. Manual windowed interfaces became cognitively burdensome when faced with the unlimited, persistent windows found on modern desktop computers. While some still find them delightful, more and more people find desktop computers harder and more annoying to use.

Stage Manager on macOS tries to solve the problem by automating which windows are visible at a given moment. Even though my review of Stage Manager was on the positive side, it was ultimately too finicky for me. I love the concept of sets, just not enough to manually maintain them. It’s the same problem I have with Spaces. Lots of people use Stage Manager and Spaces as tools to organize and streamline their workspaces, but for me, these sorts of virtual desktops simply become mechanisms to have more windows. They facilitate clutter by hiding it rather than reducing it.

As it turns out, the best solution to window clutter for me is not some extra layer of window management. It’s fewer windows. I even said as much in that very quote from a review I wrote three years ago.

I find managing a small number of windows more satisfying than burdensome.

And yet it wasn’t until this summer that I actually changed my habits, so what took so long?

As a middle-aged man who works a full-time job and is actively involved with parenting… well, let’s just say I am less adept at identifying when and how I should change my habits. After all, a lot of my habits at this point are exactly the kind that help me minimize effort. Beyond that though, the only option built into macOS for quickly quitting out of apps is to log off with “Reopen windows when logging back in” unchecked, which doesn’t quite work the way I want it to. There are a handful of apps I always want running and don’t want to have to re-open whenever I resume using the computer. These apps could be added to login items, but I also dislike windowed apps launching automatically. They can be slow, demand extra attention through various prompts, and steal focus. Yuck. What I really wanted was to quit all but a handful of apps before locking the screen so that I could start instantly and with a clean slate the next time I use the Mac.

Once again, AppleScript to the rescue1. Using AppleScript, I could set a whitelist of apps to keep open and then quit everything else2. Shortcuts then let me chain this script with other actions to confirm my intentions before locking the screen. Finally, I was able to add the shortcut to my Stream Deck, so now, at the end of my work day, I push the “Off Duty” button. Even when I have to manually address apps with unsaved documents, quitting apps in one fell swoop still greatly reduces decision making because I no longer have to individually consider whether to quit a given app. It’s going to be quit along with the rest, so all I have to decide is where I should save the open documents, which in itself compels a good end-of-workday habit that I should have adopted already. When I start work the next day, the previous day’s work is saved and my Mac is effectively reset with just a handful of apps and windows open.
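For anyone curious what such a script might look like, here is a minimal sketch of the whitelist-and-quit approach. The app names are placeholders, not my actual list, and this omits the confirmation and screen-locking steps I handle in Shortcuts:

```applescript
-- Apps to leave running; everything else gets quit.
-- These names are examples; substitute your own keepers.
set keepList to {"Finder", "Messages", "Music"}

tell application "System Events"
	-- Only consider regular, windowed apps; background-only
	-- processes (menu bar utilities, agents) are left alone.
	set appNames to name of every application process whose background only is false
end tell

repeat with appName in appNames
	if (appName as text) is not in keepList then
		-- Apps with unsaved documents will still prompt to save.
		tell application (appName as text) to quit
	end if
end repeat
```

Saved as a Run AppleScript action inside a shortcut, this is the piece that does the actual sweeping; chaining it with a confirmation prompt and a lock-screen action completes the “Off Duty” button.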

Having used this automation throughout the summer, I can now say with confidence that managing windows and tabs in macOS is once again truly satisfying. Navigating between apps doesn’t feel like work anymore and features that never appealed to me with dozens of windows and tabs now make sense. I can find that one window using Mission Control. I actually use command-[number] to jump to a specific tab in Safari! By reducing the cognitive effort involved with quitting apps, I have reduced desktop clutter, which in turn has reduced cognitive clutter to the point where my Mac is once again a tool that helps me focus because it’s no longer like a foyer full of boxes I have to carefully sift through, but an extension of what is currently on my mind.


  1. This is ostensibly also doable using Shortcuts using the Find Apps and Quit actions, but as with so many other things related to Shortcuts, I never could get it to work. 
  2. In the first version of the script, “everything else” included the Finder because it had not occurred to me that was something I could quit. 
Hollywood UI

Those who have been following the rollout of Apple’s new Liquid Glass theme accuse Alan Dye and his team of designing user interfaces that look good in marketing materials at the expense of usability. That’s a fair criticism, but I don’t think “marketing” is the right way to frame it. In my mind, marketing interfaces are a separate issue. They are designed to push users to do something they wouldn’t otherwise. Liquid Glass just ham-fistedly tries to look cool.

Looking cool isn’t a bad priority for an interface, and it’s certainly a way better priority than marketing. Interfaces built for marketing necessarily come at the expense of usability because their priorities typically conflict with those of users. Streaming services are the best example of this, where promoted shows are given priority over those already in progress. Cool looking user interfaces, on the other hand, aren’t inherently at odds with users. iPhone OS looked cool and was immensely usable, and I would argue Aqua was very usable even before the transparency and pinstripes were rightfully toned down.

“Marketing UI” is an unfair term for something like Liquid Glass. Trying to look cool at the expense of usability is bad, but it’s way less egregious than actively interfering with users. A better term, in my mind, is “Hollywood UI”. Hollywood has long given computers made-up user interfaces, some of them very cool, others not so much. Regardless of their coolness, Hollywood UIs can look like anything because they are ultimately just another prop or set piece. They don’t actually have to work.

That Liquid Glass looks cool in marketing and elsewhere isn’t really the problem. iPhone OS and Aqua looked good too. The problem is that Alan Dye and his team seem more interested in making interfaces that merely look good rather than ones that can survive contact with the real world, probably because designing props is a helluva lot easier and more fun than designing tools that actually work.

Windows 11’s Ongoing Effort to Modernize Windows

Seems like Microsoft is still migrating features from the old Windows Control Panel to its newer Settings app. Here’s Sean Hollister, at The Verge:

But the Control Panel still can’t die. The latest features to migrate, as of today’s Technical Preview: clock settings; time servers; formatting for time, number, and currency; UTF-8 language support toggle, keyboard character repeat delay, and cursor blink rate.

While it is indeed hilarious that Microsoft is still migrating stuff out of Control Panel to Settings over a decade later, my gut sense is that Windows 11 has had to pay down technical debts the same way people have to gradually pay down their financial ones, in installments that span multiple years.

The Windows 11 PC that I use for gaming and Plex isn’t my daily driver. Because I don’t use Windows for either work or other personal needs, take what you’re about to read with a huge grain of salt. That said, and in my limited experience, Windows 11 has been gradually getting noticeably better, and dare I say, nicer1? Windows Settings is nicer than Control Panel. The new right-click menu is nicer than the old one. Part of me wonders why Microsoft has been so gradual with these rewrites rather than releasing them in a more finished state, but then again, modernizing decades-old components is never easy, especially when you’re trying to satisfy software compatibility and entrenched IT practices.

Taking such a long time to revamp these components does merit some teasing and probably some criticism, but I think keeping at it for over a decade shows a resolve that is also worthy of praise. The effort gives me confidence that the people in charge in Redmond truly care about improving the user experience of their desktop OS. I wish I could say the same about the people in charge in Cupertino.


  1. Don’t get me wrong. I still strongly prefer macOS and have many complaints about Windows, the biggest and longest-standing of which is that the OS remains completely plagued with crapware. Decluttering your computer shouldn’t be standard practice, and yet with Windows, it still is. 
The Illusion of Thought

New things tend to bring out extreme opinions, and AI is no different. Some liken it to the second coming, while others damn it as the antichrist. It’s early days yet, but to me AI feels more like Web 2.0 than Web 3.0. Both were maximally hyped by press and marketing departments, but Web 3.0 always felt like what would happen if a Ponzi scheme and vaporware had a baby. Web 2.0 was different. There was a there there. Google Maps, Flickr, and Facebook were all real things. Web 2.0 marked the very real and immensely tangible beginning of the web as a viable platform. While there has undoubtedly been an unrelenting torrent of heinous marketing with regard to AI, there is also very clearly a there there. Even without the time to truly delve into the plethora of tools and techniques currently available, the likes of ChatGPT and Cursor have already been helpful in my work. My very limited experience with LLMs and the like gives me optimism that AI will bring a new generation of computerized tools that will help people build, create, and think. What worries me, though, is when I see people use AI not as a tool to help them do those things, but to do those things for them. The best example of this is how LLMs are already being used to write.

I have been fortunate enough to have a now decades-long career as a software engineer. As one might expect, my early success came from solving problems mostly through coding. What has really helped me thrive in my more senior roles of late, however, is writing. Writing regularly on this blog and elsewhere for over a decade has greatly improved my ability to distill vague ideas into cogent points. For me, practicing writing has been like practicing a strict form of thinking. John Gruber just recently talked about this connection between writing and thinking while guesting on Cortex:

But it’s that writing is thinking. And to truly have thought about something, I need to write it. I need to write it in full sentences, in a narrative sense, and only then have I truly known that I’ve thought about it.

Like John, I find that writing makes me truly think about a subject by leading me to consider its various aspects and then forcing me to organize all of those ideas into coherent prose. This process also forces me to organize these same ideas in my brain. While I agree with John that speaking extemporaneously can’t compare to the very thorough consideration involved with writing, I would argue that by making me a better thinker, the practice of writing has made me a better speaker1.

The idea that writing improves thinking isn’t unique to me. That’s why, I suspect, the liberal arts are filled with writing. It’s not so much about finding the next great academic, but about creating a whole class of better thinkers. That’s ostensibly why a college degree is required for the jobs that ultimately pay people to think.

It’s this connection between writing and thinking that makes me worried about people using LLMs to write. Now, not all writing is the same, and I would argue that most of the writing people do, even professionally, is functionally basic communication. I’m also not all that concerned with AI tools that assist writing. An LLM that autocompletes or even rewords sentences doesn’t eliminate the process of writing. Where I see problems is when LLMs are used to do the actual writing in a way that precludes users from having to think.

Let’s assume two scenarios involving someone being asked to provide requirements for a given project. In scenario one, the person writes the requirements in five bullets, but is worried about the optics of such a short response. In scenario two, the person doesn’t yet know the requirements, but still wants to provide a response just to avoid being empty-handed. In both scenarios, each person uses an LLM to generate a 1,000-word specification document that they send to their colleagues. Both not only wasted their colleagues’ time by having them read 1,000 words of AI slop2, they also created an illusion of thought that may not have been needed in the case of person one, or didn’t even happen in the case of person two.

And then there’s a third scenario: the person who has no intention of ever really thinking about any project and uses AI solely to keep up appearances. You might think that’s cynical or absurd, but I’ll bet you dimes to dollars this is already happening. There are many, many situations in jobs that pay people to think where avoiding thinking can be a successful strategy. That’s because thinking through ideas is a time-consuming, indeterminate, hard-to-measure, and even harder-to-justify task, even when it’s absolutely necessary. Being the one who takes the time to think through something can easily become a “heads I win/tails you lose” proposition. Ideas that can’t be worked out can end up with a stink of failure, while the best and most thought-through ones can seem like common sense in hindsight3. Add to the risk/reward equation that the actual act of thinking is largely invisible. It’s the resulting documents that are seen at the end of the day, and how many bosses pay that close attention to their contents? Of those who do, how many could discern which were produced by an LLM? Many never had the time to really think about the subject, and why should they? That’s what they paid the third person who wrote the document to do, a person, by the way, they already believe is an ace for being able to produce such documents on short notice.

I am still optimistic about AI the same way I have been optimistic about other major developments in computers, but those other developments never gave anyone the impression that computers could actually think. Before AI, no one looked at an image or document and questioned whether a human was involved. No one looked at Photoshop and Google Docs as an alternative to thinking. LLMs of today can already give the illusion of human thought. The idea of our attention being flooded with AI-generated slop alone is worrisome, but what makes me way more worried is how often individuals will have the computer create an illusion of thought in lieu of actually thinking.


  1. And I would further argue that John’s decades of writing is precisely why he is such a damn good podcaster (even if he doesn’t like to admit being a podcaster.) 
  2. Undoubtedly, some meant to read that 1,000-word spec sheet will also just run it through an LLM to summarize it back into five bullet points. 
  3. Anyone who follows Apple should be very familiar with this particular phenomenon. 
Realizing When It’s Actually Not Fine

On the most recent The Talk Show with Jason Snell, the conversation naturally (and rightly) turned to keyboards. Tech nerds with mechanical keyboards have become a bit of a joke these days and while some aspects of the market certainly merit ribbing, I think that stereotype is mostly unfair because keyboards are tools. Here is how Jason Snell put it:

…Don’t feel bad about it, because this is what we do. These are like the tools of our trade. This is your axe, this is your electric guitar, this is your screwdriver, this is your RAM truck, whatever it is. As a writer, the keyboard, as silly as it seems, totally matters because that’s our tool of our trade, is the keyboard.

Jason and John are indeed professional writers, but I would argue vehemently that keyboards are a primary and daily tool for anyone who writes. That includes coders for sure, but it also includes practically every modern day desk job. Teasing an office worker for having a preference in keyboards is just as petty as teasing a contractor for their preference in trucks, even when their truck is just as much a luxury vehicle.

The idea that office workers should care about their tools just as much as any other professional isn’t limited to keyboards, a point which both John and Jason naturally transition to. Here’s Jason again:

Look, if you can’t afford it, that’s fine. It’s totally understandable, but I think a lot of people end up suffering with crappy things to do their jobs because they’re like “no, no, it’s fine”, and sometimes the trick is realizing when it’s actually not fine and this is your profession. When I set up my own business, one of the first things I learned is it’s not “it’s a business expense” means “it’s free”, but “it’s a business expense” means “this is a tool I use to do my job”. I should probably pay for it, and that’s okay.

I am fortunate enough to work remotely and thus have been able to tailor my workspace to my taste and comfort. I may not need a Studio Display, an Aeron stool, or a standing desk, but they all make doing my job easier by making it more enjoyable, and it drives me a little nuts, whenever I go into the office, to see most of my colleagues settle for the tools that were chosen for them because that’s whatever the company could buy in bulk at a good price. That’s not to say I expect my employer to provide everyone with $1500 displays, rather that I am disappointed that none of my colleagues eschew the $100 display provided in favor of something that would make their job easier. Sure, most of my colleagues probably don’t care about their display, but I bet at least some do and accept what they know is a crappy display simply because “it’s fine.”

There are very valid reasons for sticking with the status quo, namely “because I can’t afford anything better” or “I truly don’t value this enough to pay a premium for it”, but there are also plenty of excuses that don’t really hold up to scrutiny. Here are a few examples:

  • “It’s fine because only suckers buy their own equipment”, except you get to keep your equipment.
  • “It’s fine because I am tough enough to not need nice things”, but that doesn’t stop you from buying nice things outside of your job.
  • “I don’t want to be the guy with the weird keyboard”, which just makes you the even weirder guy who secretly wants a better keyboard.
  • “It’s fine because my job is not my life”, outside of that part of your life where you spend most of your waking hours doing your job.

The silliness of that last excuse ties into a precept that has been invaluable to my own spending habits: where reasonable, allow yourself to spend more in the areas where you spend lots of time. For example, get the best set of knives you can reasonably afford if you cook regularly, and just get a discount set if you don’t1. “It’s fine” usually really is fine for the things that only get occasional use. Just don’t automatically settle for “it’s fine” for the stuff you are going to use all the time. Once you concede it’s reasonable to invest a little more in the kitchen where you spend a handful of hours each week, then investing even more in the office where you spend 40+ hours each week becomes a no-brainer… even if that means buying a better keyboard.


  1. Also avoid buying expensive stuff you know you’ll rarely use. This is why I don’t own an Apple Vision Pro. As much as I might want one, I know it’ll mostly sit in a drawer. 
iPadOS 95

We are coming up on the 30th anniversary of Windows 95. Windows 95 still didn’t make PCs as user-friendly as the Macintoshes of the day, but it was the first version of Windows that was good enough. Previous versions of the OS were terrible, and not just terrible in hindsight. Windows 1.0 all the way to Windows 3.1 were each terrible in their day. There is no better evidence of this than Windows 95 itself, an upgrade literally celebrated on the day of its release specifically because it heralded a clean break from a decade of mediocre releases. Here’s what I concluded on its 25th anniversary:

While modern day macOS has its roots in the original Macintosh System and NeXTSTEP, modern day Windows has its roots in Windows 95. Everything prior has largely been thrown away because even Microsoft knew it was garbage.

This may seem like mean spirited hyperbole to anyone who wasn’t around or aware during that era, but you don’t have to take my word for it. Go play around with Windows 3.1 right now on PCjs. It’s not good. Early versions of Windows are neat as a retro tech curiosity in large part because they are so unlike the modern versions used today. Now go check out Windows 95. While rudimentary, it’s still easily recognizable because every subsequent version of Windows was an iteration on what Windows 95 started.

iPad enthusiasts are excited by iPadOS 26 the same way PC enthusiasts were excited about Windows 95, and for mostly the same reason. While iPadOS itself has been fine, its multitasking has been terrible. Like Windows 3.1’s, iPadOS’s various mechanisms for multitasking were all manageable, even useful in some cases, but they were never good. Like Windows 95, multitasking in iPadOS 26 is celebrated because it’s a fundamental departure from its predecessors. The similarities don’t stop there. Both Windows 95’s much improved usability and iPadOS 26’s much improved multitasking largely come from copying the Mac. Windows 95 embraced the Mac’s desktop metaphor with Mac-like window management and a more Finder-like Windows Explorer. iPadOS 26 embraces the Mac’s desktop metaphor, with Mac-like window management, and a more Finder-like Files app… oh and also there’s a menu bar.

Why Microsoft took a decade to decide to just copy the Macintosh is obvious — severe technical limitations of 80s era PC hardware and legal threats from Apple. Why Apple took a decade to just copy macOS for iPadOS multitasking is more of a mystery. iPads in 2015 were much more constrained hardware-wise, but I think that’s only part of the story. There is little doubt that Apple sees the iPad as an anti-computer of sorts. A device that does the stuff most people use computers for, but without the hassle and cruft. I wonder how much of Apple’s years-long consternation about adding macOS-style UX to iPadOS stems from a fear that the desktop metaphor itself was a cause of that undesired cruft. How can the iPad be the anti-computer if it looks just like a computer? Beyond embracing Mac-like features, I think the philosophical departure iPadOS 26 represents is a belief that traditional computers are complex for many reasons that have nothing to do with their interfaces. The iPad can have proper windowing, file management, a menu bar, etc… and still be the anti-computer so long as it continues to strictly manage or eliminate the undesired cruft of legacy computers. I love that large background tasks involve live activities and have to be user initiated. I’m sure some wish apps could run in the background a la macOS and Windows, but I would argue arbitrary backgrounding is exactly the sort of thing that makes those platforms complicated in ways that aren’t worth the squeeze for most users.

For the first time ever, multitasking in iPadOS 26 feels like something Apple can iterate on for decades, the same way they’ve done with macOS since System 1 and the same way Microsoft’s done since Windows 95. Just as nerds like me today look back at early versions of Windows as a weird era before Microsoft finally figured out desktop computing, future nerds will one day look back at early versions of Split-Screen, Slide Over, and Stage Manager as part of that weird era before Apple finally figured out iPad multitasking.

Productive, Portable, but Not Touchable

Tom Warren wrote his Microsoft Surface Pro 12-inch review for The Verge (emphasis added):

This Surface Pro redesign also presents some of the best bits of rival tablets into a piece of hardware that feels a lot more tablet-like than Microsoft has ever created. I’ve been using this Surface Pro in tablet mode more often as a result, simply because of the smaller size and the ease of switching between laptop and tablet modes. The operating system and apps still feel like more of a laptop, though. Microsoft also sells the Surface Pro without a keyboard, as if it’s a pure tablet, but as always, you really need to purchase that $150 keyboard to unlock the best experience.

The Surface Pro 12 sounds like the most compelling tablet-like device Microsoft has ever released, but I’ve long argued that a UX can’t simultaneously be touch-friendly and information-rich enough for multitasking on a tablet-sized screen. As much as iPadOS has favored touchability over information richness, Windows has done the opposite. There have been a plethora of touch-enabled devices that run Windows, but none have been great tablet experiences, specifically because of Windows.

I am deeply interested to see the rumored iPadOS multitasking improvements Apple is expected to announce at WWDC next week. My gut tells me that we’ll get another dud unless iPadOS 26 somehow becomes much more information-rich, and I don’t see how that happens without sacrificing touchability by shrinking its user interface, presumably when a keyboard and touchpad are connected.

A Lesson in Tone, as Illustrated by The Legend of Zelda

Spoiler warning: The following contains spoilers for The Legend of Zelda: Tears of the Kingdom.

If I had to pick one criticism of the New York Times, a paper that I pay for only as a means to help fund journalism, it would be how many of their headlines normalize, flatter, and even empower Trump. Just yesterday, the paper ran a story entitled Beneath Trump’s Chaotic Spending Freeze: An Idea That Crosses Party Lines. This comes in a week of Trump-related headlines that include: White House Press Secretary Makes Steely and Unflinching Debut, Defying Legal Limits, Trump Firings Set Up Tests That Could Expand His Power, and Amid the Chaos, Trump Has a Simple Message: He’s in Charge.

Describing how The Times and the wider mainstream media treat Trump is tricky. After all, none of the headlines I mentioned are factually inaccurate. It’s not so much their substance, but their tone. A subtle difference in tone can drastically change how a given headline reads.

The last two entries in The Legend of Zelda series, Breath of the Wild and its sequel, Tears of the Kingdom, are a great example of how slight differences in tone can greatly impact meaning. Both games have an open world, that is to say you can go practically anywhere and fight anyone at any point. I have spent hours upon hours in both games killing countless monsters created by the games’ villain, Ganon. Lest the world of Hyrule run out of these monsters, both games use a mechanic that respawns them — the blood moon. In Breath of the Wild, the titular Zelda first notifies Link of the event with the following monologue:

Link… Be on your guard. Ganon’s power grows… it rises to its peak under the hour of the blood moon. By its glow, the aimless spirits of monsters slain in the name of the light will return to the flesh. Link… please be careful.

Zelda’s tone unambiguously warns and shows concern for our protagonist. The blood moon is bad.

Her monologue during the blood moon in Tears of the Kingdom is quite a bit different.

Witness the blood moon’s rise. When its red glow shines upon the land… the aimless spirits of slain monsters return to flesh. Just as they did in a war long past. The world is threatened once again.

Can you spot the difference? This is not so much a warning as it is a pronouncement, and there is zero concern for our hero. The blood moon is exciting. It turns out this change of tone has a very good reason — that isn’t actually Zelda. It’s a puppet doppelganger controlled by Ganon.

I don’t think The New York Times is a puppet of the Trump administration, but I do think the tone of these headlines betrays that the paper might be a bit too excited by the prospect of getting to cover whatever bad things Trump and his cronies will do to our country.

In Search of the Automatic Platform for Personal Computers

Most people love their smartphones more than they do their personal computers. The biggest reason by far is that smartphones bring the internet to them everywhere, but another less obvious reason is that using smartphones is like driving an automatic1. They are way less demanding than Macs or PCs running Windows, and I don’t just mean maintenance. Certainly running updates and installing software is more manual in macOS and Windows, but that occasional maintenance pales in comparison to the constant manual user intervention involved with merely using those platforms. Where should this file live? Which of them sync? How should my windows be arranged? When should I close them? What should I even do with the desktop? Which apps actually need full disk access?

By contrast, the default experiences of iOS and Android don’t really demand any manual intervention or decision making from their users. There are no windows to deal with or files to manage. Any need for user intervention is optional and progressively disclosed behind apps and/or features. Photos, notes, music, and other documents are typically managed within their respective apps. Both iOS and Android have any number of settings that can be tinkered with. People can intervene more with their smartphones, but they don’t have to.

Personal computers do have a more automatic platform of sorts: the web. As with smartphones, many people prefer web apps over traditional native ones. Web-based apps are so automatic that they are effectively on demand. No one has to install Gmail, Google Docs, or Slack. They just go to some website where they always get the latest version of whatever app they need. Web apps also don’t require users to deal with windows or files. Their emails, flow charts, conversations, and other documents are all self-contained within one browser window. Like smartphones, users can customize their experience and manually manage their documents, but again, they don’t have to.

Given that most people prefer automatic platforms, why hasn’t Android, iOS, or the web completely overtaken macOS and Windows? Why aren’t offices filled with Chromebooks and iPads? I think most people who prefer automatic platforms end up using manual ones on personal computers for three reasons: familiarity, apps, and cross-app productivity. They want something familiar relative to what they already use, they need specific apps, and those apps all have to work cohesively with one another.

Google’s ChromeOS, found on Chromebooks, is very familiar to many people who are already using web apps like Gmail, Google Docs, and Slack. ChromeOS has even visually converged with Windows. The problem is that ChromeOS has limited software support outside of web apps, and while web apps do support a growing number of use cases, gaps remain. This is why, I presume, Google is merging ChromeOS into its non-web-based Android platform.

iPadOS has much better app support, but it is unfamiliar and often cumbersome to someone coming from Windows or macOS. In other words, iPads are a little weird for those who already have an expectation of what a computer is. This is best illustrated by the fact that Apple still doesn’t sell an iPad with a keyboard and trackpad included. They are optional add-ons that need to be purchased separately. Part of me wonders how many companies would jump at the chance to buy their teams a laptop running iPadOS in lieu of similarly priced MacBooks or even cheaper PC laptops.

That said, I think many professionals would still cling to macOS and Windows even in a world where ChromeOS was merged with Android or Apple did release a laptop running iPadOS, because neither would inherently address or improve those platforms’ limitations when it comes to cross-app productivity. Furthermore, I suspect most avenues of improving cross-app productivity on these platforms would be in tension with what makes them so automatic to begin with[2]. ChromeOS, Android, and iOS (including iPadOS) are automatic largely because they defer the complexity of manual intervention to apps that mostly exist in isolation from one another. This simplifies the platform, but makes working across apps much more cumbersome. I wrote about this when addressing Catalyst apps’ lack of cohesion in macOS.

The more complicated Mac builds ease almost entirely through cohesion. Wherever possible, Mac applications are expected to share the same shortcuts, controls, windowing behavior, etc… so users can immediately find their bearings regardless of the application. This also means that several applications existing in the same space largely share the same visual and UX language. Having Finder, Safari, BBEdit and Transmit open on the same desktop looks and feels natural.

By comparison, the bulk of iOS’s simplicity stems from a single-app paradigm. Tap an icon on the home screen to enter an app that takes over the entire user experience until exited. Cohesion exists and is still important, but its surface area is much smaller because most iOS users only ever see and use a single app at a time. For better and worse, the single-app paradigm allows for more diverse conventions within apps. Having different conventions for doing the same thing across multiple full-screen apps is not an issue because users only ever have to deal with one of those conventions at a given time. That innocuous diversity becomes incongruous once those same apps have to live side-by-side.

I do think personal computers will become more automatic, either through the evolution of macOS and/or Windows, or the advent of some other platform. Apple once thought that “some other platform” was going to be i(Pad)OS and Google seemingly still believes it’s going to be some amalgam of ChromeOS and Android, but I don’t think either can overtake today’s manual incumbents. They’ve achieved being more automatic largely by only supporting one app at a time. That is perfectly suitable for smartphone and web apps, but for multiple apps running side-by-side on personal computers, people need an automatic platform that won’t slow them down.


  1. I first came up with the manual/automatic analogy when reviewing Apple’s Stage Manager, but I think it’s suitable beyond just window management. 
  2. Nothing better illustrates this tension than Apple’s struggles to bring cross-app multitasking to iPadOS. The company has made several attempts to bring basic multitasking to iPad, and every time it has gotten pushback, both from those who think any multitasking needlessly complicates the iPad and from those who argue it hasn’t gone far enough. 
“I Don’t Think About You At All”

Justin Long is back again, this time for Qualcomm. Tom Warren, reporting for The Verge:

Apple’s former “I’m a Mac” actor Justin Long defected to Intel a few years ago, and now he’s looking to switch to a Qualcomm-powered Windows PC.

Which is worse? Being the butt of a joke that is only effective because lots of folks remember your ads from over a decade ago, or being spared because no one remembers your ads from just three years ago? Back when Intel featured Long in their ads, I observed that the campaign ultimately showed how commoditized they were. I can’t think of a better illustration of that point than an actual competitor of Intel also using Long to promote PCs, and oh by the way, no one cares because most people don’t remember Intel even ran those ads.