In semi-particular order…
A Redefinition of Low-Power PC-Paradigm Computing
The reviews from tech journalists, the social media tech set, and users are in – and the reaction is about as positive as I can remember for any Mac launch.
Heck, The Verge (a generally “tough-but-fair” reviewer when it comes to Apple) gave the carryover-design, non-touchscreen, two-TB3/USB4-port-only MacBook Air a 9.5/10, where recent, more “technologically-advanced” iPhones (5G, LiDAR on Pro, U1, non-potato-cam, etc.) get a 9/10.
So it’s not just because of the much-better-received second-generation scissor-switch design (alas, poor butterfly keyboard, we hardly knew thee). It’s “so much more” – mostly due to a single, incredibly savvy change-of-chip from space-heater Intel to an unnervingly-cool, winter-unfriendly, 5nm-process, energy-efficient 16B-transistor M1 SoC. That Rosetta 2s its way through Intel Mac apps with generally impressive ease.
That also somehow boasts single-core performance better than any Intel Mac ever shipped, and exhibits (albeit in brief sprints for the fan-free Air) multi-core CPU potential that’s a little bit better than…the state-of-the-art (albeit 14nm) Core i9 CTO option on the $2800+ (US) MBP16. Among other surprising feats of speed.
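Incidentally, if you're curious whether a given process is actually running through Rosetta 2 translation or natively on Apple Silicon, macOS exposes a sysctl key for exactly that. A minimal sketch (note: the `sysctl.proc_translated` key only exists on Apple Silicon Macs, so this degrades gracefully elsewhere):

```shell
# Check whether the current shell is running natively or under Rosetta 2.
# sysctl.proc_translated only exists on Apple Silicon Macs:
#   1 -> this process is an Intel binary being translated by Rosetta 2
#   0 -> this process is running natively (arm64)
# On Intel Macs (or non-Macs) the key is absent, so we fall back gracefully.
case "$(sysctl -n sysctl.proc_translated 2>/dev/null)" in
  1) echo "translated (Rosetta 2)" ;;
  0) echo "native arm64" ;;
  *) echo "not Apple Silicon (no proc_translated key)" ;;
esac
```

Run it inside an app's embedded terminal (or adapt it to `sysctlbyname()` in C) and you can tell at a glance which side of the translation layer you're on.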
Suddenly, the humblest of Macs…fine, in the overall PC context, semi-premium $1000+ mobile computing and moderately priced $700+ small-form-factor desktops…just got a lot faster. And just as suddenly, the competition is left with some serious catching up to do – yes, even on price. Because Apple has made a powerful statement on value.
Compare a base model Mac mini to the Lenovo SFF PC I tweeted about two weeks ago, and you can kind of see how Apple’s timing and choice of target market (the broadest demographic of the Mac installed base, by far) couldn’t have been better:
A New Definition of the Limits of Low-Power PC Computing
Sure, in most use cases (and post-ARM64-software optimization, pretty much all use cases), the M1 Macs change what’s possible all the way from energy consumption to prosumer-and-up workflows that would cause a Core i5 MacBook (or worse, a Core i3) to recoil in terror.
And then there’s the small side benefit of Apple offering very performant, superior-battery-life machines for a somewhat important subset of the Mac installed base – y’know, millions of third-party iOS, iPadOS, watchOS and/or macOS developers running Xcode, without which there is no “developer ecosystem”.
But there are limits to being a pioneer in Strange New Smartphone-Inspired PC Architecture™, as it were. Intentional, but still limiting all the same.
Yes, the M1 is kinda, sorta an “A14X Plus”. That’s because Apple challenged itself to create the most performant SoC it possibly could within an enclosure (capped at ~10W sustained thermal envelope) purposely designed to eliminate active cooling (though M1 certainly benefits from a little “fresh air”).
Apple’s self-imposed thermal constraint, along with the unified memory architecture, forced strict limits on how much logic board space, GPU power, and sheer thermal/power headroom could go into the M1’s design.
As it turned out, we should have seen this coming around two years ago.
The intrepid team at iFixit was one of the first to make the discovery with the 7nm-process A12X:
Computer, rotate and zoom in on the A12X itself!
SoC on the left, two DRAM modules parked right next to it. The exact same basic layout as the just-launched M1.
Given that DRAM isn’t process-shrinking anywhere near the rate of Application Processors like A14 Bionic, M1 and inevitable AMD/Qualcomm competition, it’s at least understandable – if frustrating – that this super-compact design would have a relatively modest RAM cap for the time being. There’s room to expand the boundaries of the M1 “Silicon City”, but that’s for future, higher-power-budget variants.
Long story short, the M1 can take you very far (comfortably meeting Johny Srouji’s “best-in-class” mandate for each M1 Mac vs. its relevant competitive set, I’d say), but it’s literally the absolute least that an M-chip can do, by design. So if you want features like:
• a 32/64GB-and-up RAM ceiling,
• 6+ performance CPU cores,
• a stronger GPU,
• more than 2TB SSD,
• 4+ Thunderbolt 3 / USB4 ports,
• a bigger display,
• being able to simultaneously run more displays,
or something else the M1 platform just can’t provide, then you might want to wait for the corresponding M-chip MBP16, iMac, or maybe i/Mac Pro 🥴 versions to launch, if at all possible. And maybe “Pro” versions of the Mac mini and 13″ MacBook already-called-a-Pro, who knows.
An Engine Swap That Was “Obvious in Hindsight”
First of all, everyone knows we’re still in the middle of a terrible COVID-19 pandemic, right? November 2020 being the worst month yet on a worldwide, newly-reported-cases-per-day basis?
And Apple still launched an impressive array of products, from Watch Series 6 and SE to iPad Air A14 and some mildly important iPhone 12 SKUs (which were delayed from “a few” to “several” weeks).
So yeah, perhaps some of Peanut Gallery Tech Twitter/Journopunditocracy can appreciate the overall context of 2020 here before complaining about the lack of shiny new things.
That aside, why did anyone expect Apple to start the Apple Silicon transition with radical redesigns anyway? It’s never been done in any past transition (Moto 68040 >> PowerPC, PowerPC >> Intel), in part because reengineering the hardware for a totally new chip architecture is probably tough enough as it is.
Aside from the other very good reasons others have given (for instance, Rene Ritchie points out that it’s much easier to work with existing design and thermal tolerances)…
and the fact that MacBook Pro (Late 2016) and MacBook Air (October 2018) designs aren’t that old (the keyboards are pretty new, in fact 🤣)…
…it’s really not a particularly good idea to “obsolete” the rest of the Mac lineup right out of the gate with major redesigns, especially when the remainder of the Apple Silicon Transition could take until sometime in 2022.
Apple’s Own Spin on “Modular”
I know, I know. Apple’s under constant existential threat from Android or WinTel or WinAMD or maybe even some more “open” flavor of RISC computing. Whether or not that’s looking particularly true anytime in the foreseeable or longer-term future, a theme you generally see with this line of argument is the spectre of “modularity” (y’know, Snapdragon 765 or 8cx this, Ryzen 5000-series that, etc.), a crashing wave of commoditization overwhelming whatever “high-margin moat” Apple has allegedly, tenuously trenched via integrated products.
Oddly, though, Apple’s making its next, bet-the-Mac big push on personal computing with
- an M1 in a Mac mini
- an M1 in a MacBook Air
- an M1 in a MacBook Pro
Isn’t that…modular too?!
Why yes, yes it is! And if you think about it, it’s much more aggressively modular than anything we’ve ever seen in personal computing (a single SoC for three distinct form factors, including a desktop). And it’ll probably happen again with another subset of the Mac lineup.
It’s radically different from anything we’ve ever seen in any Mac, and yet this “one-size-fits-all” approach is really quite sensible (and exhibited in iPhones since the 6/6 Plus).
Also interesting is the complete lack of compromise (at least where RAM isn’t a major consideration – and it’s not supposed to be on “low-end” systems, given classic usage patterns).
Sure, M1’s thermal headroom allows it to perform just a little bit better at the top end, and for longer, when it has access to active cooling and/or unlimited power – but for the first 10-15 minutes of a person’s computing day, the overall speed experience will probably be almost exactly the same.
It saves on costs to target a single SoC for three different Mac form factors, of course, but when the entry-level Mac SoC is this fast, consumers really won’t care much. It’ll just take a little getting used to from the customization fans among us, because for Apple, the concept of choosing a faster CPU for a given Mac trim level may now be a thing of the past.
Legitimate gripes about LPDDR4X RAM expandability and (impressively fast) SSD storage upgrade prices aside, this move to M-chip modularity is very likely to save Mac users money over time ($200 or more for the MacBook enthusiasts), because Apple’s goal is to never make you want for CPU performance on a brand-new system ever again.
The Beginning of the Inevitably Uncomfortable Intel/Apple Silicon Feature Divide
It’s bound to happen, and unfortunately, Mac users who bought a system within the past two to three years or so might be feeling a bit “left out” as they approach Years 5-7 of their particular Mac’s lifecycle, if not sooner.
To be clear, Apple will not, cannot possibly break its commitment to support Intel Macs “for years to come”. Big Sur support looks back up to 7 years (2013 MacBook Air, MacBook Pro, and Mac Pro), and I don’t see that support lifecycle changing very much when whatever macOS version in 2024 or 2025 launches. As many non-thrill seekers know, Apple also tends to fully support security updates looking back up to two macOS releases, so macOS software security support could easily extend up to and slightly beyond 2026 for any new Mac with a “model year” of 2019 or later.
But there’s security support, and then there’s feature support. And there’s already one giant difference between an M1 and most (all?) Intel chips: neural processing power. The A14 Bionic and M1 Neural Engines are both capable of 11 trillion operations per second. In the case of M1 MacBook Pro, it’s claimed to be 11x faster at ML vs. the 2020 MBP13 running an 8th-gen 1.7GHz Core i7 CPU.
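To put that 11-trillion-ops-per-second figure in rough perspective, here’s a hedged back-of-envelope sketch. The per-inference cost figure below (a ResNet-50-class vision model at roughly 4 GFLOPs per forward pass) is my own assumption for illustration, not anything Apple has published, and real-world throughput will land well below this theoretical ceiling:

```python
# Back-of-envelope: theoretical peak inference rate on an 11 TOPS NPU.
# Assumptions (mine, not Apple's): a ResNet-50-class image model costs
# roughly 4 GFLOPs per inference, and the NPU hits 100% utilization --
# real workloads will land well below this ceiling.
NEURAL_ENGINE_OPS_PER_SEC = 11e12   # Apple's claimed 11 trillion ops/sec
OPS_PER_INFERENCE = 4e9             # assumed cost of one forward pass

peak_inferences_per_sec = NEURAL_ENGINE_OPS_PER_SEC / OPS_PER_INFERENCE
print(f"theoretical peak: {peak_inferences_per_sec:,.0f} inferences/sec")
# -> theoretical peak: 2,750 inferences/sec (an upper bound, not a benchmark)
```

Even if the achievable number is an order of magnitude lower in practice, that’s a lot of on-device vision throughput sitting in a fanless laptop.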
Of course, neural-network-type tasks will be faster with 11th, 12th, 13th? gen Intel Core chips…assuming any of them are available in quantity and in time for the final Intel Macs yet to launch. But Intel’s and Apple’s approaches in the short term are just too different. Intel seems to be leveraging its existing CPU/GPU-centric platform to enhance neural-computing performance; Apple’s gone full-bonkers Moar Cores™ with a discrete NPU, the same strategy as major RISC chip players.
For now, the Neural Engine’s main benefits seem more related to specialized image/video processing tasks (Face ID, which is “old hat” to Apple by now, Pixelmator’s ML Super Resolution), or machine learning academia. Other features it enables, from Siri Suggestions to object detection, are well within the capabilities of “lower-end” neural processing power (read: 11T ops/sec is slight overkill for these tasks).
But that’s definitely changing over time, given the quadrillions of operations being left on the table every single day (and a certain ex-Google AI Chief champing at the bit to move the AI/ML industry forward). With an on-device engine that fast, why not “teach it” to accelerate all kinds of anticipatory tasks in typical users’ lives, including finally making Siri…um…better, or even available on-device, no cloud required just for timers and alarms?
It’s also possible that allowing Mac developers and Mac-based researchers access to the Neural Engine will unlock use cases Apple hasn’t thought of…or can Sherlock/acquihire in the future. 😁
Those benefits…will not really accrue to Intel Mac users over time, since Intel chips’ AI/ML performance is vastly inferior to the M1’s NPU. And aside from inevitable Neural Engine improvements each M-chip generation, the feature upgrade gap could only get worse if the Apple Silicon platform continues to add new hardware functionality including:
• hardware 8K encode/decode
• Mac *touchscreen* support
• further image/digital signal processing improvements/other ways to make videoconferencing as battery-efficient as possible
• possibly moving up to a full-system 256-bit encryption base via a more powerful Secure Enclave
• supporting higher AirPlay screen mirroring resolutions than 1080p, and in HDR
• any number of things people much smarter/more well-versed in macOS than me could think of
Apple’s hellbent on making Macs as compelling as possible, now that it’s finally able to assert full control over the entire Mac widget for the first time ever. It’ll be interesting to see how macOS updates 3-5 years from now offer differing amounts of functionality uplift depending on chip platform.