The Unshut

Technology is everywhere

The Verge redesign: an analysis

Posted on September 14, 2022

Disclaimer: this is a translation from the original post in Spanish at Incognitosis, my personal blog. The Unshut was a project to write in English about tech, but I decided to stop a couple of years ago and it hasn’t been updated since. This post has been translated with DeepL and lightly edited. Apologies for the quick process.

The Verge was born on November 1, 2011. The publication that has long been an absolute benchmark in the world of technology journalism took its first steps, and did so with a vibrant design.

Not that the design was particularly different from what other outlets were doing, but it was an almost logical step forward: a large, visually driven carousel with the highlights of the moment, followed by chronologically ordered content. There were also interesting bets, such as a comments section that was theoretically going to give special prominence to readers who earned it, although as far as I know that never took off.

The Verge circa 2011. Cool.

The design remained essentially the same for years, with only minor changes. It took them a while to add a search button, and also to make the layout responsive so it adapted better to mobile.

As I noted at the time, there must have been quite a few internal earthquakes in the team: of the original founders, all ex-Engadget, Nilay Patel is the only one still standing. Neither Joshua Topolsky (more on him later) nor Paul Miller is there anymore (they always looked like a trio of inseparable colleagues), and lately there have been moves such as David Pierce, who left for Wired (and has just returned to The Verge), or Dieter Bohn, who announced a few months ago that he was joining Google.

Be that as it may, The Verge has been going like gangbusters. To use a basketball analogy, they are the US national team and Xataka (where I work as a senior editor) is Spain. In traffic we do not seem that far apart (36 million monthly uniques at Xataka according to SimilarWeb, 46 million at The Verge; I will not get into whether those figures are accurate), but for people who read about technology, The Verge is the king. Or almost.

They have earned that reputation, although lately they strike me as lazy: few new stories each day, and few of those are worthwhile. That is a bit sad for a site so favored by manufacturers and advertisers, who give them products and exclusives that competitors cannot access. That's fine, that's how the world works, but with that level of privilege they could do even better.

The fact is that today The Verge released its redesign. There was no warning (at least, none that I remember), and of course the release has drawn plenty of comments, both on the official announcement and on social networks like Twitter, where criticism has been rampant.

When I heard about it, I raced over to the site. My first impression? "Ugh, this is weird." The homepage is a succession of diffuse blocks of white text on a black background. Featured content and special blocks alternate left and right, but there is no clear structure, no "look, here's what we think matters most right now" highlighted at the top, as other outlets (and most newspapers) do.

No. At The Verge they're not so much a media outlet as an aggregator. Suddenly they don't particularly highlight anything, and their 'Storystream' dominates with a concept a bit reminiscent of Facebook or Twitter. And it does so because there is not only content from The Verge here: they are also adding blocks recommending stories from other outlets. The result? As I was saying:

The Verge is no longer so much a media outlet, I insist, as an aggregator.

At least that is what it looks like when you scroll through that home page. Right away you run into that special launch post, then a couple of small paragraphs that feel like a kind of Twitter embedded in a website (I just saw that Axios described the redesign in similar terms), all mixed with more things: headlines for their own content (without an intro), blocks of stories grouped by category, and embedded content from elsewhere that might be a tweet, an Instagram post, a TikTok, a YouTube video, or who knows what else.

That idea doesn't really work for me. It has some cool things, of course. I understand that they want to become a kind of "portal" like those of the early internet era: a site technology readers treat as their home base, because they know they will get not only The Verge's good content and judgment but also pointers to stories The Verge does not cover (or covers late). "Hey, don't worry, everything important in technology will be here. And if we don't have it, we'll send you somewhere you can read it."

The idea is smart (and certainly powerful) but also dangerous. What I have seen in this first pass makes me think The Verge wants to produce less of its own content and lean more on other people's, because if users make its home page their first visit of the day, traffic to that page will skyrocket and advertisers will flock to it. But aggregators are not doing that well as far as I know (Techmeme is cool, but aggregation is all it does), and this lands somewhere in the middle of nowhere.

That is not my only worry: in that introductory announcement, Patel said that when they analyzed what mattered, the conclusion was "oh shit, we just need to blog more." What? That does not make much sense: the word blog has lost all its prestige, and publications that started that way have always wanted to grow into "serious" media outlets.

One thing is certain in this media business: in many cases we all end up rehashing the same thing. Apple releases the iPhone 14? You have to write a post about the iPhone 14, even though what you write will hardly differ from the 200 other tech outlets covering it. The Verge seems to want to get rid of the rehashes:

What’s most exciting about all this is that it will actually free up time for our newsroom: we won’t have to stop everything we’re doing and debate writing an entire story about one dude’s confused content moderation tweets. We can just post the tweets if they’re important, add the relevant context, and move on. That means we’ll get back hours upon hours of time to do more original reporting, deeper reviews, and even more incisive analyses — the work that makes The Verge great.

I disagree. Original reporting and analysis are important, no doubt, but what makes The Verge great (in my opinion) is that they give me 1) context and 2) valuable opinion. The former matters because it helps you understand the scope of what they're talking about: why it's important, how it affects me. The latter, because it differentiates their content from everyone else's: I know those editors' names (and many others) because I read them often, and I value how they write and how they give their opinions; I find their judgment sound, and I genuinely like reading what they think about a product, a service, or anything that happens in technology. I like that they are critical but fair. Harsh when deserved, complimentary when deserved. Give to Caesar what is Caesar's, for both the good and the bad. That's what I try to do every day at my blog and at Xataka.

So that paragraph reads to me like an excuse, a way of saying "let's write less; others already cover it." I want to read why it matters that a new entry-level Kindle is coming out. In this case The Verge did cover it, but the message tells me they will produce much less content of that kind, because everyone else is already going to talk about the new Kindle.

The truth is that this message worries me, because as I said before, The Verge publishes surprisingly little. Considering the resources and money they manage, one would expect them to be much more productive, but it seems what they want, I insist, is to publish less of their own material and more of other people's. A quick look at the Archives suggests the idea might work: this is, after all, like a Twitter account that links to itself but also to others, so if you trust The Verge, you will "follow them" (that is, you will go to their home page), and they will get what they wanted, which is for you to end up on that home page. The approach is clever, but the redesign still fails for several reasons. Here is what I noted in my review:

  • Logo. The Verge's logo changes because, according to Patel, it proposes "an interface between the present and the future". As an explanation that's fine, but the new logo is hard to read, and a logo that is hard to read simply isn't cool.
  • Goodbye to visual content. Text dominates here, which makes it harder to scan the home page visually. You zig-zag not only between blocks on either side but also between types of content: small paragraphs with embedded content or links, their own stories with a larger headline but no intro, specials and highlights with an image, headline and intro, blocks of several stories in the same category, tweets, YouTube videos, other embeds... a usability mess. You have to be very focused to scroll through all that, at least on a desktop computer. The lack of images may be meant to save page weight: the home page runs very long vertically, and the StoryStream appears again after each article. I don't know if I like that.
  • The content itself seems to lose prominence. As I was saying, it is now "hidden" among other content and mini-paragraphs that link to their own, slightly older stories. It's as if I read an article on nuclear energy by my colleague and friend Juanky a day late and highlighted it in a mini-paragraph on Xataka's front page with a link. It was already on the front page yesterday, but fine, we recycle it like that. Weird. I wonder how this will affect the pace and length of The Verge's pieces (mostly skimpy reviews, though they certainly cut to the chase and spare you the padding of other reviews that run too long).
  • Where is the most important thing? Having a featured-content block (bigger images, big headlines) is Digital Media 101. Readers don't want to complicate their lives; they like the layout to guide them visually through the information. Those aids are gone, and instead we get that "Storystream" flow, which confuses things and mixes their own content with outside content. They have created a "Must Read" section (also Media 101: don't order readers around, they take it badly) with stories that look minor (small headlines, no images, no intros, meh). See the image on the right, with the blue block. Wrong.
  • Comments, a nightmare. Reading comments is enough of a chore without presenting them like this. The Verge's new system, called Coral, means that when you click on comments (if you even spot the link, which is not very visible), the comment panel pushes the main content to the left or, on mobile, covers it completely. In my case, since I usually read in split screen, the result is a mess, because it covers half the content. Not only that (I): the colors and fonts don't work for me and make the comments hard to read. Not only that (II): you can sort by highlights, oldest or newest, but highlights are shown first, and those are usually positive (at least when the subject is this redesign). Once again, usability and readability are worse than with the old, more classic approach.
  • Blocks. Besides the 'Must Read' block with the theoretically featured stories, there are Podcasts, Most Popular, Reviews, Science, Entertainment and, curiously, a Creators block (for stories about Instagrammers, TikTokers, YouTubers and so on). I don't know how often some of these will be updated, but keeping them all fresh seems difficult. We'll see.
The Verge's own pieces sit on one side, and only the first is highlighted, although the rest may matter more to the reader. Not only that: on the left, where the StoryStream theoretically lives, there are links to external content and then, suddenly, two Tech stories (Fitbit, Sonos) that should also be in the pink box. Same category, same publication day, so why do they appear there?
  • Fonts. Kudos for using a serif in body text (more professional and serious, less informal; that's why I've used one here for years), but the headings are deceptive. As you can see in the image, you spot an image and a headline and assume it is one of their own stories (theoretically relevant and more prominent for that reason), and no: the link leads to a tweet-format item, a paragraph that then links to an external outlet. Weird and disappointing.
  • Whose is it? As I've been saying, The Verge is more than ever a portal, a content hub. Their own content matters, but so does everyone else's. That's not necessarily bad (it's the best thing about the whole redesign, in fact), but there is no easy way to tell whether something is theirs or external. Unless the post is in tweet format and the link at the end takes you off-site, you can't tell, and they also use that tweet format to resurface their own content. Wrong.
Even the reviews look weird with those purchase-link boxes and the score. Everything is somehow less readable, less clear.

That's a bit of a summary of what I've seen on this first pass. There are some additional curiosities, of course: there is no search option on the site itself. If you want to find something, Google it with the usual 'site:theverge.com' modifier alongside your search terms (for example, site:theverge.com kindle review).

Beyond that, what is clear is that The Verge has plenty of capacity and resources to produce long, distinctive pieces. That last one is a good example (I haven't read it; the subject doesn't call to me), and I certainly hope they keep following that line, which genuinely sets them apart. I insist: with their resources (or the ones they seem to have), this kind of thing should be constant, not the trickle they have managed so far.

Then there is the other issue: the snapchatization of web design in media. I talked about it quite a while ago, when the aforementioned Joshua Topolsky founded The Outline in 2016. My relationship with that outlet was practically nil: I didn't follow it, and I never saw its stories catch on much online. The design was original to a fault, and it put me off even when the stories might have been interesting. In the end the company was acquired by Bustle Digital Group, and just a year after that deal The Outline stopped updating (its content, curiously, is still available). Apparently Topolsky and Bustle's CEO were polar opposites, so it never looked good.

He didn't really go away after that, because Topolsky then created Input Mag, which launched on December 16, 2019, with a bet very similar to The Outline's: a Snapchat-style design, a little garish and a little off-putting, and content that steered away from rehashes toward offbeat subjects.

He has apparently kept writing quite a lot there. I don't follow him, not because I don't want to, but because for some reason he has had me blocked on Twitter for years.

I don't know what happened, because he's a guy whose editorial vision (less so his design vision) is very much aligned with mine, and I like his critical eye and the fact that he doesn't bite his tongue. But I must have said something to him on Twitter that got him hot under the collar. Anyway.

Here I must say that giving an opinion on a redesign a few hours after seeing it is perhaps a bad idea, especially when the redesign is this radical. It's hard not to be shocked and not to resist it, because at the end of the day human beings don't like change, or having their routines broken. My criticisms are (I think) reasonable, and they echo many of the comments I've seen on Twitter:

https://twitter.com/fakebaldur/status/1569690589444165633

That last tweet is especially striking. Cybart runs Above Avalon and is quite well known in the tech media world, and his point (that the redesign is about capturing more page views), while a bit obvious, makes sense. This redesign exists to get more people reading the home page. I wouldn't call it a change of business model, as he claims: it reaffirms the existing model, advertising, and seeks more exposure by drawing ever more readers in with this new aggregator philosophy.

There are good ideas here and good criticisms too (especially in the comments on the official post), but I'd say that, a priori, this design barely convinces me at all. What does convince me is the idea of becoming an "aggregator": a sort of iteration on Twitter that is genuinely interesting and might achieve its goal of keeping more users on the home page for longer.

It remains to be seen whether it succeeds, of course, but in terms of design the result, at least in its desktop version, does not convince me. On mobile the jump is smaller (the vertical structure helps), but on desktop? As I said, weird. It may again be a matter of getting used to it, but a priori it seems less usable and readable, more confusing and harder to navigate. I'm not at all sure these decisions were right, but time will prove them right (or not). If it does, I'll come back to this post and admit that I'm a bigmouth who has no idea about web design.

Which may also be the case.

The Apple M1 changes everything

Posted on November 12, 2020

Forty-nine minutes were enough for Apple. Boom.

That’s how long yesterday’s long-awaited keynote, with a very special title, ‘One more thing’, lasted. During this time Apple focused totally and exclusively on presenting its first computers with the Apple M1 chip, a processor that, I believe, changes everything.

It does so because it proposes an unparalleled revolution in a segment that has already moved a lot in the last 40 years. It’s been quite some time since the introduction of that first IBM PC with an Intel 8088 processor, and today we have before us fantastic desktops and laptops that allow us to work and play in a way that was unthinkable when we were messing with the old 8-bit microcomputers and the first PCs in the 80s.

In all this time one thing has remained almost unchanged: the vast majority of the processors in our PCs and laptops used the x86 architecture, in both its 32- and 64-bit versions. Intel has owned and ruled that market for four decades, but its reign looks like it's coming to an end.

And the fault lies with the M1.

I don't know if you saw the presentation. If you didn't and prefer a quick version, here it is in 10 minutes. The pace was once again frantic, like at the iPhone 12 keynote a month ago.

As I said on Twitter at the time, that meant I didn't find the devices themselves particularly remarkable. And that's the point: they are remarkable, but not because of the computers as such. Let's take it one step at a time.

Design as a lost opportunity

I think this was a huge missed opportunity for Apple. A generational change like this deserved new designs. I am not saying they should have totally changed the MacBook Air's iconic design or the recognizable formats of the MacBook Pro and Mac mini.

With the latter I would have taken more risks, but never mind: Apple knows very well how important it is for its users not just to own Apple products but to show the rest of the world that they own them. Having nothing distinctive in these new computers' design reads (to me) as a small signal that Apple is almost ashamed, unwilling to say out loud that these computers are different.

How is that possible? I'll get to it later, but they are different. Very different. They are because of the M1 heart that drives them, and that makes the requirements in terms of format and form factor very different from what came before.

I'm looking forward to the teardowns of these devices, but the MacBook Air presentation clip already showed that the motherboard is ridiculously small and takes up only a fraction of the chassis. Considering that this particular computer is fanless, they could have played with the design even more and done crazy things.

Or not. They could have done something, anything, however small. Maybe bring back the glowing Apple logo when the computer is on, maybe a different badge hinting that this is a computer with Apple's ARM chips... I don't know. Something. Whoever buys these machines won't be able to show the world that they are the new ones, the cool ones. That, I think, is a missed opportunity for Apple, a company that knows very well how to differentiate its products and let the world see the difference. It is inexplicable.

A stingy first generation

If the design left me hoping for something distinctive, some of the decisions made for this generational leap are truly striking. In fact, they're downright disappointing.

Let's take it piece by piece:

  • At most, 16 GB: Apple's new "unified memory" is interesting because it proposes a concept similar to that of the PS5 and Xbox Series X, which share memory with the graphics. I suppose the concept here is the same, but the problem is that the maximum for now is 16 GB. I don't know whether the reason is cost, but it's a shame that even the MacBook Pro, which is supposedly aimed at professionals, cannot be configured with 32 GB. Too bad.
  • No eGPU: not particularly relevant, but even though these computers have Thunderbolt ports, they won't let you connect external graphics cards (eGPUs). I wonder if this is another sign that Apple is still not very interested in entering the world of gaming, a segment that, as I've said before, Apple is letting slip away for no obvious reason.
  • Touch Bar: the MacBook Pro still comes with the Touch Bar, mandatory, which I think is surplus to requirements for many people, but Apple is determined to force it into the package. It reminds me of Microsoft's obsession with Kinect: if people don't use it, don't make them pay for it. At most, offer it as a (pricey) option and leave it at that.
  • Two ports: a MacBook Pro 13 with just two ports strikes me as another sign that there isn't much "Pro" about this computer, and that they could have made more of an effort on connectivity. It is rather the poor man's MacBook Pro, but the same was true of the Intel version launched in May: to get four ports you had to go for the higher-end model, which also offered the option of upgrading to 32 GB of RAM.
  • FaceTime HD 720p: Apple talked a lot about the M1's ability to improve image quality in videoconferencing thanks to its new ISP, but the improvements will be software-only, because we still have the same FaceTime HD 720p webcam as in recent years. I'm not saying it isn't adequate (it probably looks decent), but this would have been a good time for a leap here too.
  • Touchscreen: I doubt we'll ever see this, but given the iPadOS-ification of macOS so evident in Big Sur, it seemed plausible to imagine a touchscreen version that would make interacting with the iPadOS apps now supported on macOS more natural. As I say, it would be hard to see something like this on a MacBook, because it would undercut the iPad, iPad Air and iPad Pro even further, but it would certainly have been interesting.

Not all is bad news, of course. Although there are elements that certainly discourage the purchase of this first generation of equipment, there are two powerful arguments to go for them. The first is the performance offered by these Apple M1 chips, which I’ll talk about later on, but which looks absolutely prodigious.

The second is the battery life of both the MacBook Air and the MacBook Pro. Especially the latter, which promises 20 hours of video playback. That's twice as long as its predecessors and far more than the vast majority of Windows laptops. That battery life is hard to fathom, and as someone in the promotional video said, "it lasts longer than the hours I'm awake in the day". A good way to put it, I'd say.

Alongside these promises there is a danger: you'll be depending on a laptop that won't let you do everything you want, or that won't let you do it as well as before. The best example is Photoshop, which won't be ready until next year. You'll probably be able to run it anyway thanks to Rosetta 2, but we'll see what the user experience is like.

Apple said very little about that part of the transition, although it went into it in June. I'd say they are very much on track, but early adopters will, as always, pay for the rush. This first generation of products is, as I say, stingy, and it will no doubt cause some problems now that the transition to Apple's ARM chips has only just begun. But hey, if you decide to go for these laptops or the Mac mini, at least you'll be able to say you were one of the crazy ones, the rebels, the misfits...

Pricing gives good (and bad) news

Beyond the relevance of the processor, there is surprising news in this historic generational shift: the new Macs are (predictably) much more powerful than their Intel-based predecessors, but they are also cheaper.

Antonio Sabán summarized it very well in his Applesfera post, and especially in a tweet comparing prices that say goodbye to the Intel tax without, so far, saying hello to the no less traditional Apple tax.

Of course, not everything is good news. The prices of the base configurations are good compared with their Intel-based predecessors, but things change when you start adding options because, my friends, when it comes to memory there is no longer any way to expand the machine with third-party components (not that this was possible on recent MacBooks anyway).

In fact, unified memory becomes the perfect excuse for Apple to charge whatever it wants for that upgrade. If, as I suspect, it's GDDR5 (or maybe GDDR6) memory, it's clearly somewhat more expensive than regular DDR4, but 230 euros for 8 GB is a lot of money.

The same goes for storage: a 1 TB WD SN550 M.2 NVMe SSD can be had for 100-120 euros. How much does Apple charge for it on the MacBook Air? 460 euros (on top of the base 128 GB). Maybe the performance is somewhat better (though not by much; the WD reaches 2,400 MB/s) and the form factor is different (I doubt it's an M.2 2280), but you see where I'm going. This is yet another example of the eternal razor-and-blades business model: the base package is cheap, but the moment you want to extend it, you get squeezed.
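To put a number on that squeeze, here is a quick back-of-the-envelope sketch using only the figures quoted above (approximate retail prices; the exact ratios will vary):

```python
# Rough price-per-gigabyte comparison, using the figures quoted in this post.
apple_ram_eur, apple_ram_gb = 230, 8        # Apple's 8 GB unified-memory upgrade
apple_ssd_eur, apple_ssd_gb = 460, 1000     # Apple's jump to a 1 TB SSD
retail_ssd_eur, retail_ssd_gb = 110, 1000   # WD SN550 1 TB, midpoint of 100-120 EUR

print(f"Apple RAM upgrade: {apple_ram_eur / apple_ram_gb:.2f} EUR/GB")
print(f"Apple SSD upgrade: {apple_ssd_eur / apple_ssd_gb:.2f} EUR/GB")
print(f"Retail NVMe SSD:   {retail_ssd_eur / retail_ssd_gb:.2f} EUR/GB")
# The SSD upgrade works out to roughly four times the retail price per gigabyte.
```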

That Mac mini is cheaper than the Pro Stand for the Pro Display XDR.


That said, hey, good for those starting prices. The fact that they are lower than what Apple charged for its Intel-based computers is great news. Surprising news, at that.

The processor that changes everything

Above the design debate (which tends to generate plenty of controversy and is entirely secondary) is what makes these machines truly different: Apple's M1 chip, a genuine prodigy if it actually delivers everything it promises.

And what it promises is no small thing. Three times the CPU performance of the previous generation (still based on Intel chips) and five times the GPU performance, not to mention battery life that leaves the current competition far behind. Up to 20 hours of video playback on the 13-inch MacBook Pro, for example.

Those promises are spectacular, and you might think Apple is bluffing here. Well, no, dear readers. If anything, it is underselling itself. AnandTech's spectacular recent report makes it clear: we are looking at processors that leave behind the best that Intel and AMD have.

Pay attention, because I said leaving behind THE BEST these two companies have. I am not just talking about laptop processors. I'm talking about Apple's M1 predictably outperforming (in single-core, at least) both the laptop-class Core i7-1185G7 and, mind you, the recently introduced AMD Ryzen 9 5950X for desktop PCs, which only pulls ahead in specific scenarios.

In their analysis they couldn't actually test an M1, of course; for those charts they used the Apple A14 Bionic, on which the M1 is theoretically based. That this chip already shows such power is amazing, but the M1 should go even further.

AnandTech's analysis was rounded out with charts like this one, which made it quite clear how this processor embarrasses Intel's competition in the PC world (the Ryzen 9 isn't shown, but it would sit a little above the A14).

https://twitter.com/marcoarment/status/1326225358283214849

It is an amazing achievement, one that makes me think the era of Intel and AMD on the desktop could be coming to an end. Marco Arment, the well-known developer, captured it in that tweet imagining Intel's reaction to the announcement, and I don't think he is wrong. If I were in either company's shoes, I would be very worried.

Of course, AnandTech's review is a delicate one, mainly because it all rests on estimates: if the M1 turns out not to resemble the A14, the data collapses, but there is no reason to think it won't. As I say, it should be a somewhat more powerful version, thanks to the extra headroom laptop chips enjoy compared with phone chips. TDP is a hard ceiling on a microprocessor's performance, and here I would say Apple has chosen not to push too far: the existence of a fanless MacBook Air makes that very clear.

In fact, the difference between the base MacBook Air's chip, with a 7-core GPU, and the MacBook Pro's or Mac mini's, with an 8-core GPU, is very small. Apple never spoke of different versions of the M1, but I would even bet that the M1 is not one processor but a family of processors.

It's the same thing Intel does when it launches its Tiger Lake family: not a single processor, but variants with different TDPs and different configurations of cores, clock frequencies and GPU EUs. If I'm right, the MacBook Air's M1 will differ from the 13-inch MacBook Pro's, and from the Mac mini's too.

That idea makes sense even if the versions aren't given names. Considering that the MacBook Air is passively cooled, one would expect that 1) its M1 runs slower, or 2) if it runs just as fast, it does so only briefly before throttling, reducing performance to avoid overheating.

That's not (as) necessary on the Mac mini and the 13-inch MacBook Pro, because both have a fan. Apple, I insist, gave no details about clock frequencies, but I'm sure we'll unravel this mystery very soon.

Wait, we’ve only seen the M1 on laptops

In the final section of AnandTech's analysis, its authors went a little further, and they were not the only ones to talk not about the M1 but about what comes after it.

And what comes next will perform even better. Apple stated that the M1 "features the world's fastest CPU core in low-power silicon". That means either that what AnandTech analyzed with the A14 falls significantly short of the M1's performance, or that this chip has even better variants on the way.

I think the answer is a combination of both. The M1 is probably superior to the A14, but I also think that, as I said, this is really a chip family with different clock frequencies, voltages and CPU/GPU core configurations.

That would make sense considering that Apple already integrates the main memory into the package. The M1 is just the beginning, intended for "modest" machines, which means an iMac, a 16-inch MacBook Pro and, above all, a Mac Pro are coming with chips that have much more room to manoeuvre.

It's only logical. Thin-and-light laptop CPUs don't usually have TDPs above 15 W, and the M1 sits around 10-20 W. That, I assure you, will not be the TDP Apple targets for the 16-inch MacBook Pro's chips: that machine is bigger, and therefore better able to cool a more powerful, more power-hungry chip. Imagine what happens with the iMac and especially the Mac Pro.

Perhaps Apple has an M2 chip in the works. Perhaps the name is different and it's called P1. Whatever. It would be a CPU that, instead of 8 cores, could have twice as many, running faster and with fewer efficiency cores. The math works out: if a 10 W M1 can outperform the 105 W Ryzen 9 5950X in single-core, imagine what Apple could do with a 45 W chip (like the Core i9-9980HK in the 16-inch MacBook Pro from May 2020) for the MacBook Pro 16, or with a 95 W or 105 W chip like those in the more powerful Intel and AMD desktops.
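Just to show how tempting (and how naive) that extrapolation is, here is the arithmetic as a sketch. Performance does not scale linearly with power draw in real chips, so treat this as the ceiling of the fantasy, not a prediction:

```python
# Naive perf-per-watt extrapolation from the figures quoted above.
# Real silicon hits diminishing returns long before these numbers; illustration only.
m1_tdp_w = 10          # rough TDP attributed to the M1 in this post
m1_single_core = 1.0   # normalize the M1's single-core performance to 1.0

for target_tdp_w in (45, 95, 105):
    naive = m1_single_core * target_tdp_w / m1_tdp_w
    print(f"A {target_tdp_w} W chip -> {naive:.1f}x the M1 (assuming linear scaling)")
```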

It can be absolutely crazy.

In fact, I can hardly imagine what could happen there, because if the improvement were proportional to power consumption, we would be looking at processors several generations ahead of what Intel and AMD have today; it would probably take them at least a year or two to reach something similar. The M1's 5-nanometer process helps a lot, without a doubt, but so does an architecture that, if it meets expectations, will bring a radical change not only to Macs but also to the PCs we use every day.

I'm still freaking out (ExtremeTech, for one, was much more skeptical about the M1's performance today), but soon we'll know what to expect from these computers: in a week they'll be in users' hands (I assume reviews will appear next week too), and that's when the party begins.

Apple’s party, of course. For Intel (and AMD) this could be more of a wake.

Disclaimer: this is a lightly edited translation done with DeepL. The original article was written in Spanish and published here.

Waiting for an ARM MacBook based on iOS

Posted on June 10, 2020

We will talk about this again in a couple of weeks, but it is impossible for me not to write a post about it today, of course. I can’t hold it in. I never have.

On June 22nd, Apple will hold its WWDC 2020 event (online only, because of the COVID-19 pandemic), and everything points to the company announcing the third major transition for its desktops and laptops at that event. The idea, as you know, is simple and powerful:

Goodbye Intel, hello ARM.

The scale of the announcement is spectacular, not because of what it means for Apple (whose PC market share is around 18% according to GlobalStats) but because of what it means for the segment as a whole. Suddenly the ARM architecture becomes a real threat on the desktop to manufacturers like Intel and AMD. We'll see how they weather the next few years if the ARM versions of Windows 10 also iron out their deficiencies and limitations.

But today we're talking about Apple. Well, we've been doing that for a few years now. By all accounts these MacBooks (or Mac minis, or iMacs, for that matter) are expected in 2021, so Tim Cook and his people will want to prepare the move, and especially prepare developers, who are the ones who will have to work hard over the coming months to bring their applications to these new computers.

For me, the question remains the same:

iOS or macOS?

Everyone seems to have made up their mind. The media and the experts apparently entertain no option other than ARM-based Apple computers running macOS. That forces several major changes to move (almost) all the software available on macOS/x86 to macOS/ARM. There are conversations about emulation, Catalyst, virtualization and other options: Gruber discusses them at Daring Fireball, and Steven Sinofsky has done the same in a slightly convoluted Twitter thread.

I don’t know. It all seems to be very complicated, and while I understand that macOS is for many totally tied to the Mac, I also think that those ARM processors are totally tied to iOS. For me:

  1. iOS already has a spectacular catalogue of applications
  2. Mouse and windowing support is already there
  3. There's already a dock
  4. There's already a file browser

Better window management and multi-user support are missing, but otherwise iOS is ready to make the leap. We are not seeing any hints of touch support in macOS: macOS has not been iOS-ified; just the opposite has happened: iOS has been macOS-ified.

That's the key for me: iOS looks far better prepared to take advantage of that software than the other way around. I understand this meets resistance, and a Mac without macOS feels strange, but I've said it before and I stand by it: Apple will not merge iOS with macOS, because its future is iOS.

That’s the future. Period.

So the WWDC announcement of the transition to ARM seems a foregone conclusion. The real question for me is what they will do about the operating system on those computers. If it's macOS, how will they handle the transition? I think Catalyst will be very important here, but I'm not a developer, so I'm afraid I have no certainties.

Let's see what Apple has to say, but one thing is for sure: this is probably going to be Apple's most important keynote since the introduction of the iPad. Maybe more. Get the popcorn ready; it promises.

Welcome to Microsoft’s definitive surrender on smartphones

Posted on October 4, 2019

In May 2017 Satya Nadella made a revealing comment. "Our next phones won't look like phones," he said, opening the door to a return to a field where Microsoft had failed completely with Windows Phone, and which Windows 10 had also tried to conquer on our smartphones.

It did not succeed that time either, so Microsoft turned the page. It began attacking from the flanks, offering applications and services on the platforms that had suffocated its own. It applied the old "if you can't beat them, join them" and ended up with a minor position in a world where it could have been the leader.

Yesterday, for an instant, it seemed to me that Microsoft was once again aiming for the top.

Before that, of course, came the more conventional part of the event. Panos Panay performs very well on stage (I'd say he is the best in the business today) and walked us through the new Surface Pro 7 (not much to talk about, for me), the Surface Laptop 3 (the 15-inch model with AMD is interesting, but expensive) and the promising Surface Pro X (interesting but, again, expensive). And then came the real stars of the event, of course.

First Panay surprised us with the Surface Neo, a device we had been awaiting for a long time. It brought us closer to the new (or not so new) concept of a dual-screen foldable tablet: less ostentatious and ambitious than Samsung's Galaxy Fold or Huawei's Mate X, but probably more down to earth. More realistic, less gimmicky.

The product is really cool on the outside: two 9-inch screens that together form a 13-inch display, to which we can add a mini keyboard (I suspect the typing experience will be modest at best) and the stylus that Microsoft insists on turning into the center of the human-machine experience. All great in appearance, because that's what these presentations are for: making everything look great.

When it all seemed over came Microsoft's own OMT (One More Thing) moment. It wasn't billed that way, but that's exactly what it was. Panay said "thank you, goodbye", pretended to leave, then came back and said, very seriously, "We're not done yet". And then, the bombshell: a video I would have liked to end after 45 seconds. Right when the girl says "Hello?"

From there, the fiasco. At least for me. The Surface Duo is not the resurrection of Windows Phone or Windows 10 on mobile phones. No.

It is Microsoft's definitive surrender on mobile. That's what it is.

The reason, of course, is that after those 45 seconds it became clear that the Surface Duo is an Android phone. Just like that. I don't care that Panos Panay insists we not call it that and wants us to say "it's a Surface". It's not. If it runs Android, it's not. Or not entirely. Suddenly Surface, the platform through which Microsoft came to partially control both hardware and software (as Apple does completely with its products, and as Google does partially with the Pixel and Chromebooks), lost some of its strength. It surrendered to Android.

It gave up.

I understand the strategy. The Android ecosystem is fantastic, so if you want to offer a mobile device you have to take advantage of it to reach those hundreds of millions, probably billions, of Android users. The temptation is strong: being a rebel doesn't usually pay in technology (or in life, for that matter), so Microsoft ending up making an Android device is, to an extent, logical.

The problem is that this won't make the Surface Duo distinctive. I'm pretty sure of one thing: by the time the Surface Duo ships, there will be several products announced or available with the same form factor. Come on: if Samsung and Huawei have already managed to ship flexible-screen devices, which are considerably more complex technically, I have no doubt that other manufacturers (the Chinese ones especially) will seize on the idea and ship their own Xiaomi Duo, Oppo Duo, Realme Duo or OnePlus Duo. Microsoft will have competitors everywhere and won't be able to differentiate itself much. It hasn't managed that with its Windows 10 Surfaces, which are great but have equally great competitors, and it will have even less chance in the mobile segment.

It really makes me angry. During those 45 seconds I thought that Surface Duo was actually the new Microsoft phone with Windows 10 (or Windows 10X, I don’t care). But it isn’t that. It’s another Android phone.

And then there’s the other thing. I wonder if this, again, is a solution looking for a problem.

Lately I've been talking about this a lot, but it's something increasingly relevant to me when discussing any product. Does this thing in my hands really solve a problem? Does it make my personal or professional life easier and better? Or is it simply a product the manufacturer made just because it could?

Neither the Surface Neo nor the Surface Duo strikes me as a product I would buy quickly and willingly. Nor the Galaxy Fold or the Mate X, of course. Price is the first argument against them, but there are more important questions I'd like to ask Nadella and Panay. Questions I'd like them to answer.

Why should I buy a Surface Neo instead of a convertible or a conventional laptop? What does it do better? And logically, the same with the Duo: why should I buy a Surface Duo instead of a conventional smartphone? What does it do better?

Okay, you’ll tell me the answer is easy for these guys. Those products are perfect not only for consumption scenarios, but also for productivity.

But I don't see what they offer over the products we already have, neither for consuming content nor for producing it. Only in very, very specific scenarios could a product like the Neo be interesting for productivity. And with the Duo there's not much more to say: typing on it for an hour would be pretty much a nightmare. The same goes for the Neo, I insist, which reminds me of the UMPCs of the past (themselves an evolution of the Nokia Communicator), although these do bring the advantage of the double screen plus a virtual or physical keyboard (like the wonderful Sony VAIO P below).

I can do something quite similar to what the Neo and the Duo propose with the products we already have. I can carry a Bluetooth mini-keyboard, something like the Logitech K480, and write quite decently on a smartphone or tablet. There are smaller keyboards and folding keyboards too, so what's the advantage?

The double screen, of course. Double space to work and enjoy. Fantastic.

Except for the drawbacks. For example, having to unfold the phone or tablet over and over to take advantage of that extended mode, which also puts a big hinge down the middle, not so great for watching movies or playing games full-screen across the two displays.

Or the other big problem: the cameras. I don't know if you've noticed, but the Surface Duo has just one front camera, on one of the screens. That could change, and I suppose it will, but taking pictures with these devices won't be as easy or as quick as with a smartphone. It's early to judge, but the form factor does not lend itself to a good camera phone, and that hurts the very feature that differentiates the best (and most expensive) smartphones every year.

It's true that there are striking features, and there is certainly value in a proposal that makes the experience more productive, but what Microsoft showed me does not convince me that this is better than what we have today. I did feel that way when I saw the iPhone, for example, but the feeling is absent with the Surface Neo and the Surface Duo. They're just cool products, that's all.

And the Duo, I insist, is Microsoft's unconditional surrender in a segment it never knew how to, or managed to, conquer.

What a tragedy.

Google Glass is back, but nothing is really different

Posted on July 18, 2017

When Google Glass was launched in April 2012 almost everyone got excited. Augmented Reality was the star of the hype cycle back then, and the possibilities for the device seemed endless. 

Three years later the product collapsed. Privacy and security issues proved too significant for both Google and users, who grew less and less interested in a technology that made us all look a little dumb.

It was expensive, too.

Why would Google launch another version of Google Glass? One would expect that this time the things that failed on the previous version would be corrected. 

They aren't. Google Glass is still a niche, enterprise-focused product with a very limited set of use cases. It's a little more powerful and has a bigger, better battery, but the privacy issues are still there, and users will look just as dumb as they did a few years ago.

And it is as expensive as the previous version. 

There's another big problem for Google Glass. As happened (and happens) with smartwatches, this device solves a problem that never existed in the first place. Everything Google Glass does can be done on a phone, and in fact Apple, with its ARKit, seems to have understood this better than Google.

I'm afraid Google Glass is mostly useless: without real differentiation or truly compelling use cases, it's little more than an expensive business toy. Good luck with that, Google.

The Games That Can Keep Mobile Gaming Fresh

Posted on June 20, 2017

Mobile options have largely taken over the gaming industry in the last few years, or at least have carved out a market to rival consoles. There are thousands of smartphone gaming options covering every possible genre and satisfying all different kinds of players. But some big changes might pull attention away from smartphone gaming in the traditional sense.

The obvious one is virtual reality, which is already compatible with most high-end smartphones and is introducing an entirely new way to play video games. And VR isn't alone. Apple is dropping big hints that its imminent foray into augmented reality is going to be a big deal, so much so that CEO Tim Cook can barely contain his excitement. Apple's AR isn't aimed only at the gaming industry, but it's sure to have a huge effect on gaming if it's such a big part of Apple's mobile plans going forward.

VR and AR are very exciting and should bring about some really great gaming experiences. But fans of traditional smartphone games might worry that their favorite medium will suffer in the face of new tech. To reassure everyone, here are a few predictions about genres that should keep producing new, fun games for smartphones.

Strategy Games

There will be plenty of strategy games released for VR and AR, and some of them will undoubtedly be brilliant. We can already imagine board games from Scrabble to Stratego played out on tables through AR, and there have been demonstrations of AR tower-defense games. But there's a certain quality to this genre in simple, touchscreen 2D that makes it particularly fun to play, and it already seems the genre isn't ready to migrate away from standard mobile formats. Case in point: the legendary tower-defense series Plants vs. Zombies is getting a new edition later this year in the form of "Plants vs. Zombies Heroes."

2D Fighting

Fighting games have been popular throughout pretty much the entire history of gaming, from standalone arcade machines to the latest and greatest consoles. They've also proven adaptable to mobile platforms: Marvel, DC, Capcom and other companies have all had success using tap-and-swipe controls to make fighting games intuitive on smartphones and tablets. Though someone will surely try to pull the genre into VR and/or AR, this is one type of game that just seems as if it will always be best on a screen.

Casino Games

Online casino games will be tried in AR and VR, but mobile casinos have also come a long way. What's more, they've already undergone their own transitions to become more immersive. In particular, live casinos that stream professional dealers over high-quality HD cameras have become popular, not just on desktops but on mobile devices too. With that level of realistic immersion, it's hard to see what poker and blackjack players would really want from VR. This is a genre that seems ready to grow even more popular on mobile, with or without new VR devices.

Point-and-Click Adventures

Adventure games are going to be spectacular in VR, and some already are. But point-and-click adventures, from slow-moving mysteries to beautiful, expansive experiences, have become ideal games for mobile platforms. The interesting thing is that a lot of them come from smaller studios and indie developers. These games could probably be made more impressive in VR, but this is a genre that may just stay put, because it's more feasible for developers to work within the traditional smartphone and tablet space.

Apple: thanks for making the iPhone more expensive, dear journalists

Posted on February 16, 2017

Apple knows how to play with expectations. Its launches usually disappoint, but the disappointment is never as big as it could be, thanks to media sites large and small.

Those sites (The Unshut included) are happy to discuss every possible and hypothetical detail of future Apple smartphones, and all those rumors that keep appearing in the news (I wonder how many are leaked by Apple itself) prepare us for both the good and the bad.

Surprises are overrated, Apple would say.

It happened last year with the missing headphone jack: weeks before the unveiling of the iPhone 7/Plus, every tech journalist in the world had weighed in on that decision. When Apple finally confirmed the omission, we were already prepared for it.

That’s big.

The same will happen with the iPhone 8: we already know "for sure" that it will cost over $1,000, something that would be a much bigger deal if it were revealed for the first time on launch day. It won't be a surprise anymore: Apple knows it can put that price tag on the new iPhones, because we will be prepared for it. From AppleInsider (and others):

Kuo goes on to estimate an “iPhone 8” price tag starting at $1,000, reiterating a figure first divulged in a report this month. The price hike is attributed to a 50 to 60 percent bump in production costs compared to the anticipated “iPhone 7s” LCD models.

Apple should thank all the tech journalists for talking so loudly about them. They should thank me, for that matter. So there you have it, Apple: you're welcome.

Source: Apple's 'iPhone 8' to replace Touch ID home button with 'function area'

Nokia 3310: the immortal phone

Posted on February 13, 2017

I don't remember the exact model, but my father had a Nokia with incredible speakers. I'd say the whole building knew when somebody was calling him, but that technological prodigy (for its time) also had other advantages shared by the devices of the era. Among them, of course, were batteries that never seemed to die. They remind me of that old 'Highlander' movie: if Connor MacLeod had owned a cell phone, it would have been that one for sure.

The fact is that some people still use this kind of phone. An old friend of mine resisted tech trends for years and kept his old Nokia (I don't know if it was this exact model) until he realized that what he wanted was not a new phone but a camera that could also make calls. That was the argument for his surrender; I doubt he would otherwise have accepted defeat.

These days the indestructible Nokia 3310 has been in the news again. A British user told the media he had been using his for 17 years, withstanding (of course) the laughter of colleagues. Anyone who has held out this long clearly has arguments enough to be invulnerable to criticism or suggestion, but some of those reasons could convince others that a Nokia 3310 is precisely what they (we) need in our lives.

This subject has been widely covered in the media, among other things because a phone of this kind lets you escape the digital whirlwind and, as The Guardian put it, regain your life. You could say goodbye to social networks and WhatsApp, which for many people would probably feel like living an empty life.

But you could do it, and in fact there have been strange, bold ideas for detoxing a little from this dependence on the mobile. There are "feature phones 3.0": cellphones without 'smart' capabilities that inherit the virtues of those old Nokia devices and adapt them to modern times with a few improvements, such as more storage for music. That is what the Punkt MP 01 offers, a funny product whose motto is that you can just focus. It strips away everything "accessory" about the smartphone world and offers you a basic, cool phone at a ridiculous price: 295 euros. Phew.

The NoPhone

The NoPhone is even funnier, and it is precisely what the name suggests: a block of plastic with the size and shape of a smartphone, but just that, an absurd, stupid plastic block so you at least have the feeling of something in your pocket. It's like toothpicks for smokers, a way to fool our minds into keeping quiet, I suppose.

The idea is to help you forget about your phone, something its creators happily joke about. The product has even been outdone by the NoPhone Air, which shows you only the phone's packaging because, mind you, there is nothing inside. Well, yes: air. Air that takes no photos, stores no data and has no Wi-Fi or headphone connector. It is "the invisible phone for people who use their phone too much". A perfect gift ($5) for those addicted to the rectangles that dominate our lives, rectangles that deserve a little less of our attention.

The NoPhone Air

The funniest thing about these phones is that they make more sense than we think. The last two are a reminder of how far we've come, but the Punkt model and the Nokia 3310 are far more useful than we give them credit for. Not for fighting the passage of time or social networks, no. For fighting attacks on our privacy.

In fact, you should buy one of these Nokia 3310s if you travel to places where your data and privacy are at risk. Say, for example, the United States, a country that is proposing to ask you for your Instagram or Facebook passwords when you pass through customs. You know, to see whether you are a terrorist or associate with any (as if anyone were going to boast about that there). Which is precisely why, if you travel there or to other countries with these kinds of policies, the best thing you can do is not take your smart devices with you.

If you do, don't take the ones you normally use. Take an old laptop and a rusty cell phone, formatted and totally clean, barely used, and on which you certainly have not logged into your social networks. I'd say you could preinstall Tails or some security-focused Linux distro on that laptop, but that would probably make you look more suspicious. No. Bring a 10-year-old netbook with Windows XP (or better yet, Windows Me, just to mess with the staff). If they stop you and want to analyze it, kindly give them the password. Let's see what they manage to analyze.

The same applies to your mobile phone: that Nokia 3310 can work wonders for getting through customs flawlessly. If you want to take pictures on the trip, buy a camera at some big store, take your photos, send them to yourself as an encrypted file via WeTransfer (for example), and return the camera for a refund. There are lots of ways to make life harder for people who want to know everything about you under the old excuse that "everyone is guilty until proven innocent", so keep that in mind.
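If you go the encrypted-file route, here is a minimal sketch of one way to do it; this is my own illustration (it assumes Python and the third-party cryptography package; a password-protected archive works just as well):

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()  # keep this key somewhere safe, never next to the file
fernet = Fernet(key)

with open("trip-photos.zip", "rb") as photos:  # hypothetical file name
    ciphertext = fernet.encrypt(photos.read())

with open("trip-photos.zip.enc", "wb") as encrypted:
    encrypted.write(ciphertext)  # this is the file you would hand to WeTransfer
```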

There you have it. Maybe it's not a bad idea to buy a Nokia 3310. You can buy immortality for just €12.63 on AliExpress. That's not too much to ask for eternal life, right?

The ARM MacBook that will (never?) come

Posted on February 2, 2017

Apple Inc. is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel Corp. processors, according to people familiar with the matter.

The new report comes from Bloomberg, and in it we find (not much) information about the chip codenamed T310, an ARM chip that would be Apple's next step on that theoretical path toward someday abandoning Intel chips.

The T310 could be used to enable a new low-power mode on Apple's MacBooks, but it's not entirely clear whether, in that scenario, the chip would actually replace the Intel chip on every front or limit itself to certain low-power tasks. Apple has already integrated a T1 ARM chip to manage the Touch Bar, and the new one could be used for a "Power Nap" mode that:

allows Mac laptops to retrieve e-mails, install software updates, and synchronize calendar appointments with the display shut and not in use

This is interesting in its own right, and it would mean that this ARM chip is indeed capable of running macOS apps that (and this is the relevant part) are theoretically compiled for the x86 instruction set, not ARM. I wonder whether there is some kind of emulation at work, or whether those apps ship two binaries so they can run on either processor as needed.
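The two-binaries option would not even be new for Apple: the PowerPC-to-Intel transition shipped "universal" (fat) binaries, single files that pack one executable per architecture. As a sketch of how little magic is involved, this snippet (an illustration on my part; it assumes a standard big-endian Mach-O fat header and skips the 64-bit and byte-swapped variants) lists the architectures packed into such a file:

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic number of a multi-architecture Mach-O file

def list_architectures(path):
    # A sketch: real code would also handle FAT_MAGIC_64 and byte-swapped files.
    with open(path, "rb") as f:
        magic, nfat_arch = struct.unpack(">II", f.read(8))
        if magic != FAT_MAGIC:
            print(f"{path} is not a fat binary")
            return
        for _ in range(nfat_arch):
            # Each fat_arch entry: cputype, cpusubtype, file offset, size, alignment.
            cputype, _, offset, size, _ = struct.unpack(">iiIII", f.read(20))
            print(f"cputype {cputype:#010x}: {size} bytes at offset {offset}")

# Example (hypothetical path): any universal binary on macOS would do here.
list_architectures("/usr/bin/perl")
```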

Both scenarios are interesting, and both could lead to that future in which the ARM MacBook does, indeed, arrive. It seems it will take longer than we thought, though.

Source: Apple Said to Work on Mac Chip That Would Lessen Intel Role – Bloomberg

Nintendo Switch and the curse of being original

Posted on January 20, 2017

I've never been a Nintendo user. This legendary manufacturer has always made consoles and franchise games that seemed childish to me. Too simplistic, too faithful to a type of game that was no longer what I was looking for. Too loyal to its heritage.

I ran my little experiment a few years ago with the Nintendo Wii, of course. I caught Wii fever like millions of people before (and after) me, and then realized that my interest in Wii Sports was brief and shallow, even though I recognized the concept as brilliant for casual players. Playing with family and friends was fun, but most of the time you ended up playing alone, and then it wasn't so fun anymore. I sold it a month later.

Like many Nintendo consoles before it, and others launched since, the Wii beat all its competitors in one area: originality. This manufacturer's products have always tried to set new trends and put a twist on existing ones. That is what they attempted with the nearly forgotten Wii U, and what they are trying again with the new Nintendo Switch.

Does this console make sense today? As you can guess, I'm not too confident it does. The hybrid-console concept has a certain appeal, but the Switch doesn't compete here with the Xbox One or the PS4. It doesn't even try. It competes with our smartphones, and I'm afraid it has already lost that battle.

It has, because everybody already has a smartphone and because human beings are lazy by nature. You won't carry two devices in your backpack when you can carry just one. Even if you can take 'The Legend of Zelda: Breath of the Wild' everywhere, the competition with a smartphone is too tough: that device is ubiquitous and versatile. You don't need anything else (most of the time).

It doesn't help that the number of games available is limited (by the way, we'll see how FIFA fares on the Switch) or that the console is priced at the level of a PS4 or Xbox One, which offer superior experiences on the technical side. That certainly isn't a guarantee of better gaming experiences, but most of the games the vast majority of people want are developed for those platforms. Nintendo's "me against the world" fight makes third-party titles hard to come by.

I'm sure there is a market for the Switch, but I'd say that market has been shrinking over the years. For the vast majority of video game fans this looks more like a second (expensive) console than a main one, and as I said a few months ago, I think Nintendo should accept its reality and take advantage of what it could do on smartphones with little effort. Ars Technica goes further and claims this is the last time Nintendo rolls the dice, and although it's a pity to read and to say, I think they are right. It may be the swan song of a company cursed by its obsessive quest for originality.