The desire to call something “innovation” when it isn’t, purely for marketing purposes.

I think Apple has reached a point where it wants to "innovate" at any cost, even if that means shipping an interface that is difficult to read for people with visual impairments, and for the rest of us as well. It's very different from when Jobs completely redesigned the computer with the iMac: built-in speakers, an integrated screen, a built-in modem (something novel at the time), and one of the most controversial decisions of the era, removing the floppy drive to bet on CDs and internet connectivity.

But these kinds of "innovations" are simply an excuse to avoid admitting they're running out of ideas, and that there are no longer geniuses like Steve Jobs or Jony Ive behind the scenes.

And the head of design should be fired immediately.

I'm more inclined to start, even if only conceptually, using the camera of any smartphone to prototype AR UIs, NUIs, and spatial interfaces, so we're ready when devices arrive that integrate AR naturally into ordinary glasses rather than the current bulky monstrosities. And we shouldn't think only in terms of UI design, but also AI design:
How do we integrate AI assistants into the interaction flow of an AR system? What role will they play? Will they be just guides like Siri, or more like JARVIS?
Could AI display different interfaces in a user’s glasses based on usage patterns? Could it adapt to their tastes and preferences to, for example, highlight nearby art galleries, libraries, the best-rated local food spots, or historical facts about the city they’re visiting?
Will we end up, like in a Black Mirror episode, letting users broadcast their personal info and location so that others wearing AR glasses can recognize or find them?
Will things in the physical world carry visible QR markers so that objects, monuments, or anything else can be read and recorded, and, if not already registered, added to the system?
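One way to make the "adapt to their tastes and preferences" idea concrete is a scoring pass that decides which annotations an AR layer should actually render. The sketch below is purely illustrative Python: the PointOfInterest type, the scoring rule, and the sample data are all assumptions of mine, not any real AR or mapping API.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class PointOfInterest:
    name: str
    category: str      # e.g. "gallery", "library", "restaurant"
    x: float           # position in metres relative to the user
    y: float
    rating: float      # 0.0-5.0 community rating

def rank_annotations(pois, preferences, max_items=3):
    """Score each POI by preference weight times rating, discounted
    by distance, and return the top few to overlay in the glasses."""
    def score(poi):
        weight = preferences.get(poi.category, 0.1)  # default: weak interest
        distance = hypot(poi.x, poi.y) or 1.0        # avoid division by zero
        return weight * poi.rating / distance
    return sorted(pois, key=score, reverse=True)[:max_items]

# Hypothetical nearby places and one user's preference profile.
pois = [
    PointOfInterest("City Gallery", "gallery", 40, 10, 4.5),
    PointOfInterest("Old Library", "library", 120, 5, 4.8),
    PointOfInterest("Taco Stand", "restaurant", 15, 2, 4.2),
]
prefs = {"gallery": 0.9, "restaurant": 0.3}
print([p.name for p in rank_annotations(pois, prefs, max_items=2)])
```

A real system would learn the preference weights from usage patterns instead of hard-coding them, but the shape of the decision (filter the world down to a handful of relevant overlays) stays the same.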

I believe the next big shift will be spatial interfaces, and the sooner we begin to think about and design for them, the better.

We should use them for good: to make cities more inclusive, more universal, and accessible to everyone; to have interfaces enhance and enrich our real-world experience; to assist and delight blind people by helping them understand the places they pass through, telling them whether there's a lake stretching for several kilometers ahead or a meadow with cows and horses; to teach children and tourists the history of the city they live in or are visiting.
To have AR glasses explain the life of the author of a book spotted in the library, or show how other users have rated it.
There’s a world of possibilities to take the mixed reality experience to the next level — one that is enhanced by global knowledge.

We shouldn’t let perfect AR glasses arrive without a clear plan for how to leverage them. Now is the perfect time to get ahead and lay the groundwork with applications and workflows that harness the power of AR + AI to create the next generation of systems and apps we’ll use in the years to come.