AI-Native Consumer Hardware: The Next Strategic Advantage
Agents, brain computer interfaces, wearables, and more
Like most Americans, I don’t go anywhere without my iPhone. As a proud member of Gen Z, I barely remember a time before the iPhone (I got my first iPhone in 6th grade). I use my iPhone for everything: as a map, a book, a flashlight, a camera, an email inbox, a television, and of course, sometimes, as a telephone. I cannot imagine how I would live my life without some kind of smartphone. However, I believe we’re on the cusp of a major transformation, one where the smartphone gives way to a new generation of AI-native devices that will redefine how we live, work, and connect.
Big tech companies and startups alike know that the race to develop the next generation of AI-native devices is on: OpenAI, Apple, and Meta are all investing billions of dollars into new technology, and a swath of new startups has emerged to join them. In late May, OpenAI announced it was acquiring legendary Apple designer Jony Ive’s AI-native device startup IO for $6.5B (basically nothing is known so far about what this device might look like). A week later, Meta announced that it was partnering with Anduril to compete for the US Army’s $20B mixed reality program previously known as the Integrated Visual Augmentation System (IVAS).[1] At their respective developer conferences, Google announced a new AI model that can run locally on a phone, and Apple announced that developers can now access local AI models deployed on iPhones in app development. New startups are developing everything from AI-powered jewelry (necklaces, bracelets, rings) to augmented and virtual reality (AR/VR) headsets to holographic “spatial computing” devices to touchscreen devices.
It’s not yet clear what the exact form factor of this new consumer device will be – companies are experimenting with many potential options. Whatever the ultimate form factor, this technology will certainly be adopted for national security use cases.
Past Attempts at Developing New Consumer Hardware
Investors and entrepreneurs alike have been searching for the next big consumer device since the iPhone itself came out. Certainly some consumer hardware products with limited AI features have seen success: Apple has sold more than 250 million Apple Watches, Meta has sold more than 20 million Quest headsets, and Oura has sold more than 2.5 million rings. However, none have seen the same scale of adoption as smartphones, and none come close to replacing the smartphone as the primary device users keep on their body at all times.
There have also been several public failures on the quest to develop a new consumer device. Perhaps most infamously, a team of former Apple designers raised more than $230M to build the Humane AI Pin, a $700 wearable pin equipped with an AI assistant and a miniature laser projector, touted as the future of “spatial computing.” After the product received poor reviews and fell far short of sales goals, HP acquired the company for just $116M and shut the product down. Similarly, Apple’s Vision Pro fell well short of revenue expectations, with fewer than 500,000 units sold against an internal target of 3 million. Due to weak consumer demand, Apple scaled back production in mid-2024.
Driving Trends
Two key trends will drive the development of this next-generation consumer device: 1) Hardware (chips, cameras, sensors, screens, optics, batteries, etc.) will continue to get better, cheaper, and smaller, driven by Big Tech investment. 2) GenAI will simplify how users interact with these devices and unlock more powerful features.
Big Tech companies, particularly Meta and Apple, have spent billions of dollars working to improve consumer device hardware. Meta alone has spent close to $100B on its Reality Labs division since 2014. These investments have led to some truly remarkable hardware improvements: while the Apple Vision Pro was not a commercial success, the underlying hardware is state of the art and extremely impressive. Similarly, tech critics have praised the technical capabilities of Meta’s lightweight next-generation Orion AR glasses, which are equipped with holographic lenses and Meta AI models. Orion will not be available to the public for several years (likely around 2027), as the price point is still prohibitively expensive for most consumers, but the prototype shows the art of the possible. This lavish spending is unlikely to slow down any time soon, as OpenAI’s acquisition of IO and the Anduril-Meta IVAS partnership will only accelerate investment.
My Predictions for the Technology
I expect a new breakout consumer device will be equipped with agentic, multimodal, edge-deployed AI models capable of understanding a user’s environment and taking actions based on that environmental context. As such, this device will have several sensors that feed into the AI models. These models will be able to act as an assistant, capable of answering questions about a user’s surroundings, searching the internet, and conducting tasks autonomously, all while taking that context into account. These new devices will be truly AI-native – they will likely have a unique, new kind of operating system (OS) designed to allow full AI agentic control. Startups like Wafer Systems are already experimenting with building OS-layer agents capable of taking full control of devices.
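To make this concrete, here is a minimal sketch of what one turn of such an OS-layer agent might look like. Everything in it is assumed for illustration – the sensor fields, tool registry, and model stub are hypothetical, not any company’s actual design – but it shows the basic perceive, decide, act cycle: sensors feed context into a local multimodal model, and the model either answers the user or emits a structured action for the OS to execute.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical snapshot of what the device currently senses.
@dataclass
class SensorContext:
    camera_frame: bytes                 # latest image from an outward-facing camera
    transcript: str                     # rolling speech-to-text of the user's voice
    location: tuple[float, float]       # (lat, lon) from GPS

# Structured action the model can hand back to the operating system.
@dataclass
class AgentAction:
    tool: str                           # e.g. "send_message" or "search_web"
    arguments: dict

# Registry of OS-level capabilities the agent is allowed to invoke.
TOOLS: dict[str, Callable[[dict], str]] = {
    "search_web": lambda args: f"results for {args['query']}",
    "send_message": lambda args: f"sent '{args['body']}' to {args['to']}",
}

def run_local_model(context: SensorContext, request: str) -> AgentAction | str:
    """Stand-in for an edge-deployed multimodal model.

    A real implementation would run a quantized on-device model over the
    camera frame, transcript, and request, and return either a natural
    language reply or a structured tool call.
    """
    if "message" in request.lower():
        return AgentAction("send_message", {"to": "Alex", "body": request})
    return f"(reply grounded in what the camera sees near {context.location})"

def agent_step(context: SensorContext, user_request: str) -> str:
    """One turn of the perceive -> decide -> act loop."""
    decision = run_local_model(context, user_request)
    if isinstance(decision, AgentAction):
        return TOOLS[decision.tool](decision.arguments)   # act on the OS
    return decision                                       # plain answer, nothing to execute

if __name__ == "__main__":
    ctx = SensorContext(camera_frame=b"", transcript="", location=(38.9, -77.0))
    print(agent_step(ctx, "Send a message saying I'm running late"))
```

The key design choice in this sketch is that the model never touches the OS directly: it can only return actions from an allow-listed tool registry, which is one way an AI-native OS could grant agents broad control while keeping that control auditable.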
I’m not entirely sure what form factor this device will take: companies are already experimenting with pendants, glasses, rings, bracelets, headsets, and more. It may even take the form of a family of connected devices (e.g., glasses, a ring, a bracelet, and a necklace), each capable of sensing different information about a user and their environment. I predict that there will be some mixed reality component to this new device, perhaps in the form of holograms.
This new device may feature an entirely new human-machine interface (HMI) paradigm. Today, we primarily interact with our devices using either a keyboard and mouse or a touchscreen. In 2011, Apple rolled out Siri, allowing users to control their devices using only their voice. Today, voice agents are widely used, but they are not the primary HMI technology. In the future, as agent technology improves and proliferates, I expect that natural language voice will dominate HMI, enabling hands-free use of new devices: users will be able to task their devices and ask questions using natural language alone.
Advances in neurotech may take this natural language HMI even further. Historically, brain-computer interface (BCI) technology has not been viable for consumer use. Non-invasive technologies like fMRI[2] and EEG[3] are often impractical due to high costs, lengthy setup times, or poor signal-to-noise ratios (SNRs). Invasive technologies have much higher SNRs, but typically require high-risk brain surgery, making them inaccessible to average consumers. However, new breakthroughs in deep learning and BCI hardware are bringing brain-computer interfaces closer to becoming a practical mode of computer interaction. Companies like Neuralace, Alljoined, Conduit, Telepath Technologies, Neuralink, and others are working to build BCI technology that could have more mass market appeal. On the non-invasive front, researchers equipped with next-generation EEG and fMRI hardware are collecting large BCI datasets to train deep learning models that improve SNRs, enabling systems to predict which word a user is thinking and even reconstruct images from brain activity. Neuralace in particular has made significant progress applying large datasets and deep learning to “thought to text” applications, achieving breakthrough results. On the invasive front, researchers are working to improve the form factor and make brain-computer interfaces more accessible by exploring novel delivery methods, such as injecting or inhaling electrodes and then guiding them into the brain using magnets, offering a far safer alternative to traditional brain surgery.
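To give a rough sense of the decoding side of “thought to text,” here is a deliberately tiny sketch of the learning problem. It assumes PyTorch, synthetic data, and a toy word-level vocabulary – none of this reflects any named company’s actual models – but it shows how a deep network can be trained to map noisy, multi-channel EEG windows to the word a user is thinking.

```python
import torch
from torch import nn

N_CHANNELS = 64     # EEG electrodes
WINDOW = 256        # time samples per decoding window
VOCAB_SIZE = 1000   # candidate words/tokens

class EEGToTextDecoder(nn.Module):
    """Maps one window of multi-channel EEG to a distribution over words."""
    def __init__(self) -> None:
        super().__init__()
        # Temporal convolutions act as a learned filter bank - one way deep
        # models squeeze usable signal out of noisy, low-SNR recordings.
        self.encoder = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 128, kernel_size=7, stride=2), nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=7, stride=2), nn.GELU(),
        )
        self.classifier = nn.Linear(128, VOCAB_SIZE)

    def forward(self, eeg: torch.Tensor) -> torch.Tensor:
        # eeg: (batch, channels, time) -> logits over the vocabulary
        features = self.encoder(eeg).mean(dim=-1)   # pool over time
        return self.classifier(features)

# Training loop on random stand-in data, purely to show the end-to-end wiring.
model = EEGToTextDecoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):
    eeg = torch.randn(8, N_CHANNELS, WINDOW)        # fake EEG windows
    targets = torch.randint(0, VOCAB_SIZE, (8,))    # word the user "thought"
    loss = loss_fn(model(eeg), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

In practice, the hard part is less the architecture than the data: the recent progress described above comes largely from collecting much larger paired recordings of brain activity and language to train these decoders.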
If BCI technology improves enough, in the future we will not need to type or talk to our devices to communicate with them – we will simply think commands at them, completely hands-free, using this new “thought to text” technology. This technology could also unlock silent communication, allowing users to “think” messages to others rather than texting or calling them. Additionally, “thought to image” technology could enable users to share what they are seeing with other people without needing to describe the situation in words.
The Department of Defense (DoD) has long been interested in BCI technology and has funded BCI research. In the future, soldiers operating in sensitive environments could use BCI-enabled devices to communicate silently and hands-free with friendly forces, while also monitoring the health, cognitive state, and brain activity of their teammates. They may also be able to use new BCI technologies to command and control swarms of unmanned systems. In one $35M DARPA project, researchers showed that a paralyzed volunteer with an implanted BCI could command and control a simulated F-35 and could even control multiple aircraft at once. DARPA has also experimented with using BCI for “silent talk,” allowing soldiers to communicate without speaking aloud, and the Air Force and the Army have both worked to use EEGs and other BCI technology to monitor soldiers’ brain activity and health during tasks like piloting aircraft (e.g., to monitor whether they are drowsy or alert, stressed or calm). This kind of technology could also be integrated with mixed reality headsets like the future IVAS, allowing users to communicate with and control virtual environments using thought alone.
National Security Applications
A next-generation AI-native hardware device (whether equipped with BCI or not) will certainly make its way to the national security community. I expect that DoD adoption of this new technology will follow a path similar to the smartphone’s: first, the smartphone proliferated throughout the consumer market; then, several years later, DoD rolled out a hardened version of the device equipped with DoD-specific software like ATAK.[4] Hopefully DoD will be able to adopt this next-generation system more quickly than it adopted smartphones, which took several years to integrate throughout the Department. A next-generation, AI-native consumer device holds significant potential for national security customers: hands-free operation via natural language or BCI, deep user context through integrated sensor data, and AI agents capable of executing tasks autonomously, all of which let users stay mission-focused and act as a true force multiplier.
The DoD market has the potential to be large – the IVAS contract alone is worth up to $22B, and according to Obviant data, DoD has spent more than $600M on TAK and related technologies. Back in 2015, when the iPhone was taking off, the U.S. government was spending more than $1.2B each year on mobile devices. Some particularly promising national security use cases for this new technology include: enhanced situational awareness; next-gen command and control (C2); maintenance, repair, and overhaul (MRO); manufacturing; and next-gen engineering tools.
Situational awareness & C2: A new device, particularly one with some mixed reality functionality, could replace handheld devices like ATAK with more immersive blue force tracking, threat overlays, sensor fusion, and AI-guided decision support for improved situational awareness and command and control. Battlefield management and situational awareness could be transformed by integrating these devices into CJADC25 architectures. Note that the headset that Anduril is building with Meta, called Eagle Eye, will be integrated with Lattice, Anduril’s CJADC2 software platform. Based on user commands and full situational awareness, AI agents deployed on the device could task autonomous systems and other automated systems to complete a given mission. Afterwards, AI agents can draft intelligence and after action reports based on data collected during a mission.
Maintenance, Repair, and Overhaul: MRO is a major bottleneck for DoD. An agentic device can guide personnel through complex repairs in the field using manuals, visual inputs, and real-time diagnostics, allowing non-experts to quickly repair and maintain complex systems. Computer vision can automatically detect defects and highlight areas on a system that need attention. The Marines have experimented with using AR to enable experts to remotely guide a non-expert through a maintenance process. Similarly, the Army has used AI-enabled AR to show a technician where to apply paint on a rocket launch system for maintenance.
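As a toy example of the defect-highlighting step, here is a deliberately simplified sketch using OpenCV on a synthetic image; a fielded system would run trained detection models on real headset imagery rather than a brightness threshold, but the flow – find anomalous regions, box them, and surface them on the technician’s display – is the same.

```python
import cv2
import numpy as np

# Synthetic stand-in for a headset camera frame: a uniform gray panel with a
# dark blemish standing in for corrosion or a missing fastener.
frame = np.full((240, 320, 3), 180, dtype=np.uint8)
cv2.circle(frame, (200, 120), 12, (40, 40, 40), thickness=-1)

# 1. Convert to grayscale and flag pixels that deviate from the panel.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY_INV)

# 2. Group anomalous pixels into candidate defects.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 3. Draw an attention box around each candidate so the AR display can
#    highlight it in the technician's field of view.
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(frame, (x - 5, y - 5), (x + w + 5, y + h + 5), (0, 0, 255), 2)
    print(f"possible defect at x={x}, y={y}, size={w}x{h}")

cv2.imwrite("annotated_frame.png", frame)
```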
Manufacturing: Similar to the MRO applications above, a new hardware device could make manufacturing personnel more effective and efficient. With full user context, the device could catch defects or other problems introduced during the manufacturing process. Additionally, a new device could walk manufacturers through complex tasks. For example, Taqtile builds an AI-enabled AR application that walks technicians through work instructions during assembly.
Engineering Tools: A new device with some mixed or virtual reality component could enable engineers to visualize, design, and interact with complex 3D models in immersive environments, accelerating prototyping and improving collaboration. When combined with AI, these tools support real-time simulation, guided assembly, and faster iteration from concept to deployment. Agents may be able to help generate and optimize engineering designs in real time based on natural language descriptions or even 2D drawings.
Consumer use cases for this technology are seemingly endless and include gaming, knowledge augmentation, cooking assistants, fitness coaches, productivity enhancers, and much more. A new consumer device could also improve consumer security. One of the largest threats to consumers today is “social engineering,” a set of tactics hackers and criminals use to trick victims into handing over money or sensitive information like passwords. This new class of consumer devices could offer built-in protection through on-device AI agents that continuously monitor user activity and automatically detect suspicious behavior, including social engineering scams. These AI models could actively listen to potentially fraudulent phone calls and analyze chat messages in real time, flagging or interrupting interactions with scammers before harm is done. This means vulnerable populations such as the elderly would be far less likely to fall victim to classic schemes like the infamous “Nigerian prince” scam. Beyond social engineering, edge-deployed AI could also detect traditional malware, autonomously adjusting security settings or performing updates to neutralize threats. This level of protection would be especially critical for high-risk users, such as government officials and journalists, who are frequent targets of sophisticated cyberattacks on their mobile devices.
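As a highly simplified sketch of that screening step – a real device would run a local language model over the full conversation and the user’s context, not the keyword heuristics used here – an on-device agent could score each incoming message or call transcript and warn the user, or interrupt the interaction, before any harm is done.

```python
from dataclasses import dataclass

# Crude stand-in scoring table; illustration only.
SUSPICIOUS_PHRASES = {
    "wire transfer": 0.4, "gift card": 0.5, "verification code": 0.5,
    "act now": 0.3, "prince": 0.4, "password": 0.4,
}
BLOCK_THRESHOLD = 0.7

@dataclass
class ScreeningResult:
    score: float          # 0.0 (benign) to 1.0 (almost certainly a scam)
    reasons: list[str]    # which indicators fired
    action: str           # "allow", "warn", or "block"

def screen_message(text: str) -> ScreeningResult:
    lowered = text.lower()
    hits = [phrase for phrase in SUSPICIOUS_PHRASES if phrase in lowered]
    score = min(1.0, sum(SUSPICIOUS_PHRASES[p] for p in hits))
    if score >= BLOCK_THRESHOLD:
        action = "block"   # interrupt the call or hold the message
    elif hits:
        action = "warn"    # surface an on-device warning to the user
    else:
        action = "allow"
    return ScreeningResult(score, hits, action)

print(screen_message("Dear friend, I am a prince and need a wire transfer."))
print(screen_message("Dinner at 7 tonight?"))
```

Because everything runs on the device, the screening never ships private calls or messages to the cloud, which is what makes this kind of always-on protection plausible for ordinary users.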
Frankly, I don’t know exactly when this next generation of consumer devices will be developed and gain widespread adoption. However, I predict that it will occur some time in the next five years. In order for this shift to occur, we will need to see breakthroughs in edge-deployable, efficient AI models, battery technology, connectivity, chips, low-cost sensors, materials, and more. Just as with the personal computer and then the mobile era, when this shift occurs I expect that startups will see significant opportunity to build at the application layer, creating capabilities that would not have been possible without this new hardware.
Just as the iPhone redefined mobile computing and catalyzed an entire app-based economy, the next-generation consumer device will usher in a new paradigm, merging multimodal AI, advanced sensors, edge computing, and potentially even brain-computer interfaces. These devices will not merely augment human capability; they will actively partner with users, offering hands-free, context-aware, and secure assistance in real time. Whether empowering a warfighter on the battlefield, guiding a technician through a complex repair, or protecting a grandparent from a scam, the promise of this new platform is profound. The winners of this shift – startups, investors, governments, and consumers – will be those who recognize that the AI-native future is not science fiction. It’s already being built.
1. I highly recommend listening to this interview with Palmer Luckey about Anduril’s new partnership with Meta on IVAS.
2. fMRI = functional magnetic resonance imaging. fMRI is a non-invasive imaging technique that measures brain activity by detecting changes in blood flow, providing high-resolution maps of neural function in real time.
3. EEG = electroencephalography. EEG is a non-invasive technique that measures electrical activity in the brain using sensors placed on the scalp, commonly used for neurological diagnostics and brain-computer interface research.
4. ATAK = Android Team Awareness Kit. ATAK is a geospatial situational awareness app used by military, law enforcement, and first responders to share real-time location, imagery, and mission data on a secure, mobile platform.
5. CJADC2 = Combined Joint All Domain Command and Control. CJADC2 is a DoD-wide initiative designed to improve the integration and interoperability of U.S. military forces across all domains and services. The goal is to provide a unified, cohesive approach to military operations, enabling faster and more efficient decision-making and response times. For more, see the DoD’s Summary of the Joint All-Domain Command and Control Strategy.