Artificial Intelligence - Yanko Design
Modern Industrial Design News

Copilot Fellow Concept is an AI Pendant That Feels More Like a Friend Than a Gadget
https://www.yankodesign.com/2025/07/04/copilot-fellow-concept-is-an-ai-pendant-that-feels-more-like-a-friend-than-a-gadget/
Fri, 04 Jul 2025 16:20:09 +0000

Ever feel like AI is always hiding in the background, tucked away on your phone or buried behind a dozen browser tabs? Most of us interact with digital assistants through screens, which, let’s be honest, makes technology feel a little distant from our actual lives. But what if AI could be more present, accessible, and even a little bit stylish? That’s where the Copilot Fellow concept comes in, and honestly, it’s hard not to get a little excited about the idea.

The Copilot Fellow isn’t just another gadget to add to your collection. Picture a pill-shaped device, smooth and minimal, with a flat front and back. The front features a camera and a bold Copilot button, the heart of the design. Tap it, and you’re instantly connected to your AI assistant, ready to ask a question, set a reminder, or get a quick weather update. It’s designed to be intuitive, something you can operate without fiddling around or losing your train of thought.

Designer: Braz de Pina

What really makes Copilot Fellow stand out, though, are the four shortcut buttons, two on each side, that you can program for your favorite prompts or voice commands. Imagine setting one for “What’s on my calendar?” and another for “Send a quick note.” There’s no scrolling through endless menus or getting lost in settings. It’s simple, direct, and focused on the way you actually use AI day to day.

Now, here’s a little twist to that premise: While the front is all about that single, satisfying Copilot button, the back features a discreet screen. If you want to read your prompts or see some quick info, it’s there. But since it’s hidden away on the reverse side, it never gets in the way. This design choice keeps your interactions as screen-free as you want them to be, which feels like a breath of fresh air in our notification-heavy world.

One of the coolest things about Copilot Fellow is how you can wear it. You’re not locked into any one style: it works as a pendant around your neck, or you can just toss it in your pocket. It feels less like another gadget and more like a little presence you carry with you, always ready to help but never demanding attention. There’s something almost companion-like about it, which is a big leap from the usual “Hey Siri” or “Okay Google” voice floating out of your phone.

It’s important to remember that this is still a concept design, and Microsoft is unlikely to make one itself. But honestly, wouldn’t it be cool if someone DIY’ed their own version? The simplicity and flexibility make it feel approachable, even for tinkerers. Copilot Fellow reimagines how we might invite AI into our lives: more personal, more tangible, and a lot more stylish. Would you wear your AI around your neck, or are you sticking with the old-school phone in your pocket? Either way, this concept makes us rethink what AI gadgets could be.

The post Copilot Fellow Concept is an AI Pendant That Feels More Like a Friend Than a Gadget first appeared on Yanko Design.

Sony Game Controller Concept dazzles with a ‘goo’ inspired translucent see-through design
https://www.yankodesign.com/2025/06/29/sony-game-controller-concept-dazzles-with-a-goo-inspired-translucent-see-through-design/
Sun, 29 Jun 2025 17:20:51 +0000

Is this the future of transparent tech? The designer at ‘An Improbable Future’ definitely thinks so. With its almost organic-yet-cyberpunk aesthetic, this ‘goo’ controller feels weirdly alien yet equally inviting. There’s no way your hands can resist wanting to grab the controller’s handles and just experiment a bit with gameplay.

Before you get your hopes up, An Improbable Future’s creation is an AI-driven one. The designer works on alternate-reality products (most with Sony’s branding) visualized using AI image and video generation tools. Their catalog includes everything from standard tape recorders and cameras to odd products like cars, and Sony’s branding graces all of them either way…

Designer: An Improbable Future

This controller’s silhouette may be simple, but the design and materials make it stand out. It feels like a controller wrapped in an alien-like skin. The outer ‘goo’ is translucent, revealing the electronics within, and the handles are see-through with a cloudy, frosted finish. The action buttons sit beneath the ‘skin’, creating a rather odd yet somewhat appealing sensation while playing, as if the controller were a living thing.

This feels, in a way, like the next step in the transparent-tech movement created by companies like Nothing. While Nothing pushed products to ditch opaque exteriors for transparent ones, this controller goes a step further by ditching transparent plastic and glass for something that feels less ‘inorganic’. The controller has organic curves, and a material that, for lack of a better term, looks collagen- or cellulose-inspired. It might put some people off; I’m not one of them.

The rest of the controller’s fairly normal. Standard joypads on the left and right, under-shell lighting that shines through, and I can’t really tell what that dark/navy blue element between the handles is, but my guess is either some transmitter, battery pack, or plug-and-play speaker. An Improbable Future doesn’t ‘explain’ their work… and AI famously doesn’t justify its design choices either.


RIP Snow Globes: OBBOTO brings a miniaturized AI-powered 360° Vegas Sphere to your Desktop
https://www.yankodesign.com/2025/06/20/rip-snow-globes-obboto-brings-a-miniaturized-ai-powered-360-vegas-sphere-to-your-desktop/
Sat, 21 Jun 2025 01:45:52 +0000

Remember when snow globes were the pinnacle of desktop decoration? Those quaint glass domes with their floating plastic flakes have officially been relegated to the analog past. The OBBOTO Glowbot has arrived to claim that precious real estate on your desk, shrinking the iconic Las Vegas Sphere down to a personal-sized emotional companion. It’s essentially what you get when someone decides to create the world’s first STEM snow globe: 2,900+ individually addressable pixels, AI-powered interactions, and enough emotional intelligence to make your smart speaker look positively stoic by comparison.

The spherical design immediately draws comparisons to the $2.3 billion Las Vegas Sphere, but unlike its gigantic inspiration, the OBBOTO fits comfortably on your nightstand while still delivering a visual punch that makes traditional smart home devices look painfully utilitarian. Its spherical form houses thousands of light nodes that can display anything from infinity emojis to timers to ambient patterns that pulse with your music. The OBBOTO team has managed to pack an impressive amount of personality into a device that, at first glance, might be mistaken for a particularly sophisticated mood lamp. That, however, would be selling it dramatically short.

Designer: OBBOTO

Click Here to Buy Now: $159 (down from the $249 MSRP, 36% off). Hurry! Only 72 of 800 left. Raised over $220,000.

The OBBOTO excels at emotional expression in ways that make Amazon’s Echo devices seem emotionally stunted. The infinity emoji feature transforms the sphere into a reactive face that can display a range of emotions, creating a surprisingly effective illusion of companionship. The sphere’s expressions change based on interactions, time of day, and ambient conditions, giving it an almost pet-like quality without the inconvenience of feeding schedules or walks. This emotional reactivity creates a strangely compelling presence that bridges the gap between a utilitarian smart home device and something that feels genuinely alive. The fact that it can also function as a sunrise alarm clock, gradually illuminating your bedroom with a simulated dawn, adds practical functionality to what might otherwise be dismissed as a novelty item.

Built with an impressive array of sensors, the OBBOTO brings genuine situational awareness to your desk. With built-in motion, brightness, and tap sensors, your Glowbot stays perpetually aware of its surroundings. It lights up when you walk by, dims when the sun comes out, and perks up with a friendly tap. The experience feels less like operating a gadget and more like keeping a tiny creature that knows you’re there and vibes accordingly. This ambient intelligence creates interactions that feel natural rather than forced, responding to your presence without demanding your attention.

The intelligence behind OBBOTO goes beyond simple sensor triggers. Powered by AI, OBBOTO reacts to time, weather, and movement in ways that feel playful and surprisingly clever. From sunrise glows to moody pixel shifts, it adapts like the geeky little desk buddy you never knew you needed. The onboard AI combines environmental data with your interaction patterns to create responses that feel increasingly personalized over time. This subtle learning capability means your OBBOTO develops a sort of digital personality that reflects your habits and preferences.

The companion app transforms OBBOTO from clever gadget to creative canvas. Unlike most IoT devices that force you into rigid patterns, the OBBOTO app hands you the creative keys. You can upload your own pixel art, program mini light shows, and schedule custom routines. The app also allows you to share emotions with other OBBOTO users, sending surprise emojis like a wink, heart, or even a silly poop to another OBBOTO. This social dimension adds an unexpected layer of connection between devices, creating a network of emotional expression that transcends typical smart home functionality.

Music visualization with OBBOTO creates a sensory experience that standard smart lights can’t match. Turn on Music Mode, and OBBOTO syncs its glow to your beats, transforming from companion to visual rhythm section. The sphere pulses, shifts, and flows with your music, creating a synchronized light show that enhances rather than distracts from the audio experience. The 2,900+ pixel nodes create surprisingly complex visualizations that respond to different frequencies and intensities, making each listening session visually unique.
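OBBOTO’s firmware isn’t public, but the frequency-reactive lighting described above can be sketched with a toy model (my own illustration, with assumed band ranges): split each audio frame into frequency bands via an FFT and map each band’s energy to a brightness level for a group of pixels.

```python
# Illustrative sketch of frequency-reactive lighting; band edges and the
# normalization scheme are assumptions, not OBBOTO's actual implementation.
import numpy as np

def band_brightness(samples, sample_rate,
                    bands=((20, 250), (250, 2000), (2000, 8000)),
                    max_level=255):
    """Return one 0-255 brightness value per band for a single audio frame."""
    spectrum = np.abs(np.fft.rfft(samples))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Sum spectral energy inside each band's frequency range.
    energies = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    peak = max(energies) or 1.0                             # avoid divide-by-zero
    return [int(max_level * e / peak) for e in energies]

# Example: a pure 440 Hz tone should light the mid band brightest.
if __name__ == "__main__":
    sr = 8000
    t = np.arange(1024) / sr
    tone = np.sin(2 * np.pi * 440 * t)
    print(band_brightness(tone, sr))
```

In a real device, each frame of microphone input would be run through something like this, with the per-band levels driving the brightness of different rings or regions of the 2,900+ pixel array.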

OBBOTO’s communication style breaks from the voice-assistant paradigm that dominates smart home tech. What’s its language? Light, pixels, and pure personality. No words, just winks, colors, and pixel-powered expressions. From ambient fades to emoji bursts, every flicker says: “Hey, I see you.” This visual language creates a more ambient, less intrusive form of interaction than the typical smart speaker experience. Rather than interrupting your thoughts with voice responses, OBBOTO communicates through presence, creating a companionable atmosphere rather than demanding conversational engagement.

The practical features of OBBOTO extend beyond its emotional appeal. It functions as a sunrise alarm clock, gradually illuminating your bedroom with simulated dawn light. The white noise generator provides ambient soundscapes for focus or relaxation. Time, weather, and reminders appear as visual cues rather than audio interruptions. These utility functions integrate seamlessly with OBBOTO’s personality, never breaking character to deliver information. The result is a device that remains consistently charming even when performing mundane tasks.

OBBOTO was incubated by Switchbot, a company that most Yanko Design readers will recognize from our consistent admiration of their clever retrofit smart home solutions. This lineage explains the OBBOTO’s perfect balance of whimsy and functionality. Switchbot has consistently demonstrated an understanding of how to make IoT devices that solve real problems without sacrificing personality, and the OBBOTO represents perhaps their most playful creation yet. The emotional intelligence built into the sphere feels like a natural extension of Switchbot’s approach to smart home technology, bringing a level of character and charm that has been noticeably absent from the category.

The OBBOTO Glowbot is up for pre-order at $159, a good 90 bucks off its $249 final MSRP. Deliveries are slated for October 2025, and each Glowbot comes with a 12-month warranty. The box includes the Glowbot itself, a power adapter, and quick-start materials. Customization kits and add-on accessories will be rolling out alongside the core device, expanding the Glowbot’s already impressive palette of expression. For anyone seeking a desktop companion that balances emotional connection with practical functionality, OBBOTO offers a compelling alternative to the increasingly homogeneous world of smart home technology.



Meta x Oakley’s $399 AI Smart Glasses pack 3K Video Camera and 8-hour Battery in an IPx4 Design
https://www.yankodesign.com/2025/06/20/meta-x-oakleys-399-ai-smart-glasses-pack-3k-video-camera-and-8-hour-battery-in-an-ipx4-design/
Fri, 20 Jun 2025 20:45:51 +0000

If you told me a few years ago that Meta would be coming for the action camera market, I might have laughed. They’ve tried hard to make cameras cool (remember their Portal device that bombed so badly amid lingering backlash from the Cambridge Analytica scandal?), but somehow it never worked. Two years ago, something sort of clicked with their Ray-Ban collaboration and set us on this new collision course. Here we are now, with Meta’s new Oakley Meta HSTN AI smart glasses gunning for the GoPros, Insta360s, and DJI Actions currently rocked by sportspeople and adrenaline-junkies.

These things are real. They start at $399, and they are gunning for more than just your face – they want to take a bite out of the action camera market, too. The Ray-Ban Meta glasses were Meta’s tentative foot in the door. The Oakley Meta HSTN kicks that door off its hinges. The company has already sold millions of Ray-Ban Meta units, proving that consumer appetite for smart eyewear exists. Now they’re doubling down with hardware specifically engineered for athletes, weekend warriors, and content creators who need hands-free recording during high-intensity activities. The newly debuted HSTN boasts 3K Ultra HD video capture, packs eight hours of battery life, and comes wrapped in Oakley’s unmistakably athletic DNA.
Designers: Meta & Oakley

This is far more than an incremental upgrade over the Gen 2 Ray-Bans, and it’s absolutely obvious. With the HSTN, Meta wants to put the action in action camera, and probably sprinkle in a bit of AI secret sauce too. The design, internals, and target audience are vastly different compared to the Ray-Bans, even though the broader intent is the same. The Oakley Meta HSTN is designed for sweat, speed, and spectacle. Mbappé is their poster child (among other world-class athletes), because if you want to sell performance, you tap a World Cup winner, not a coffeehouse poet. The message is loud: these glasses were made for people who move, not just people who pose.

Let’s get nerdy with the hardware. The Oakley Meta HSTN packs a 3K Ultra HD camera right into the bridge of the frame. It’s a huge leap over the Ray-Ban Gen 2’s 1080p shooter, making this the first mainstream smart glass product with specs that could keep up with action cams from DJI, Insta360, and GoPro. You get up to 8 hours of battery on a single charge, with a quick-charge feature that juices you up to 50 percent in 20 minutes. The included case isn’t just a glorified pouch; it’ll provide up to 48 hours of additional charging. Water resistance? IPX4 rated, so you can sweat, sprint, or get caught in the rain without babying your tech. Open-ear speakers are stealthily nestled in the arms, serving up playlists, podcasts, or play-by-play, with enough volume to cut through the noise at a game or on a run.

Meta’s AI is baked in, granting hands-free access to voice commands, contextual info, and smart capture. You want to ask your glasses about the weather, get training tips, or even analyze your splits mid-run? That’s the pitch. There’s a multi-mic array to keep your commands crisp and clear, even when you’re surrounded by shouting fans or roaring engines. The glasses connect to the Meta AI app, and require a Meta account, with AI and voice features rolling out to more countries as the product line expands. For creators, athletes, and anyone who wants to shoot unique POV content, these specs mean you can finally leave the chest rig and headband at home without sacrificing quality.

Compared to the Ray-Ban Meta, the Oakley Meta HSTN is a different animal. The Ray-Bans are about stealth, elegance, and everyday wear; the Oakleys are built for performance and presence. The battery is beefier, the camera is sharper, and the durability is much higher. While Ray-Bans are for brunch and city walks, Oakleys are for the gym, the trail, the field, and the skate park. The HSTN model even has a limited edition with gold accents for $499, because if you’re going to flex, you might as well sparkle. Standard models start at $399, which undercuts some high-end action cams and puts pressure on the traditional wearable market. Meta isn’t just challenging Snap Spectacles here; they’re poking the bear in GoPro’s den.

The timing could not be more perfect. The action camera market is hungry for innovation, and Meta knows it. DJI, Insta360, GoPro – pay attention. Meta and Oakley are coming straight for your users with a product that is lighter, more discreet, and, crucially, wearable all day. If Meta can deliver on video quality, stabilization, and ease of sharing, this could mark the first time a face-worn device genuinely competes with a chest-mounted GoPro for active creators.

The design has real presence. Oakley’s HSTN frame is unmistakable, bold, and built for movement. The open-ear audio means you don’t sacrifice awareness for music. The 3K camera means your highlight reel will actually look good on a big screen, not just a phone. The IPX4 water resistance means you can train hard without flinching. Add in Meta’s relentless AI push and you have a product that’s more than a stunt: it’s a shot across the bow for every company still stuck in the “wearable for the living room” mindset.

The global rollout strategy reveals Meta’s serious commitment to this product category. Initial availability across North America, Australia, and several European countries, followed by expansion to Mexico, India, and the UAE by year-end, demonstrates significant manufacturing and distribution investment. The debut at major sporting events like Fanatics Fest and UFC International Fight Week positions the glasses directly within performance culture rather than general consumer electronics. Meta understands that authentic adoption by athletes and sports influencers will drive broader market acceptance more effectively than traditional tech marketing approaches.


Humane AI Pin gets resurrected thanks to a new free Open-Source Platform
https://www.yankodesign.com/2025/06/19/humane-ai-pin-gets-resurrected-thanks-to-a-new-free-open-source-platform/
Thu, 19 Jun 2025 19:15:59 +0000

So you splurged $700 on that sleek Humane AI Pin last year, dazzled by promises of an AI-powered future clipped to your lapel. Now it sits in your drawer, a pricey trinket that does little more than project the time onto your palm when it’s not busy being a conversation starter about tech investments gone wrong. “What’s that thing?” “Oh, just my $700 mistake.”

Welcome to the club of Humane Pin owners, early adopters who bet on the wrong horse. When Humane shuttered its services shortly after launch, the Pin became the tech equivalent of a fancy paperweight, its cloud-dependent features evaporating into the digital ether. But before you relegate it to your personal museum of tech failures (right next to your 3D glasses and Google Glass), there’s a fascinating plot twist unfolding in the form of PenumbraOS, an experimental SDK that might just redeem your investment.

Designer: PenumbraOS

Developer Adam Gastineau has spent roughly 400 hours reverse-engineering the device to create PenumbraOS, transforming the abandoned wearable into a legitimate development platform. And before you second-guess things: it’s not some hacked-together demo, but a comprehensive attempt to liberate the hardware from its original constraints, treating the Pin as a specialized Android device with unique capabilities. Gastineau has shared a post demonstrating PenumbraOS in action.

The magic of PenumbraOS lies in its ability to allow “untrusted” applications to perform privileged operations, essentially cracking open the device’s potential. At its heart is MABL, a modular user-facing assistant app that replaces the now-defunct official AI interface. While still experimental, the architecture is solid enough to build upon, offering a framework for developers to explore what’s possible with the Pin’s unique hardware suite.

What makes this particularly interesting is the hardware itself. The Humane Pin isn’t just any failed gadget; it’s a device with genuinely innovative features, including that miniature projector, an array of sensors, and a form factor that still represents where wearable tech might eventually go. Through PenumbraOS, these components are getting a second chance to shine, freed from the walled garden that ultimately doomed them.

For the tech-curious with a Pin gathering dust, this represents an unexpected opportunity. Instead of an embarrassing reminder of tech hype gone wrong, your Pin could become a playground for experimentation, joining a small but dedicated community of developers exploring the road not taken by Humane.

The project is openly seeking contributors on GitHub, turning what was once a closed ecosystem into an open-source adventure. It’s a beautiful example of the hacker ethos: when companies fail, communities can step in to salvage the technological potential left behind. Now, if only someone would do something about Spotify’s Car Thing.

So, before you toss that AI Pin into your junk drawer permanently, consider giving it a new life with PenumbraOS. After all, the most interesting tech stories often happen after the official narrative ends, when passionate developers refuse to let good hardware go to waste, even if the original vision didn’t quite pan out.


Angry AI Study Lamp Keeps You Honest and Off Your Phone
https://www.yankodesign.com/2025/06/17/angry-ai-study-lamp-keeps-you-honest-and-off-your-phone/
Tue, 17 Jun 2025 13:20:05 +0000

Picking up our phones in the middle of a work session or while studying has become a reflex that is almost impossible to resist. The smartest among us do not even try to rely on sheer willpower, instead turning to apps and services that promise to block out distractions. Sadly, these digital helpers do not always deliver, and sometimes they ask for more time, money, or personal data than most of us are willing to give.

That is where this AI-powered DIY lamp comes in, bringing back the classic method of shaping behavior: immediate, memorable consequences. In this case, it takes the form of a desk lamp that does not just shine a light, but gets hilariously angry when it catches you slacking off. There is something delightfully old-school about a gadget that reacts with a scolding glare and a grumpy voice, all in the name of keeping you on track.

Designer: Arpan Mondal (Makestreme)

It all began as a bit of a joke, the kind of idea you toss out with no real intention of making it happen. But as the pieces started coming together, it became clear that this could actually be a clever way to help people build better habits. Now, the lamp is a real, working device that glows a fierce red and shouts at you whenever it spots a phone on your desk; no excuses, no mercy, just instant feedback.

Building the lamp is surprisingly approachable for anyone with a bit of DIY experience. The main ingredients are a Raspberry Pi 4, a Pi Camera HQ, a ring of programmable LEDs, and a small speaker. Toss in a 3D-printed or laser-cut shell, and you’ve got all you need. The camera keeps watch over your workspace, while the Raspberry Pi runs a custom-trained AI model that can spot a phone in an instant.

What sets this lamp apart is how simple and effective it is. All the image processing happens right on the device, so you do not have to worry about your camera sending anything to the cloud. The lamp does not keep logs or track your habits; it just reacts in real time, rewarding you with a calming white glow when you stay focused and unleashing its full fury when you slip up. The immediate feedback makes it much harder to ignore your own bad habits.
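Makestreme’s exact code isn’t shown here, but the detect-and-react loop described above can be sketched roughly like this (Detection, led_state, and should_scold are my own illustrative names; the real build runs a custom-trained vision model on frames from the Pi Camera):

```python
# Hypothetical sketch of the lamp's core logic, not the actual Makestreme code:
# a detector returns labeled boxes per camera frame, and a pure decision step
# maps them to an LED state for the ring of programmable LEDs.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

# LED states as (R, G, B) tuples.
CALM_WHITE = (255, 255, 255)
ANGRY_RED = (255, 0, 0)

def led_state(detections, threshold=0.6):
    """Return the ring color: red if a phone is confidently detected, else white."""
    phone_seen = any(d.label == "phone" and d.confidence >= threshold
                     for d in detections)
    return ANGRY_RED if phone_seen else CALM_WHITE

def should_scold(prev_state, new_state):
    """Only play the angry voice clip on the transition into the red state."""
    return new_state == ANGRY_RED and prev_state != ANGRY_RED

# On the real device this loop would grab frames from the Pi Camera and run
# the detector; here we just simulate two frames.
if __name__ == "__main__":
    frames = [
        [Detection("mug", 0.9)],      # focused: stay white
        [Detection("phone", 0.82)],   # slacking: go red, scold once
    ]
    state = CALM_WHITE
    for dets in frames:
        new_state = led_state(dets)
        if should_scold(state, new_state):
            print("Lamp: put the phone down!")
        state = new_state
```

Keeping the detect-to-color decision a pure function like this is also what makes the privacy story easy: frames go in, an LED color comes out, and nothing needs to be stored or uploaded.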

That does not mean it is perfect. You will need a bit of tech know-how to put everything together and train the AI if you want to tweak what it looks for. The lamp also needs to be placed just right, with good lighting, to do its job well. But for anyone willing to dive into a fun project and who needs a little extra push to stay off their phone, this lamp is hard to beat.


Apple’s Liquid Glass Hands-On: Why Every Interface Element Now Behaves Like Physical Material
https://www.yankodesign.com/2025/06/12/apples-liquid-glass-hands-on-why-every-interface-element-now-behaves-like-physical-material/
Thu, 12 Jun 2025 17:20:17 +0000

Liquid Glass represents more than an aesthetic update or surface-level polish. It functions as a complex behavioral system, precisely engineered to dictate how interface layers react to user input. In practical terms, this means Apple devices now interact with interface surfaces not as static, interchangeable panes, but as dynamic, adaptive materials that fluidly flex and respond to every interaction. Interface elements now behave like physical materials with depth and transparency, creating subtle visual distortions in content beneath them, like looking through textured glass.

Designer: Apple

This comprehensive redesign permeates every pixel across the entire Apple ecosystem, encompassing iOS, iPadOS, macOS, and watchOS, creating a consistent experience regardless of platform. Born out of close collaboration between Apple’s design and engineering teams, Liquid Glass uses real-time rendering and dynamically reacts to movement with specular highlights. The system extends from the smallest interface elements (buttons, switches, sliders, text controls, media controls) to larger components including tab bars and sidebars. What began as experimental explorations within visionOS has evolved into a foundational cornerstone across all of Apple’s platforms.

Yanko Design (Vincent Nguyen): What was that initial simple idea that sparked Liquid Glass? And second, how would you describe the concept of “material” in this context to everyday users who don’t understand design?

Alan Dye (VP of Human Interface Design, Apple): “Well, two things. I think what got us mostly excited was the idea of whether we could create a digital material that could morph and adapt and change in place, and still have this beautiful transparency so it could show through to the content. Because I think, initially, our goal is always to celebrate the user’s content, whether that’s media or the app.”

 

This technical challenge reveals the core problem Apple set out to solve: creating a digital material that maintains form-changing capabilities while preserving transparency. Traditional UI elements either block content or disappear entirely, but Apple developed a material that can exist in multiple states without compromising visibility of underlying content. Dye’s emphasis on “celebrating user content” exposes Apple’s hierarchy philosophy, where the interface serves content instead of competing with it. When you tap to magnify text, the interface doesn’t resize but stretches and flows like liquid responding to pressure, ensuring your photos, videos, and web content remain the focus while navigation elements adapt around them.

“And then in terms of what we would call the data layer, we liked the idea that every application has its content. So Photos has all the imagery of your photos. We want that to be the star of the show. Safari, we want the webpage to be the focal point. So when you scroll, we’re able to get those controls out of the way, shrink the URL field in that case.”

Apple has established a clear priority system where Photos imagery, Safari web pages, and media content take precedence over navigational elements, instead of treating interface chrome and user content as equal elements competing for attention. This represents a shift from interface-centric design to content-centric design. The practical implementation becomes apparent when scrolling through Safari, where the URL field shrinks dynamically, or in Photos, where the imagery dominates the visual hierarchy while controls fade into the background. Controls fade and sharpen based on what you’re doing, creating interfaces that feel more natural and responsive, where every interaction provides clear visual feedback about what’s happening and where you are in the system.

“For everyday users, we think there’s this layer that’s the top level. Menu systems, back buttons, and controls. And then there’s the app content beneath. That’s how we determine what’s the glass layer versus the application layer.”

Dye’s explanation of the “glass layer versus application layer” architecture provides insight into how Apple technically implements this philosophy. The company has created a distinct separation between functional controls (the glass layer) and user content (the application layer), allowing each to behave according to different rules while maintaining visual cohesion. This architectural decision enables the morphing behavior Dye described, where controls can adapt and change while content remains stable and prominent.

The Physical Reality Behind Digital Glass

During one of Apple’s demo setups, my attention was drawn to a physical glass layer arranged over printed graphics. This display served as a tangible simulation of the refractive effect that Liquid Glass achieves in the digital realm. As I stood above the installation, I could discern how the curves and layering of the glass distorted light, reshaping the visual hierarchy of the underlying graphics. This physical representation was more than a decorative flourish; it served as a bridge, translating the complex theoretical underpinnings of Apple’s design approach into something tactile and comprehensible.

That moment of parallax and distortion functioned as a compelling real-world metaphor, illustrating how interface controls now transition between foreground and background elements. What I observed in that physical demonstration directly translated to my hands-on experience with the software: the same principles of light refraction, depth perception, and material behavior that govern real glass now influence how digital interfaces respond to interaction.

Hands-On: How Liquid Glass Changes Daily Interactions

My hands-on experience with the newly refreshed iOS 26, iPadOS 26, macOS Tahoe, and watchOS 26 immediately illuminated the essence of Liquid Glass. What Apple describes as “glass” now transcends static texture and behaves as a dynamic, responsive environment. Consider the tab bars in Music or the sidebar in the Notes app: as I scrolled through content, subtle distortions became apparent beneath these interface elements, accompanied by live refraction effects that gently bent the underlying content. The instant I ceased scrolling, this distortion smoothly resolved, allowing the content to settle into clarity.

My focus this year remained on the flat-screen experience, as I did not demo Vision Pro or CarPlay. iOS, iPadOS, and macOS serve as demonstrations of how Liquid Glass adapts to various input models, with a mouse hover eliciting distinct behaviors compared to direct tap or swipe. The material possesses understanding of when to amplify content for prominence and when to recede into the background. Even during media playback, dynamic layers expand and contract, responding directly to how and when you engage with the screen.

The lock screen clock exemplifies Liquid Glass principles perfectly. The time display dynamically scales and adapts to the available space behind it, creating a sense that the interface is responding to the content instead of imposing rigid structure upon it. This adaptive behavior extends beyond scaling to include weight adjustments and spacing modifications that ensure optimal legibility regardless of wallpaper complexity.

On macOS, hovering with a mouse cursor creates subtle preview states in interface elements. Buttons and controls show depth and transparency changes that indicate their interactive nature without overwhelming the content beneath. Touch interactions on iOS and iPadOS create more pronounced responses, with elements providing haptic-like visual feedback that corresponds to the pressure and duration of contact. The larger screen real estate of iPadOS allows for more complex layering effects, where sidebars and toolbars create deeper visual hierarchies with multiple levels of transparency and refraction.

The difference from current iOS becomes apparent in specific scenarios. In the current Music app, scrolling through your library feels like moving through flat, static layers. With Liquid Glass, scrolling creates a sense of depth. You can see your album artwork subtly shifting beneath the translucent controls, creating spatial awareness of where interface elements sit in relation to your content. The tab bar doesn’t just scroll with you; it creates gentle optical distortions that make the underlying content feel physically present beneath the glass surface.

However, the clear aesthetic comes with notable trade-offs. While the transparency creates visual depth, readability can suffer in certain lighting conditions or with complex wallpapers. Apple has engineered an adaptive system that provides light backgrounds for dark content and dark backgrounds for light content, but the system faces challenges when backgrounds contain mixed lighting conditions. Testing the clear home screen option, where widgets and icons adopt full transparency, reveals a striking aesthetic but raises practical concerns. The interface achieves a modern, visionOS-inspired look that feels fresh and contemporary, yet this approach can compromise text legibility, with busy wallpapers or varying lighting conditions creating readability issues that become apparent during extended use.

The challenge becomes most apparent with notification text and menu items, where contrast can diminish to the point where information becomes difficult to parse quickly. Apple provides the clear transparency as an optional setting, acknowledging that maximum transparency isn’t suitable for all users or use cases. This represents one of the few areas where the visual appeal of Liquid Glass conflicts with practical usability, requiring users to make conscious choices about form versus function.
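The light-for-dark and dark-for-light pairing Apple describes can be approximated with a standard relative-luminance check. The sketch below uses the WCAG 2.x luminance formula as an interpretation of the idea; it is not Apple's code, and the `adaptive_tint` helper and its 0.5 threshold are illustrative assumptions:

```python
def relative_luminance(r: int, g: int, b: int) -> float:
    """WCAG 2.x relative luminance for an sRGB color (components 0-255)."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def adaptive_tint(r: int, g: int, b: int) -> str:
    """Pick a dark tint behind bright backgrounds and a light tint behind
    dark ones, mirroring the adaptive pairing described above."""
    return "dark" if relative_luminance(r, g, b) > 0.5 else "light"

print(adaptive_tint(250, 250, 250))  # bright wallpaper -> dark
print(adaptive_tint(10, 10, 30))     # dark wallpaper -> light
```

As the article notes, any single threshold like this breaks down on wallpapers that mix bright and dark regions, which is exactly where the readability trade-offs surface.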

Even keyboard magnification, when activated by tapping to edit text, behaved not as resizing but as fluid digital glass reacting organically to touch pressure. This response felt natural, almost organic in its execution. The system rewards motion with clarity and precision, creating transitions that establish clear cause and effect while guiding your understanding of your current location within the interface and your intended destination. Across all platforms, this interaction dynamically ranges between 1.2x and 1.5x magnification, with the value determined by specific gesture, contextual environment, and interface density at that moment instead of being rigidly fixed.

This logic extends to watchOS, where pressing an icon or notification amplifies the element, creating magnification that feels less like conventional zoom and more like digital glass stretching forward. On the small watch screen, this creates a sense of interface elements having physical presence and weight. Touch targets feel more substantial with reflective surfaces and enhanced depth cues, making interactions feel more tactile despite the flat display surface.

While this interaction feels natural, the underlying mechanics are precisely controlled and deeply integrated. Apple has engineered a system that responds intelligently to context, gesture, and content type. Apple’s intention with Liquid Glass extends beyond replicating physical glass and instead represents recognition of the inherent qualities of physical materials: how light interacts with them, how they create distortion, and how they facilitate layering. These characteristics are then applied to digital environments, liberating them from the restrictive constraints of real-world physics.

Why This Matters for Daily Use

The result is a system that is elastic, contextually aware, and designed to recede when its presence is not required. Most individuals will not pause to dissect the underlying reasons why a particular interaction feels improved. Instead, they will perceive enhanced grounding when navigating iPadOS or watchOS, with sidebar elements conveying heightened solidity and magnification effects appearing intentional. Apple does not overtly publicize these changes; it engineers them to resonate with the user’s sense of interaction.

This translates to practical benefits: reduced cognitive load when navigating between apps, clearer visual hierarchy that helps you focus on content, and interface feedback that feels more natural and predictable. When you’re editing photos, the tools recede to let your images dominate. When you’re reading articles in Safari, the browser chrome adapts to keep text prominent. When you’re scrolling through messages, the conversation content remains clear while navigation elements provide subtle depth cues.

Liquid Glass represents a fundamental recalibration of how digital interfaces convey motion, spatial relationships, and control. The outcome is an experience that defies easy verbal articulation, yet one that you will find yourself unwilling to relinquish.

The post Apple’s Liquid Glass Hands-On: Why Every Interface Element Now Behaves Like Physical Material first appeared on Yanko Design.

Tailor Is A Playful Tabletop Robot That Brings AI Voices to Life https://www.yankodesign.com/2025/05/30/tailor-is-a-playful-tabletop-robot-that-brings-ai-voices-to-life/?utm_source=rss&utm_medium=rss&utm_campaign=tailor-is-a-playful-tabletop-robot-that-brings-ai-voices-to-life Fri, 30 May 2025 17:00:06 +0000 https://www.yankodesign.com/?p=555520

Tailor Is A Playful Tabletop Robot That Brings AI Voices to Life


Imagine a future where your favorite AI assistant isn’t hiding in your phone or smart speaker but is sitting right beside you, nodding along and making eye contact. That’s the dream behind Tailor, the tabletop robot concept, which puts a friendly face and a little personality on the invisible voices we’ve all grown used to. No more talking into the void; Tailor makes digital conversation feel delightfully grounded.

Tailor isn’t some clunky robot with blinking lights and awkward limbs. Instead, its charm lies in a gentle, tiltable head that acts as both a screen and a face, propped up by a hinge that works like a neck. When you speak, Tailor listens, perks up, or even tilts with curiosity, bringing a sense of presence to those everyday chats with your AI. It’s a bit like having a pet that reacts to your mood, only this one’s powered by the Tail AI system.

Designer: Sseo Kimm


There’s something oddly comforting about watching Tailor’s head move and its screen-face animate in response to your words. It takes the coldness out of technology, making every interaction feel a little warmer, a little more genuine. Instead of invisible algorithms, you get a companion who is right there on your desk, ready to nod, tilt, or glance around as if it’s sharing the moment with you.

The magic is in the details: the way Tailor’s head gently pivots when it’s thinking, or how it rests in a relaxed pose when waiting for your next command. Its body is sleek, with soft edges and a neutral color scheme that helps it blend into any room, but it’s the expressive movement that catches your attention. The hinge lets Tailor look attentive or bashful, depending on the mood of the exchange, and its digital face keeps things simple and inviting.

The Tail AI system that powers Tailor is designed to roam across your digital life, but this robot concept gives it a real seat at the table, literally. It’s easy to imagine Tailor quietly keeping you company, responding with subtle gestures whether you’re asking for the weather or searching for a lost file. The physicality of the robot bridges the gap between abstract AI and the tangible world you can actually touch.

There’s a playful side to Tailor, too. Watching it react to your voice with a tilt or a nod feels like a secret handshake between you and your gadget. It turns routine interactions into moments of connection, making even the simplest tasks feel kind of special. The robot’s approachable design keeps things light, never veering into uncanny territory, and always seeming ready to listen.

Even though Tailor only lives as a concept for now, it hints at a future where artificial intelligence isn’t just a voice in the air. It becomes something you can look at, talk to, and maybe even feel a little attached to. For anyone who’s ever wished their AI helper could be a bit more like a friend, this design is a peek into a friendlier, more tactile tomorrow.

The post Tailor Is A Playful Tabletop Robot That Brings AI Voices to Life first appeared on Yanko Design.

PIEK Is An AI-Powered Guitar Tutor Concept That Makes Learning Fun for Beginners https://www.yankodesign.com/2025/05/29/piek-is-an-ai-powered-guitar-tutor-concept-that-makes-learning-fun-for-beginners/?utm_source=rss&utm_medium=rss&utm_campaign=piek-is-an-ai-powered-guitar-tutor-concept-that-makes-learning-fun-for-beginners Thu, 29 May 2025 16:20:24 +0000 https://www.yankodesign.com/?p=555400

PIEK Is An AI-Powered Guitar Tutor Concept That Makes Learning Fun for Beginners


Learning guitar has always had a certain mystique, the kind that draws in people eager to strum their favorite tunes or just jam along with friends. There’s an undeniable thrill in picking up a guitar for the first time, imagining all the songs you’ll soon be able to play. Yet, as any beginner quickly discovers, getting started isn’t always as simple as it seems, and the obstacles can pile up before you even learn your first chord.

It’s easy to think that buying a guitar is the only real hurdle, but the reality is a little more complicated. The cost of amplifiers, cables, and other must-have accessories can catch new players off guard. And even once you’ve assembled all your gear, the price of private lessons or music school can feel out of reach, especially if you’re just hoping for a casual hobby. Many turn to YouTube tutorials as a budget-friendly option, but these videos only go so far.

Designer: Haneul Kang

Forming good habits early on is crucial when learning guitar, but beginners practicing on their own often pick up bad techniques without realizing it. Things like awkward wrist angles, clumsy picking, or crooked finger placement creep in and become hard to reverse over time. The problem with online videos is that they can’t watch you play or tell you when you’re making a mistake. You’re left guessing whether your posture is correct or your rhythm is steady.

That’s where PIEK, a clever AI-powered guitar tutor concept, comes into play. Shaped like a familiar guitar pick, PIEK clips onto your guitar headstock and uses a camera and smart sensors to analyze your hand movements. It’s not just watching; it’s learning with you, offering instant feedback so you can catch and fix bad habits before they stick. Practicing feels less like guesswork and more like having a patient guide by your side.

PIEK SOLO is the compact version of the device, designed for quick, mobile use. It attaches right to the headstock, tracking your picking hand with a camera and sensors that notice everything from your picking strength to your finger patterns. The connected app beams feedback straight to your phone, so you know whether your technique is on point or needs a little nudge. Its lightweight build and simple design mean you can clip it on and start learning right away, wherever inspiration strikes.

The PIEK DUO takes things a step further for those who want even deeper insights. Instead of clipping onto the guitar, DUO is a freestanding device that sits in front of the player, using a wide-angle camera to track both hands. Its LED strip pulses like a metronome that adjusts to your tempo, giving you a visual cue to help lock in your groove. With its AI-driven rhythm detection, DUO knows exactly when you’re falling behind or rushing ahead, and will let you know.

For the most thorough experience, you can use SOLO and DUO together, watching both hands in real-time. This dual setup means you get feedback on your fretting and strumming, making it easier than ever to smooth out those early rough patches. For anyone who’s struggled to learn guitar on their own, or who found traditional lessons out of reach, PIEK offers a fun, affordable, and smart new way to practice.

The post PIEK Is An AI-Powered Guitar Tutor Concept That Makes Learning Fun for Beginners first appeared on Yanko Design.

Everything We Know About Jony Ive’s $6.5 Billion Dollar ‘Secret’ AI Gadget https://www.yankodesign.com/2025/05/27/everything-we-know-about-jony-ives-6-5-billion-dollar-secret-ai-gadget/?utm_source=rss&utm_medium=rss&utm_campaign=everything-we-know-about-jony-ives-6-5-billion-dollar-secret-ai-gadget Wed, 28 May 2025 00:30:24 +0000 https://www.yankodesign.com/?p=554662

Everything We Know About Jony Ive’s $6.5 Billion Dollar ‘Secret’ AI Gadget


Let’s be honest, the tech world hasn’t felt this electric since Steve Jobs pulled the original iPhone from his pocket. Sure, we felt a few sparks fly in 2024 when Rabbit and Humane announced their AI devices, but that died down pretty quickly post-launch. However, when news broke that OpenAI had acquired Jony Ive’s mysterious startup “io” for a staggering $6.5 billion, the speculation machine kicked into overdrive. What exactly are the legendary Apple designer and ChatGPT’s creators cooking up together? The official announcement speaks vaguely of “a new family of products” and moving beyond traditional interfaces, but the details remain frustratingly sparse.

What we do know with certainty is limited. OpenAI and Ive’s company, io, are building something that’s reportedly “screen-free,” pocket-sized, and designed to bring AI into the physical world in a way that feels natural and ambient. The founding team includes Apple veterans Scott Cannon, Evans Hankey, and Tang Tan, essentially the hardware dream team that shaped the devices in your pocket and on your wrist. Beyond these confirmed facts lies a vast expanse of rumors, educated guesses, and wishful thinking. So let’s dive into what this device might be, with the appropriate grains of salt at the ready.

The Design: Ive’s Aesthetic Philosophy Reimagined

AI Representation

If there’s one thing we can reasonably predict, it’s that whatever emerges from Ive’s studio will be obsessively considered down to the micron. His design language at Apple prioritized simplicity, honest materials, and what he often called “inevitable” solutions, designs that feel so right they couldn’t possibly be any other way. A screen-free AI device presents a fascinating challenge: how do you create something tactile and intuitive without the crutch of a display?

I suspect we’ll see a device that feels substantial yet effortless in the hand, perhaps with a unibody construction milled from a single piece of material. Aluminum seems likely given Ive’s history, though ceramic would offer an interesting premium alternative with its warm, almost organic feel. The absence of a screen suggests the device might rely on subtle surface textures, perhaps with areas that respond to touch or pressure. Ive’s obsession with reducing visual complexity, eliminating unnecessary seams, screws, and buttons, will likely reach its logical conclusion here, resulting in something that looks deceptively simple but contains remarkable complexity.

Color choices will probably be restrained and sophisticated, think the elegant neutrals of Apple’s “Pro” lineup rather than the playful hues of consumer devices. I’d wager on a palette of silver, space gray, and possibly a deep blue, with surface finishes that resist fingerprints and wear gracefully over time. The environmental considerations that have increasingly influenced Ive’s work will likely play a role too, with recycled materials and sustainable manufacturing processes featured prominently in the eventual marketing narrative.

Technical Possibilities: AI in Your Pocket

AI Representation

The technical challenge of creating a screen-free AI device is immense. Without a display, every interaction becomes an exercise in invisible design: the device must understand context, anticipate needs, and communicate through means other than visual interfaces. This suggests an array of sophisticated sensors and input methods working in concert.

Voice recognition seems an obvious inclusion, likely using multiple microphones for spatial awareness and noise cancellation. Haptic feedback, perhaps using Apple-like Taptic Engine technology or something even more advanced, could provide subtle physical responses to commands or notifications. The device might incorporate motion sensors to detect when it’s being handled or carried, automatically waking from low-power states. Some reports hint at environmental awareness capabilities, suggesting cameras or LiDAR might be included.

The processing requirements for a standalone AI device are substantial. Running large language models locally requires significant computational power and memory, all while maintaining reasonable battery life. This points to custom silicon, possibly developed with TSMC or another major foundry, optimized specifically for AI workloads. Whether OpenAI has the hardware expertise to develop such chips in-house remains an open question, though their Microsoft partnership might provide access to specialized hardware expertise. Battery technology will be crucial; a device that needs charging multiple times daily would severely limit its utility as an always-available AI companion.

The User Experience: Beyond Screens and Apps

AI Representation

The most intriguing aspect of this rumored device is how we’ll actually use it. Without a screen, traditional app paradigms become irrelevant. Instead, we might see a return to conversational computing, speaking naturally to an assistant that understands context and remembers previous interactions. The “ambient computing” vision that’s been promised for years might finally materialize.

I imagine a device that feels less like a gadget and more like a presence, something that fades into the background until needed, then responds with uncanny intelligence. Perhaps it will use subtle audio cues or haptic patterns to indicate different states or notifications. The lack of a visual interface could actually enhance privacy; without a screen displaying potentially sensitive information, the device becomes more discreet in public settings. Of course, this also raises questions about accessibility: how will deaf users interact with a primarily audio-based device?

Integration with existing ecosystems will be crucial for adoption. Will it work seamlessly with your iPhone, Android device, or Windows PC? Can it control your smart home devices or integrate with your calendar and messaging apps? The answers remain unknown, but OpenAI’s increasingly broad partnerships suggest they understand the importance of playing nicely with others. The real magic might come from its predictive capabilities, anticipating your needs based on time, location, and past behavior, then proactively offering assistance without explicit commands.

Market Positioning and Price Speculation

AI Representation

How much would you pay for an AI companion designed by the man behind the iPhone? The pricing question looms large over this project. Premium design and cutting-edge AI technology don’t come cheap, suggesting this will be positioned as a high-end device. Looking at adjacent markets provides some clues: Humane’s AI Pin launched at $699, while Rabbit’s R1 came in at $199, though both offer significantly less sophisticated experiences than what we might expect from OpenAI and Ive.

My educated guess places the device somewhere between $499 and $799, depending on capabilities and materials. A lower entry point might be possible if OpenAI adopts a subscription model for premium AI features, subsidizing hardware costs through recurring revenue. The target market initially appears to be tech enthusiasts and professionals, people willing to pay a premium for cutting-edge technology and design, before potentially expanding to broader consumer segments as costs decrease and capabilities improve.

As for timing, the supply chain whispers and regulatory tea leaves suggest we’re looking at late 2025 at the earliest, with full availability more likely in 2026. Hardware development cycles are notoriously unpredictable, especially for first-generation products from newly formed teams. The $6.5 billion acquisition price suggests OpenAI sees enormous potential in this collaboration, but also creates substantial pressure to deliver something truly revolutionary.

The Competitive Landscape: A New Category Emerges

AI Representation

The AI hardware space is still in its infancy. Early entrants like Humane have struggled with fundamental questions about utility and user experience. What makes a dedicated AI device compelling when smartphones already offer capable assistants? The answer likely lies in specialized capabilities that phones can’t match, perhaps always-on contextual awareness without battery drain, or privacy guarantees impossible on multipurpose devices.

OpenAI and Ive are betting they can define a new product category, much as Apple did with the iPhone and iPad. Success will require not just technical excellence but a compelling narrative about why this device deserves space in your life. The competition won’t stand still either: Apple’s rumored AI initiatives, Google’s hardware ambitions, and countless startups will ensure a crowded marketplace by the time this device launches.

The most fascinating aspect might be how this hardware play fits into OpenAI’s broader strategy. Does physical embodiment make AI more trustworthy, useful, or personable? Will dedicated devices provide capabilities impossible through software alone? These philosophical questions underpin the entire project, suggesting that Ive and Altman share a vision that extends beyond quarterly profits to how humans and AI will coexist in the coming decades.

What This Could Mean for the Future of Computing

AI Representation

If successful, this collaboration could fundamentally reshape our relationship with technology. The screen addiction that defines contemporary digital life might give way to something more ambient and less demanding of our visual attention. AI could become a constant companion rather than an app we occasionally summon, always listening, learning, and assisting without requiring explicit commands for every action.

The privacy implications are both promising and concerning. A device designed from the ground up for AI interaction could incorporate sophisticated on-device processing, keeping sensitive data local rather than sending everything to the cloud. Conversely, an always-listening companion raises obvious surveillance concerns, requiring thoughtful design and transparent policies to earn user trust.

For Jony Ive, this represents a chance to define the post-smartphone era, potentially creating his third revolutionary product category after the iPod and iPhone. For OpenAI, hardware provides a direct channel to users, bypassing platform gatekeepers like Apple and Google. The stakes couldn’t be higher for both parties, and for us, the potential users of whatever emerges from this collaboration.

Waiting for the Next Big Thing

AI Representation

The partnership between OpenAI and Jony Ive represents the most intriguing collision of AI and design talent we’ve seen yet. While concrete details remain scarce, the ambition is clear: to create a new kind of computing device that brings artificial intelligence into our physical world in a way that feels natural, beautiful, and essential.

Will they succeed? History suggests caution; creating new product categories is extraordinarily difficult, and first-generation devices often disappoint (raise your hands if you own a bricked Humane AI Pin or Rabbit R1). Yet the combination of OpenAI’s technical prowess and Ive’s design sensibility offers reason for optimism. Whatever emerges will undoubtedly be thoughtfully designed and technically impressive. Whether it finds a permanent place in our lives depends on whether it solves real problems in ways our existing devices cannot.

For now, we wait, analyzing every patent filing, supply chain rumor, and cryptic statement for clues about what’s coming. The anticipation itself speaks volumes about the state of consumer technology: in an era of incremental smartphone updates and me-too products, we’re hungry for something genuinely new. Jony Ive and Sam Altman just might deliver it.

The post Everything We Know About Jony Ive’s $6.5 Billion Dollar ‘Secret’ AI Gadget first appeared on Yanko Design.
