Brilliant Labs is announcing the launch of Frame, which the company touts as the world's first glasses with an integrated multimodal AI assistant. If your definition of smart glasses means overlaying visuals on your surroundings and seeing text float in view as you wander about, then Frame, not the dozens of screen-mirroring glasses on the market, is the one you've been waiting for.
Frame uses multimodal, generative AI agents to let you navigate and interact with the real world, much like the Humane AI Pin and Rabbit R1. Ask "Noa," the always-on AI assistant, questions about what's in front of you, how many calories you're about to consume, or what's written on that foreign signage, and it will identify the best AI model to respond: GPT-4V for vision-based queries, Stable Diffusion for image generation, or Perplexity AI for search and navigation.
Bobak Tavangar, CEO of Brilliant Labs, tells me that the Perplexity integration is arguably Frame's most notable AI partnership, with search capabilities that rival Google's and deliver fast, reliable results. "On a more technical level, we've tried a lot of stuff, and there's just no one as fast. Speed matters when you're in that moment and only have a few seconds to know about something before moving to the next thing," he explains.