
Privacy by Design: Building Trust in India’s First Generation of AI Eyewear

In February 2026, a powerful image flashed across Indian media: Prime Minister Narendra Modi at the Bharat Mandapam, wearing a sleek pair of spectacles that were not just glasses, but a window into India’s technological future. The device was Sarvam Kaze, India’s first indigenous AI-powered smart glasses, developed by Bengaluru-based startup Sarvam AI. As the Prime Minister tested its real-time capabilities, a message was sent to the nation and the world: India is entering the age of wearable AI.

Yet, alongside the excitement, a more sobering question lingers. A device that sees what you see and hears what you hear is not just another gadget. It is a repository of deeply personal data—and a potential vector for surveillance, whether intentional or inadvertent. As Meta’s Ray-Ban glasses launch in India and homegrown alternatives prepare for market, a fundamental challenge confronts every manufacturer, platform provider, and business leader: How do we build trust into the very DNA of these devices?

The answer lies in a philosophy that must guide every stage of design, engineering, and deployment: Privacy by Design.

The Privacy Paradox of Always-On Wearables

When Meta’s Ray-Ban smart glasses went on sale in India in May 2025, they brought with them a host of privacy concerns that India’s current legal framework is ill-equipped to address.

The core issue is simple but profound. Most people can easily tell when someone is filming with a phone camera. That is not the case with smart glasses. Meta includes a small LED on the glasses that lights up when recording is active. If the LED is blocked, the system notifies the user and disables the ability to record until it is cleared. This is a responsible design choice—but it places full responsibility on the wearer.

In public spaces like cafes, metros, or places of worship, bystanders often remain completely unaware that someone is recording them. The Digital Personal Data Protection (DPDP) Act does not currently require users to inform others when recording in public spaces. It also does not mandate that the LED indicator remain visible. If users choose to ignore it, people around them have no way to know they’re being recorded.

The Act gives control only to those who actively use a service. Bystanders—people recorded incidentally in public spaces—have no direct relationship with the platform and therefore cannot exercise rights to access, correct, or erase their data. This creates a fundamental asymmetry of power that privacy-by-design principles seek to address.

The Response: India’s Emerging Governance Framework

The Indian government is not blind to these challenges. In November 2025, the Ministry of Electronics and Information Technology (MeitY) unveiled the India AI Governance Guidelines, a comprehensive framework to ensure safe, inclusive, and responsible AI adoption across sectors.

The guiding principle that defines the spirit of the framework is simple, yet profound: “Do No Harm”. The guidelines focus on creating sandboxes for innovation while ensuring risk mitigation within a flexible, adaptive system. They comprise seven guiding principles (Sutras) for ethical and responsible AI, key recommendations across six pillars of AI governance, and practical guidelines for industry, developers, and regulators.

At the India AI Impact Summit 2026, Prime Minister Modi expanded on this vision by introducing “MANAV”, a framework that blends technological advancement with human values. The acronym stands for:

  • Moral and Ethical Systems
  • Accountable Governance
  • National Sovereignty
  • Accessible and Inclusive AI
  • Valid and Legitimate Systems 

Under the first pillar, Modi emphasised that AI must be grounded in strong moral and ethical systems. Fairness, transparency, and meaningful human oversight are non-negotiable principles in AI design and deployment.

The Hardware Imperative: Building Privacy into the Silicon

For manufacturers and product designers, these principles must translate into tangible engineering decisions. Privacy cannot be an afterthought, patched on after a product is built. It must be architected into the hardware from the very first schematic.

On-Device Processing: The Sarvam Model

Sarvam AI’s approach offers a compelling blueprint. Ahead of the Sarvam Kaze debut, the company introduced Sarvam Edge, an AI model designed to run entirely offline on smartphones and other consumer hardware. This marks a decisive shift away from cloud-dependent inference.

The company’s reasoning is powerful: “Intelligence should work everywhere. Not summoned from distant servers, not gated behind connectivity, not metered by the query. Just there, immediate and local”.

Because inference happens locally, responses are near-instantaneous: there is no round-trip to a data center, no queueing behind other users, no variance based on network conditions. Critically, the user’s data never leaves the device. There is no server logging queries, no database storing conversations.

For AI-enabled glasses, this is transformative. It means real-time translation, object recognition, and environmental awareness—all processed on-device, with no data ever leaving the wearer’s possession. This is privacy by design at its most fundamental level.
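To make the privacy property concrete, here is a minimal, hypothetical sketch of what an on-device inference wrapper looks like. The class and method names are ours for illustration, not Sarvam’s actual API; the point is that every step runs in the local process, with no network call anywhere in the path.

```python
from dataclasses import dataclass

# Illustrative sketch only: names are hypothetical, not Sarvam's API.
# It demonstrates the core privacy property of on-device inference:
# the input never crosses a network boundary.

@dataclass
class LocalResult:
    text: str
    processed_on_device: bool = True  # no server was involved


class OnDeviceTranslator:
    """Runs translation entirely on local hardware."""

    def __init__(self, model_path: str):
        # On real hardware this would load quantised weights from
        # local storage; here the model is stubbed out.
        self.model_path = model_path

    def translate(self, utterance: str, target_lang: str) -> LocalResult:
        # All processing stays in this process: no server logging
        # the query, no database storing the conversation.
        translated = f"[{target_lang}] {utterance}"  # stub inference
        return LocalResult(text=translated)


result = OnDeviceTranslator("/models/edge-model.bin").translate("नमस्ते", "en")
```

The design choice to return a result object that records where processing happened makes the privacy guarantee auditable rather than implicit.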

Visual Indicators: The Meta Lesson

Meta’s approach to the recording indicator on its Ray-Ban glasses demonstrates both the potential and the limits of hardware-based privacy. The glasses include an LED that lights up when recording is active, and the system is designed to detect if that LED is covered.

This is not a trivial feature. When a car-customisation company began selling vinyl covers on TikTok that claimed to block or dim the recording indicator, users discovered that the glasses detect when the LED is covered and halt recording. The privacy feature is not just a suggestion; it is enforced at the hardware level.

Yet, as critics note, the system still places significant responsibility on the wearer. The LED indicator is small and easily missed by bystanders who may not know what to look for. This is a reminder that hardware indicators, while essential, are not sufficient on their own.
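The behaviour described above can be sketched as a simple state machine. This is an illustrative reconstruction, not Meta’s firmware; the sensor reading and method names are assumptions. The essential rule is that the camera and the indicator switch together, and an obstructed indicator refuses or halts recording.

```python
from enum import Enum, auto

# Hypothetical sketch of a hardware-enforced recording gate:
# recording is permitted only while the indicator LED is on and
# unobstructed. Names are illustrative, not any vendor's firmware.

class CameraState(Enum):
    IDLE = auto()
    RECORDING = auto()
    BLOCKED = auto()  # LED obstructed: recording refused


class RecordingGate:
    def __init__(self):
        self.state = CameraState.IDLE
        self.led_on = False

    def led_obstructed(self) -> bool:
        # Stand-in for an ambient-light sensor next to the LED.
        return getattr(self, "_obstructed", False)

    def start_recording(self) -> bool:
        if self.led_obstructed():
            self.state = CameraState.BLOCKED
            return False              # refuse to record, notify wearer
        self.led_on = True            # indicator and camera switch together
        self.state = CameraState.RECORDING
        return True

    def tick(self):
        # Re-checked continuously: covering the LED mid-recording halts it.
        if self.state is CameraState.RECORDING and self.led_obstructed():
            self.led_on = False
            self.state = CameraState.BLOCKED


gate = RecordingGate()
gate.start_recording()    # LED clear: recording starts
gate._obstructed = True   # wearer covers the LED
gate.tick()               # recording halts immediately
```

Re-checking on every tick, rather than only at start, is what turns the indicator from a suggestion into an enforced invariant.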

Trusted Execution Environments

For enterprise applications, particularly in sensitive environments like healthcare, additional hardware safeguards are necessary. Smart glasses worn in hospitals pose emerging patient privacy risks, as they could collect and share patient images and protected health information without individuals even noticing.

Hardware-based trusted execution environments (TEEs) can ensure that sensitive data, such as facial images or medical information, is processed in isolated, secure areas of the processor, inaccessible to the main operating system or any applications running on it. This creates a hardware-enforced boundary between the data and any potentially malicious software.

The Platform Responsibility: Beyond the Device

Privacy by design extends beyond the hardware to the platforms and ecosystems that surround it.

Lenskart’s Trust Framework

Lenskart, which is preparing to launch its own “B by Lenskart” smart glasses, has articulated a comprehensive approach to privacy that offers lessons for the entire industry.

The company recognises that customer trust drives customer loyalty: a single breach of that trust directly affects customers’ willingness to return and engage with the brand.

Whenever customers log in with their phone number, upload their prescription, or scan their face for fitting recommendations, they do so with the underlying belief that the brand will not hand this information to third parties and will keep it fully secured.

Lenskart’s approach includes:

  • Every interaction, online or in-store, that involves sensitive data such as prescriptions requires an OTP from the user
  • Retrieving these records also requires authentication
  • Security systems and processes are key criteria in partner selection
  • APIs have rate limiting and authentication built in
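As an illustration, the OTP and rate-limiting controls listed above might look like this in miniature. All names here are hypothetical, not Lenskart’s systems; a production deployment would deliver OTPs via an SMS service with expiry and use a shared rate-limit store.

```python
import hmac
import secrets
import time
from collections import defaultdict, deque

# Illustrative sketch of platform-level controls: OTP-gated access to
# sensitive records plus per-client sliding-window rate limiting.

class OtpGate:
    def __init__(self):
        self._pending = {}

    def issue(self, phone: str) -> str:
        otp = f"{secrets.randbelow(10**6):06d}"
        self._pending[phone] = otp  # in practice: sent via SMS, with expiry
        return otp

    def verify(self, phone: str, otp: str) -> bool:
        expected = self._pending.pop(phone, None)  # single-use
        return expected is not None and hmac.compare_digest(expected, otp)


class RateLimiter:
    """At most `limit` calls per `window` seconds, per client."""
    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.calls = defaultdict(deque)

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        q = self.calls[client]
        while q and now - q[0] > self.window:
            q.popleft()                 # drop calls outside the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True


def fetch_prescription(phone, otp, gate, limiter):
    if not limiter.allow(phone):
        return "429 Too Many Requests"
    if not gate.verify(phone, otp):
        return "401 Unauthorized"
    return "200 OK: prescription record"
```

Note the OTP is single-use and compared in constant time; the rate limiter sits in front of authentication so brute-force attempts are throttled before they are even checked.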

This is privacy by design applied at the platform level—ensuring that even if the hardware is secure, the ecosystem around it maintains the same standards.

Developer Ecosystems and Consent

Both Sarvam and Lenskart are opening their platforms to developers, enabling third-party applications to be built for their smart glasses. This creates enormous innovation potential, but also introduces new privacy vectors.

A platform that allows third-party apps must also provide granular, user-understandable consent mechanisms. Users should know exactly what data each app can access, for how long, and for what purpose. This is not just good practice; it is increasingly a legal requirement under frameworks like the DPDP Act.
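One way to model such granular consent is a ledger in which every grant names the app, the data category, the purpose, and an expiry, and an access check must match all of them. The sketch below is illustrative only, not a DPDP-compliance implementation, and all identifiers are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical consent ledger in the spirit of granular, purpose-bound
# consent: access requires a matching, unexpired grant.

@dataclass(frozen=True)
class ConsentGrant:
    app_id: str
    data_category: str   # e.g. "camera_frames", "location"
    purpose: str         # e.g. "real-time translation"
    expires_at: datetime


class ConsentLedger:
    def __init__(self):
        self._grants: list[ConsentGrant] = []

    def grant(self, g: ConsentGrant):
        self._grants.append(g)

    def revoke(self, app_id: str, data_category: str):
        # Revocation removes every grant for that app/category pair.
        self._grants = [g for g in self._grants
                        if not (g.app_id == app_id
                                and g.data_category == data_category)]

    def is_allowed(self, app_id, data_category, purpose, now=None) -> bool:
        now = now or datetime.utcnow()
        return any(g.app_id == app_id
                   and g.data_category == data_category
                   and g.purpose == purpose
                   and g.expires_at > now
                   for g in self._grants)


ledger = ConsentLedger()
ledger.grant(ConsentGrant("translate_app", "camera_frames",
                          "real-time translation",
                          datetime.utcnow() + timedelta(days=30)))
```

Binding each grant to a stated purpose means an app authorised for translation cannot silently reuse camera frames for, say, advertising: the access check fails on the purpose field.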

The Enterprise Challenge: Managing Unmanaged Devices

For CIOs and CISOs, the proliferation of smart glasses presents a new category of risk. These are unmanaged devices that could be brought into the workplace by employees or visitors, without the organisation even knowing this technology is present.

Unlike cell phones, which are now a familiar part of the workplace landscape, smart glasses are inconspicuous: they look like ordinary eyewear, and people may not realise they are being photographed or recorded.

This creates potential for both malicious insider risks and inadvertent breaches. A hospital visitor wearing smart glasses could inadvertently record patients in the background of a video. An employee could use them to exfiltrate sensitive documents or trade secrets.

For enterprises, addressing this requires a combination of policy and technology. Clear acceptable-use policies must be established and communicated. Technical measures—such as detecting and blocking unauthorised recording devices—may be necessary in highly sensitive environments.

The Path Forward: A Framework for Trust

As India’s first generation of AI eyewear prepares to enter the market, the companies that succeed will be those that treat privacy not as a compliance burden, but as a competitive advantage. Based on the emerging best practices and regulatory frameworks, a comprehensive approach to privacy by design for smart glasses should include:

At the Hardware Level:

  • On-device processing for sensitive AI tasks, minimising data transmission
  • Visible, hardware-enforced recording indicators that cannot be easily disabled
  • Secure elements and trusted execution environments for sensitive data
  • Physical camera shutters or covers for absolute privacy assurance

At the Platform Level:

  • Transparent data policies that clearly explain what data is collected and why
  • Granular consent mechanisms for users and bystanders, where feasible
  • Strong authentication for access to stored data
  • Regular security audits and penetration testing

At the Policy Level:

  • Alignment with India’s AI Governance Guidelines and MANAV framework
  • Clear usage guidelines that users are educated about, not just presented with
  • Mechanisms for bystanders to raise concerns about inappropriate recording
  • Industry-wide standards for privacy indicators and data handling

At the Ecosystem Level:

  • Vetting processes for third-party developers and applications
  • API-level controls on data access and usage
  • User education about privacy features and how to use them
  • Channels for reporting privacy concerns and requesting data removal

Conclusion: Trust as the Ultimate Differentiator

The race to dominate India’s smart glasses market is intensifying. Global players like Meta are already here, with their Ray-Ban glasses priced from ₹39,900 onwards. Homegrown challengers like Sarvam Kaze are preparing to launch in May 2026. Lenskart is building its own ecosystem. The competition will be fierce, and the early winners will capture significant market share.

But in the long run, the ultimate differentiator will not be camera resolution, battery life, or even AI capabilities. It will be trust.

A device that sees and hears everything around it is an intimate companion. Consumers will only embrace it if they believe it respects their privacy and protects their data. Enterprises will only deploy it if they are confident it will not become a vector for data breaches.

India’s emerging regulatory framework, from the AI Governance Guidelines to the MANAV philosophy, provides a clear direction: technology must serve humanity, not the other way around. The companies that internalise this principle—that build privacy into their hardware, their platforms, and their culture—will be the ones that earn the trust of Indian consumers.

The Prime Minister’s endorsement of Sarvam Kaze was a powerful statement of confidence in India’s ability to build world-class AI hardware. Now, the industry must rise to meet that confidence by building devices that are not just intelligent but also worthy of the trust placed in them.

As Sarvam AI itself puts it: “The question is no longer whether India can train powerful models. The question is whether they can run everywhere, every day”. To that, we must add another: whether they can do so in a way that respects the privacy and dignity of every Indian.


Ready to build smart eyewear solutions that earn trust through privacy-by-design engineering?
Contact Cionlabs to discuss how our hardware design and development expertise can help you create AI wearables that are not just intelligent but inherently trustworthy.