Lyte: Former Apple Vision Leaders Fund $107M Startup to Build Robots’ “Visual Brain”


What happened

Lyte, a new robotics startup founded by former Apple engineers Alexander Shpunt, Arman Hajati, and Yuval Gerson, has raised $107 million to develop advanced perception technology that acts as a unified “visual brain” for robots. The funding round was backed by investors including Fidelity Management & Research, Atreides Management, Exor Ventures, Key1 Capital, and VentureTech Alliance. Lyte’s flagship platform, LyteVision, fuses multiple sensing modes into a single system designed to give machines real‑world perception and real‑time context interpretation.

Who is affected

This technology targets developers and operators of autonomous systems, from warehouse robots and mobile platforms to humanoids and autonomous vehicles such as robotaxis. While not a cybersecurity product, it matters to security stakeholders because perception systems are frequent attack surfaces for manipulation and sensor spoofing. As robots are deployed in critical environments (logistics, healthcare, public spaces), their ability to securely sense and interpret their surroundings becomes a safety and risk function that extends beyond traditional network security concerns.

Why CISOs should care

Robotic perception systems will increasingly intersect with enterprise infrastructure: sharing data with control systems, cloud services, and AI models. Vulnerabilities in sensor fusion, firmware, or interpretation layers could be exploited to cause physical disruption, data integrity failures, or safety incidents. As robots advance from controlled factory floors to dynamic real‑world environments, security teams must expand their threat models to include perception integrity and robustness against adversarial inputs.
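To make "perception integrity" concrete, one common defensive idea is cross-modal consistency checking: if two independent sensing modes report the real world very differently, that disagreement itself is a signal worth alerting on. The sketch below is purely illustrative; the function name, sensor pairing, and tolerance threshold are assumptions for this example, not details of LyteVision or any real product.

```python
# Hypothetical sketch: flag disagreement between two independent
# distance readings (e.g., lidar vs. camera-derived depth) as a
# possible spoofing or sensor-integrity event. All names and
# thresholds are illustrative assumptions.

def consistency_alert(lidar_m: float, camera_m: float,
                      tolerance: float = 0.15) -> bool:
    """Return True when the two readings disagree by more than
    `tolerance`, expressed as a fraction of the larger reading."""
    larger = max(abs(lidar_m), abs(camera_m))
    if larger == 0:
        return False  # both report zero distance: consistent
    return abs(lidar_m - camera_m) / larger > tolerance

# Paired readings from a hypothetical robot; the last pair
# diverges sharply, as it might under a lidar-spoofing attack.
readings = [(10.2, 10.0), (8.1, 8.3), (2.0, 9.8)]
alerts = [consistency_alert(lidar, cam) for lidar, cam in readings]
print(alerts)  # [False, False, True]
```

Real fusion stacks use far richer statistical methods, but even a simple check like this illustrates why security teams should treat sensor disagreement as a monitorable event, not just a calibration nuisance.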

3 practical actions

  1. Inventory and assess perception systems: Identify where autonomous vision and sensor systems are used or planned in your organization. Include them in risk assessments alongside traditional IT/OT assets.
  2. Integrate security testing for sensor fusion: Work with robotics vendors to ensure perception components are tested for adversarial manipulation, spoofing, and malicious input. This should be part of procurement and ongoing assurance.
  3. Update incident response playbooks: Expand IR plans to cover robotic and automation incidents, coordinating with safety, facilities, and physical security teams to respond to compromised perception or control failures.
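The three actions above can be tracked together if perception systems are recorded as first-class assets. The sketch below is one minimal way to do that; the field names and example assets are invented for illustration and do not reflect any standard schema.

```python
# Illustrative sketch: record autonomous-perception assets in the
# same inventory as IT/OT assets so they enter risk assessments.
# Field names and example entries are assumptions, not a standard.

from dataclasses import dataclass


@dataclass
class PerceptionAsset:
    name: str
    location: str
    sensor_modes: list[str]
    vendor_tested_adversarial: bool = False  # action 2: assurance done?
    in_ir_playbook: bool = False             # action 3: IR coverage?


inventory = [
    PerceptionAsset("warehouse-amr-01", "DC-East",
                    ["lidar", "stereo camera"], True, True),
    PerceptionAsset("dock-arm-03", "DC-East",
                    ["camera"], False, False),
]

# Surface assets that still need assurance or IR-playbook work.
gaps = [a.name for a in inventory
        if not (a.vendor_tested_adversarial and a.in_ir_playbook)]
print(gaps)  # ['dock-arm-03']
```

Even a simple registry like this turns the three actions from one-off advice into a queryable checklist that procurement, security, and facilities teams can share.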