STMicroelectronics accelerates global adoption and market growth of Physical AI with NVIDIA


  • STMicroelectronics to integrate ST sensors, microcontrollers, and motor-control solutions with the NVIDIA robotics ecosystem to help developers design, train, and deploy humanoid robots and other physical AI systems with higher efficiency, reliability, and scalability
  • First steps: the integration of a Leopard Imaging stereo depth camera, enabled by ST, with the NVIDIA Holoscan Sensor Bridge, and the addition of a high-fidelity sim-to-real model of an ST IMU to the NVIDIA Isaac Sim ecosystem


STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, today announced the acceleration of global development and adoption of physical AI systems, including humanoid, industrial, service, and healthcare robots. ST is integrating its comprehensive portfolio for advanced robotics into the reference set of components compatible with the NVIDIA Holoscan Sensor Bridge (HSB). In parallel, high-fidelity NVIDIA Isaac Sim models of ST components are being integrated into both companies’ robotics ecosystems to support faster, more accurate sim-to-real research and development. The first deliverables available to developers today include the integration of Leopard Imaging’s depth camera, enabled by ST, with the NVIDIA HSB and the addition of a high-fidelity model of an ST IMU to the NVIDIA Isaac Sim ecosystem.

“ST is well engaged within the robotics community, providing robust support and a well-established ecosystem,” said Rino Peruzzi, Executive Vice President, Sales & Marketing, Americas & Global Key Account Organization at STMicroelectronics. “Our collaboration with NVIDIA aims to unleash the next wave of cutting-edge robotics innovation with developer and customer experience streamlined at every step, from the inception of AI algorithms to the seamless integration of sensors and actuators. This will accelerate the evolution of sophisticated AI-driven physical platforms.”

“Accelerating the development of next-generation autonomous systems requires high-fidelity simulation and seamless hardware integration to bridge the gap between virtual training and real-world deployment,” said Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA. “The integration of STMicroelectronics’ sensor and actuator technologies with NVIDIA Isaac Sim, Holoscan Sensor Bridge and Jetson platforms provides developers with a unified foundation to build, simulate and deploy physical AI at scale.”


Simplifying sensor and actuator integration with the Holoscan Sensor Bridge

With the NVIDIA HSB, developers can unify, standardize, synchronize, and streamline data acquisition and logging from multiple ST sensors and actuators, a critical foundation for building high-fidelity NVIDIA Isaac models, accelerating learning, and minimizing the sim-to-real gap.

The goal is to simplify the process of connecting ST sensors and actuators to NVIDIA Jetson platforms through pre-integrated solutions combining STM32 MCUs, advanced sensors (including IMUs, imagers, and ToF devices), and motor-control solutions, particularly for humanoid robot designs. Leopard Imaging’s stereo depth camera for robots is a prime example: built on ST imaging, depth, and motion-sensing technologies, it is expected to support a broad wave of designs across physical AI OEMs, academic research groups, and the industrial robotics community.


Reducing cost and complexity with high-fidelity modeling for NVIDIA Isaac Sim

Advanced robotics developers face high development costs, in addition to modeling challenges. High‑fidelity simulations with extensive randomization demand substantial GPU and CPU resources and large datasets. Selecting which parameters to randomize, and over what ranges, requires deep domain expertise. Poor choices can result in unrealistic scenarios or inefficient training. Finally, excessive variability can confuse models, slow convergence, and degrade real‑world performance when randomization no longer reflects plausible conditions.
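The trade-off described above can be made concrete with a minimal sketch. In the illustrative Python snippet below, hypothetical IMU noise parameters are drawn from bounded ranges once per training episode, the basic mechanism behind domain randomization in sim-to-real pipelines; the parameter names and ranges are assumptions for illustration, not figures from any ST datasheet or NVIDIA API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter ranges; too wide and training may not converge,
# too narrow and the policy overfits to one simulated sensor.
PARAM_RANGES = {
    "gyro_noise_density": (0.002, 0.006),    # rad/s/sqrt(Hz)
    "gyro_bias": (-0.01, 0.01),              # rad/s
    "accel_noise_density": (0.0006, 0.002),  # m/s^2/sqrt(Hz)
}

def sample_sensor_params():
    """Draw one randomized sensor configuration for a training episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def corrupt_gyro(true_rate, params, dt=0.005):
    """Apply the sampled bias and white noise to an ideal gyro signal."""
    sigma = params["gyro_noise_density"] / np.sqrt(dt)
    noise = rng.normal(0.0, sigma, size=true_rate.shape)
    return true_rate + params["gyro_bias"] + noise

params = sample_sensor_params()
noisy = corrupt_gyro(np.zeros(1000), params)
```

Hardware-calibrated models of the kind described in this announcement would replace the guessed ranges above with values measured on real devices, which is precisely what narrows the sim-to-real gap.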

ST and NVIDIA’s objective is to provide accurate, hardware-calibrated models for the comprehensive portfolio of ST components matching the requirements of advanced robotics. Following the availability of the first IMU model, ST is working to bring developers models of ToF sensors, actuators, and other ICs derived from benchmark data collected on real ST hardware, using ST tools to capture accurate parameters and realistic behavior, resulting in models optimized for the NVIDIA Isaac Sim ecosystem. The two companies are also collaborating to integrate the NVIDIA HSB into ST’s toolchain.

As a result, ST and NVIDIA envision that more accurate models will significantly improve robot learning. With models that closely mirror real-world device behavior, robots can learn from simulations that better reflect actual conditions, shortening training cycles and lowering the cost of building and refining humanoid robotics applications.

More information on NVIDIA Holoscan Sensor Bridge (HSB) is accessible here.

More information on ST solutions to accelerate physical AI development with NVIDIA is accessible here.


STMicroelectronics and Leopard Imaging accelerate robotics vision with NVIDIA Jetson-ready multi-sensor module

  • Multimodal module combining 2D imaging, 3D depth sensing, and human-like motion perception
  • NVIDIA Holoscan Sensor Bridge ensuring multi-gigabit plug-and-play connectivity with Jetson platforms
  • Fully supported by NVIDIA Isaac open robot development platform

STMicroelectronics and Leopard Imaging have introduced an all-in-one multimodal vision module for humanoid and other advanced robotics systems. Combining ST imaging, 3D scene-mapping, and motion sensing with the NVIDIA Holoscan Sensor Bridge technology, the module integrates natively with NVIDIA Jetson and the NVIDIA Isaac open robot development platform, simplifying and accelerating vision-system design within the size, weight, and power constraints of humanoid robots.

“Humanoid robotics is moving beyond research projects and demonstrations to deliver powerful new machines for a wide range of roles in manufacturing and automotive factories, logistics and warehousing, as well as retail and customer service,” said Marco Angelici, Vice-President of Marketing and Application for Analog Power MEMS and Sensors, at STMicroelectronics. “Our collaboration with Leopard Imaging brings market-leading ST sensors and actuators, seamlessly integrated into the NVIDIA robotics ecosystem, to accelerate the deployment of physical AI applications with human-like awareness.”

“Access to ST sensors and actuators directly within the ecosystem has allowed us to standardize and streamline data acquisition and logging for humanoid robot vision across the HSB interface,” said Bill Pu, CEO of Leopard Imaging. “Robot builders can use our multi-sensing vision module with Isaac tools to accelerate learning and quickly bridge the ‘sim-to-real’ gap.”

Powered by the NVIDIA Holoscan Sensor Bridge, the new module integrates seamlessly with NVIDIA Jetson over Ethernet for real-time sensor data ingestion and with the NVIDIA Isaac open robot development platform, which offers open AI models, simulation frameworks, and libraries for developers. The new module includes a build system and application programming interfaces (APIs), artificial intelligence (AI) algorithms curated for mobile robots, sample applications, domain randomization, and a simulation environment containing sensor models.

ST continues to integrate its sensors, drivers, actuators, controllers, and development tools into the NVIDIA robotics ecosystem as a key NVIDIA robotics and edge AI partner, including high-fidelity models and proof-of-concept modules.


The Leopard Imaging vision module incorporates:

For vision-based sensing, the ST VB1940 automotive-grade RGB-IR 5.1-megapixel image sensor with combined rolling-shutter and global-shutter modes. ST has also released a mass-market and industrial version, V**943, part of the ST BrightSense product family, available in monochrome or RGB-IR, as a bare die or packaged sensor.

For motion sensing, the LSM6DSV16X 6-axis inertial measurement unit (IMU), which embeds the ST machine-learning core (MLC) for AI at the edge, sensor-fusion low-power (SFLP) processing, and Qvar electrostatic sensing for user-interface detection.

For 3D depth sensing, the VL53L9CX dToF all-in-one LiDAR module, part of the ST FlightSense product family, provides 3D depth sensing with accurate ranging up to 9 meters. With a resolution of 54 x 42 zones (nearly 2,300 zones) combined with a wide 55° x 42° FoV providing 1° angular resolution, short- and long-distance measurements and small-object detection are achievable at up to 100 fps.
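The quoted depth-sensor figures are internally consistent, which a quick back-of-the-envelope check confirms: a 54 x 42 zone grid spread across a 55° x 42° field of view yields roughly one degree of angular resolution per zone.

```python
# Sanity-check of the VL53L9CX figures quoted above.
zones_h, zones_v = 54, 42
fov_h, fov_v = 55.0, 42.0   # field of view in degrees

total_zones = zones_h * zones_v   # 2268, i.e. "nearly 2,300" zones
ang_res_h = fov_h / zones_h       # ~1.02 degrees per zone horizontally
ang_res_v = fov_v / zones_v       # 1.0 degree per zone vertically
```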


About STMicroelectronics

At ST, we are 48,000 creators and makers of semiconductor technologies mastering the semiconductor supply chain with state-of-the-art manufacturing facilities. An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of cloud-connected autonomous things. We are on track to be carbon neutral in all direct and indirect emissions (scopes 1 and 2), product transportation, business travel, and employee commuting emissions (our scope 3 focus), and to achieve our 100% renewable electricity sourcing goal by the end of 2027. Further information can be found at www.st.com


About Leopard Imaging Inc.
Headquartered in Silicon Valley and founded in 2008, Leopard Imaging is a global leader in AI vision innovation, advancing computational imaging performance across autonomous machines, smart drones, AI-enabled IoT, robotics, automation, and medical technologies. Additional information is available at www.leopardimaging.com