Indoor service robot for deep learning, combining a cluster-computing AI unit (for recognition and judgment) with an MCU-based control unit.
AI unit consists of a computer cluster of 1–4 industrial-grade NVIDIA edge computing modules, connected via a Gigabit Ethernet switch.
Runs in a ROS2 environment for high-performance deep learning model development.
AI unit includes interfaces like Gigabit Ethernet, 802.11ac Wi-Fi, Bluetooth, USB 3.0, USB OTG, CAN, I2C, SPI, and GPIO to support unmanned vehicle operations.
Control unit includes an STM32 (Arm Cortex-M4) processor, motor driver, ultrasonic sensor, proximity sensor, ZigBee 3.0, and a CAN transceiver for remote operation.
Equipped with a camera and 360-degree LiDAR for autonomous driving, allowing the robot to assess its surroundings.
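As an illustration of how a 360-degree LiDAR scan can be used to assess surroundings, here is a minimal, hedged sketch (not the product's actual software) that finds the nearest obstacle in a forward-facing sector of a scan. The scan format, sector width, and range limit are assumptions for the example.

```python
import math

def nearest_obstacle(scan, sector_deg=60.0, max_range=8.0):
    """Return (angle, distance) of the closest valid return within a
    forward-facing sector of a 360-degree scan, or None if the sector
    is clear. `scan` is assumed to be (angle_rad, distance_m) pairs,
    with angle 0 pointing straight ahead."""
    half = math.radians(sector_deg) / 2.0
    best = None
    for angle, dist in scan:
        if not (0.0 < dist <= max_range):
            continue  # drop out-of-range or invalid returns
        # Normalize the angle to [-pi, pi] so "forward" is around 0.
        a = math.atan2(math.sin(angle), math.cos(angle))
        if abs(a) <= half and (best is None or dist < best[1]):
            best = (a, dist)
    return best
```

A planner could call this each scan cycle and slow or stop the robot when the returned distance drops below a safety threshold.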
Features an 11.6-inch 1080p touch display for GUI-based service robot interface.
High-performance digital microphone and speaker for voice control and spoken reporting of the robot's condition.
Ultrasonic and PSD sensors detect obstacles and measure driving distance; a DC motor with encoder provides movement.
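The distance measurements above follow from simple physics: an ultrasonic sensor times a round-trip echo, and a wheel encoder counts fractions of a revolution. A hedged sketch of both conversions (the tick count and wheel diameter are example values, not the product's specifications):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def ultrasonic_distance_m(echo_time_s):
    """Convert a round-trip echo time to a one-way distance in meters.
    Divide by 2 because the pulse travels to the obstacle and back."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

def encoder_distance_m(ticks, ticks_per_rev, wheel_diameter_m):
    """Convert encoder ticks to distance traveled: revolutions times
    the wheel circumference (pi * diameter)."""
    return (ticks / ticks_per_rev) * math.pi * wheel_diameter_m
```

For example, a 10 ms echo corresponds to about 1.7 m, and one full revolution of a 65 mm wheel moves the robot about 0.2 m.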
Wi-Fi, Bluetooth, and ZigBee V3.0 enable remote robot control and service robot interaction.
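Remote control over links like ZigBee typically means exchanging small framed command packets. The following is a hypothetical sketch of such a frame (start byte, command, two signed wheel speeds, XOR checksum); the command codes and layout are invented for illustration and do not describe the product's actual protocol.

```python
import struct

# Hypothetical command codes for this illustrative protocol.
CMD_DRIVE = 0x01
CMD_STOP = 0x02

def encode_command(cmd, left_speed, right_speed):
    """Build a 5-byte frame: start byte 0xAA, command byte, two signed
    speed bytes (-100..100), and an XOR checksum over the payload."""
    payload = struct.pack("<Bbb", cmd, left_speed, right_speed)
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([0xAA]) + payload + bytes([checksum])

def decode_command(frame):
    """Validate the start byte and checksum, then unpack the payload.
    Returns (cmd, left_speed, right_speed); raises ValueError on a bad frame."""
    if len(frame) != 5 or frame[0] != 0xAA:
        raise ValueError("bad frame")
    payload, checksum = frame[1:4], frame[4]
    xor = 0
    for b in payload:
        xor ^= b
    if xor != checksum:
        raise ValueError("checksum mismatch")
    return struct.unpack("<Bbb", payload)
```

The checksum lets the receiving MCU reject frames corrupted in transit, which matters on low-bandwidth wireless links.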
Large-capacity battery and an efficient charging system extend operating time.
Includes Ubuntu Linux optimized for ROS2 and CUDA-based deep learning.
Supports Visual Studio Code as an integrated development environment for application development.
Provides learning modules and training for deep learning-based service robot operation.