ALIGMO

Active Learning for Intelligent Grasping of Multiple Objects

Motivation

Figure: Robotic cell for evaluating the developed AI methods (industrial robot arm at an assembly platform with robot hand and material tray; image: IFL)

In robotics-based intralogistics, there remains a substantial need for grasping solutions that operate reliably under varying boundary conditions—such as changing sensor setups, different end effectors, and dynamic object portfolios. Many established systems are tightly coupled to specific hardware and environmental parameters and therefore require extensive reconfiguration and additional data collection whenever the product range or process environment changes. The ALIGMO project addresses this challenge by developing a hardware-agnostic AI software solution for grasping unknown objects that can be adapted to new domains with minimal additional effort.

The project is implemented in close cooperation between the Institute for Materials Handling and Logistics Systems (IFL) at the Karlsruhe Institute of Technology (KIT) and voraus robotik GmbH. The two partners are jointly setting up a robotic cell, in which a robot is integrated on the basis of the vorauscore, as a practice-oriented development and validation environment. This demonstrator enables iterative evaluation of the developed AI methods under realistic conditions and serves as the interface between algorithmic research (IFL) and system-level integration (voraus robotik GmbH).

Objectives

The objective of ALIGMO is to realize a scalable, hardware-agnostic AI software solution for object detection and grasp-point estimation on unknown objects, which can be efficiently adapted to new items, environments, and sensor systems through active learning. Implementation proceeds in clearly defined functional stages:

  1. Single-Shot–Single-Pick: Development of high-performance object detection and grasp-point estimation from a single image as the basis for robust single-pick processes.
  2. Active-Learning Extension: Integration of active-learning mechanisms for data-efficient model updates and domain adaptation during operation, minimizing the need for manual annotation (see the first sketch after this list).
  3. Single-Shot–Multi-Pick: Extension towards planning multiple grasp operations from a single image acquisition, including robust handling of occlusions and the derivation of a grasp sequence that is consistent with the process logic (see the second sketch after this list).
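
The following two sketches illustrate, in simplified form, the kind of mechanisms addressed in stages 2 and 3. They are illustrative assumptions for this description, not excerpts from the ALIGMO software: the detector, its confidence scores, the annotation step, the object names, and the occlusion rule are all placeholders.

The first sketch shows a minimal active-learning round with uncertainty sampling: images for which the current grasp model is least confident are queried for annotation and moved into the labeled pool for the next model update.

```python
"""Minimal active-learning round with uncertainty sampling (illustrative only)."""
import random
from typing import List, Tuple


def predict_confidence(image_id: int) -> float:
    """Stand-in for a grasp detector returning its top grasp confidence."""
    random.seed(image_id)
    return random.random()


def select_for_annotation(unlabeled: List[int], budget: int) -> List[int]:
    """Query the images the current model is least certain about."""
    scored = sorted((predict_confidence(i), i) for i in unlabeled)
    return [i for _, i in scored[:budget]]


def active_learning_round(
    unlabeled: List[int], labeled: List[int], budget: int
) -> Tuple[List[int], List[int]]:
    """One round: select uncertain samples and move them to the labeled pool."""
    queried = select_for_annotation(unlabeled, budget)
    labeled = labeled + queried  # annotated here, then used for fine-tuning
    unlabeled = [i for i in unlabeled if i not in queried]
    return unlabeled, labeled


if __name__ == "__main__":
    unlabeled_pool = list(range(100))  # images captured during operation
    labeled_pool: List[int] = []
    for _ in range(3):  # three annotation rounds
        unlabeled_pool, labeled_pool = active_learning_round(
            unlabeled_pool, labeled_pool, budget=5
        )
    print(f"labeled: {len(labeled_pool)}, unlabeled: {len(unlabeled_pool)}")
```

The second sketch illustrates one way to derive a grasp sequence under occlusions: if the detection stage reports that one object partially covers another, the occluding object is scheduled first, and a topological sort over these pairwise relations yields an admissible pick order.

```python
"""Grasp sequencing from pairwise occlusion relations (illustrative only)."""
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# (occluder, occluded): the first object partially covers the second.
occlusions = [("box_A", "box_C"), ("box_B", "box_C"), ("box_C", "box_D")]
objects = {"box_A", "box_B", "box_C", "box_D", "box_E"}

# Dependency graph: an occluded object depends on all of its occluders.
graph = {obj: set() for obj in objects}
for occluder, occluded in occlusions:
    graph[occluded].add(occluder)

# Any topological order is an admissible grasp sequence.
sequence = list(TopologicalSorter(graph).static_order())
print(sequence)  # e.g. ['box_A', 'box_B', 'box_E', 'box_C', 'box_D']
```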

The jointly established robotic cell (robot + vorauscore) serves as an end-to-end demonstration and integration platform to experimentally validate the developed methods and to systematically demonstrate their transferability to industrial applications.