Department News
[June Lab Interview] Interactive & Network Robotics Laboratory Student Interview
Author
Anonymous
Date
2023-09-14
Views
350
Q1. Please briefly introduce your lab. (Respondent: Researcher Ji-seok Kang)
Our laboratory mainly conducts robotics research based on dynamics and mathematics. The detailed fields currently being researched are largely 1. Aerial manipulation, 2. Autonomous robots, 3. Robot manipulation, and 4. Haptics and VR, and across these areas we work on a variety of topics, including control and path planning of robot systems, optimal state estimation, dynamics simulation, machine learning, and haptics/teleoperation. The professor’s preferred research direction can be summarized as “implementation of a sophisticated system based on rigorous mathematical/mechanical theory.” It can be said to be a laboratory that takes on challenging research, as it handles both theory and hardware.
Q2. Can you tell us about the four research areas you mentioned earlier and the research topics you are working on?
[Aerial manipulation] (Respondent: Researcher Jeong-seop Lee)
Recently, research on using drone systems for aerial work has been actively conducted. These studies typically attach a robotic arm to a drone or develop a new drone system, but such approaches struggle to provide sufficient flight time and working force for aerial tasks because of technical constraints such as battery capacity.
To overcome these limitations, our laboratory developed the LASDRA system, a robotic arm based on multiple distributed rotors. The system is constructed by connecting omni-directionally actuated drone units with joints, thereby realizing a giant robot arm for aerial work with a long reach (more than 7 m) and a high degree of freedom (more than 15 degrees of freedom). Because the system is connected from a mothership through the joint-link chain, the problems of battery capacity, flight time, and sensing are resolved, and a modular design and control scheme in which each link can operate independently becomes possible. In addition to the theoretical development of control/estimation/optimization for such aerial work robots, hardware and system development is in progress.
Specifically, we conducted a study on distributed impedance-based motion control that takes into account the rotor-based actuation and modular characteristics of the LASDRA system. Unlike existing robot arms, the operation of each link (drone unit) is controlled independently, and control errors between adjacent links are compliantly absorbed through the rotors' back-drivability. This motion control method enables a modular control design, but it also has the limitations that the working force is concentrated on the first link and that the force at the end is difficult to estimate. To address these issues, we investigated a joint locking system that improves the working force of the system by locking the joints between links at the desired posture during a task, as well as centralized and distributed momentum-based working-force estimation techniques using F/T sensors.
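(For readers curious what a per-link impedance law of this general flavor looks like, here is a minimal Python sketch; the gains, state layout, and function names are illustrative assumptions, not the lab's actual LASDRA controller.)

    import numpy as np

    def link_impedance_wrench(x_err, v_err, K, D):
        # Toy Cartesian impedance law for one link: the commanded wrench pulls the
        # link toward its desired pose and damps the velocity error, so pose errors
        # relative to neighboring links are absorbed compliantly (exploiting rotor
        # back-drivability) rather than rejected stiffly.
        return K @ x_err + D @ v_err

    # Illustrative 6-DOF gains (3 translational + 3 rotational), one set per link.
    K = np.diag([40.0] * 3 + [5.0] * 3)   # stiffness
    D = np.diag([8.0] * 3 + [1.0] * 3)    # damping

    x_err = np.array([0.10, 0.00, -0.05, 0.00, 0.02, 0.00])  # desired minus current pose
    v_err = np.zeros(6)                                       # desired minus current velocity
    print(link_impedance_wrench(x_err, v_err, K, D))          # commanded wrench for this link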
[Autonomous robots] (Respondent: Researcher Ji-seok Kang)
This field focuses on developing autonomous navigation and flight technologies for mobile robots, an important topic traditionally addressed in robotics. Achieving autonomous navigation and flight requires sensor fusion, path/trajectory planning, and dynamic motion control technologies. We are conducting research to develop these essential technologies individually and to integrate them into complete systems.
One representative line of research involves the use of monocular cameras (i.e., a single camera) for visual-inertial localization and mapping, which offers advantages in cost and size. The most challenging aspect of this research is the scale initialization problem: in a monocular vision system, the robot must be moved arbitrarily at the beginning to determine the scale of the visual estimates. This means the robot initially moves without knowing its current pose or the surrounding terrain, which can lead to collisions. It is therefore essential to perform initialization with minimal movement while maintaining accuracy and stability immediately afterwards. In our lab, we have developed a method that initializes the scale very quickly (within 0.5 seconds) and reliably (100% success rate in 1000 random trials).
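(As a rough illustration of why scale becomes observable once the robot moves, here is a toy least-squares alignment between up-to-scale visual displacements and metric IMU-derived displacements; this is a simplified sketch for intuition only, not the lab's fast-initialization method.)

    import numpy as np

    def estimate_scale(p_vis, p_imu):
        # Closed-form least-squares scale s minimizing ||s * p_vis - p_imu||^2,
        # where p_vis are up-to-scale displacements from monocular vision and
        # p_imu are metric displacements from the IMU (gravity and bias effects
        # are ignored in this toy version).
        return np.sum(p_vis * p_imu) / np.sum(p_vis * p_vis)

    # Toy data: true scale 2.0, with noise on the IMU side.
    rng = np.random.default_rng(0)
    p_vis = rng.normal(size=(50, 3))
    p_imu = 2.0 * p_vis + 0.01 * rng.normal(size=(50, 3))
    print(estimate_scale(p_vis, p_imu))   # approximately 2.0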
Additionally, we have addressed the problem of inaccurate localization caused by the sensing disturbances that foot-ground impacts induce in legged robots. To solve this, we mimic an animal's neck: the sensor module containing the camera and IMU is connected to the robot's body through spring-damper elements to form a dynamic vibration absorber, and we developed a mechanism that adjusts the absorber's absorption (notch) frequency to match the walking pattern.
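(The tuning idea can be illustrated with a one-line calculation under an idealized single-mass assumption; the module mass and gait frequency below are invented for illustration, not the actual mechanism parameters.)

    import numpy as np

    def stiffness_for_notch(mass_kg, gait_freq_hz):
        # Spring stiffness that places an ideal single-mass absorber's natural
        # frequency f = sqrt(k/m) / (2*pi) at the walking frequency, so that
        # foot-impact vibration near that frequency is attenuated.
        omega = 2.0 * np.pi * gait_freq_hz
        return mass_kg * omega ** 2

    # Example: a 0.3 kg camera/IMU module and a 3 Hz trotting frequency.
    print(stiffness_for_notch(0.3, 3.0))   # about 107 N/m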
Furthermore, we are researching cooperative localization and mapping for multi-robot systems, motion control and path planning in extreme environments for autonomous delivery vehicles, and the development of autonomous underwater robots for collecting seafloor garbage, among other topics.
[Robot manipulation] (Respondent: Researcher Bu-geon Son)
This field involves research on simulation techniques that reproduce complex and diverse physical behaviors, and on control based on them. Existing commercial simulators have limitations in accuracy and speed when simulating various task scenarios. In our laboratory, we are working on simulation techniques that are both accurate and fast, using mathematical theory and artificial intelligence techniques.
Specifically, we have developed node-wise/diagonalization techniques and unit-system-based simulations for representing deformable objects such as ropes or clothing. These algorithms allow efficient parallelization, enabling fast and accurate simulation results.
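(To convey why node-wise formulations parallelize well, here is a toy mass-spring rope in which each node's update depends only on its immediate neighbors; this sketch is not the lab's diagonalization algorithm, and all constants are invented for illustration.)

    import numpy as np

    def rope_step(x, v, dt, k=500.0, rest=0.05, mass=0.01, damping=0.2):
        # One semi-implicit Euler step of a toy mass-spring rope. Each node's
        # force depends only on its two neighbors, so all node updates are
        # independent and could run in parallel -- the property that node-wise
        # formulations exploit.
        f = np.zeros_like(x)
        d = x[1:] - x[:-1]
        length = np.linalg.norm(d, axis=1, keepdims=True)
        spring = k * (length - rest) * d / np.maximum(length, 1e-9)
        f[:-1] += spring
        f[1:] -= spring
        f += mass * np.array([0.0, 0.0, -9.81])   # gravity
        f -= damping * v
        v_new = v + dt * f / mass
        x_new = x + dt * v_new
        x_new[0], v_new[0] = x[0], 0.0            # pin the first node
        return x_new, v_new

    # A 20-node rope hanging from a fixed endpoint.
    n = 20
    x = np.stack([np.linspace(0.0, 1.0, n), np.zeros(n), np.zeros(n)], axis=1)
    v = np.zeros_like(x)
    for _ in range(1000):
        x, v = rope_step(x, v, dt=1e-3)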
Additionally, for complex tasks like screw tightening, where contact points are numerous and constantly changing, we have developed a data-driven learning approach to select representative contact points and contact directions. Based on this research, we have also developed attitude estimation algorithms for screws and learning-based control algorithms for screw tightening.
Finally, for manipulating thin objects such as dish drying racks, we have developed an algorithm that reconstructs these thin objects as 3D line segments from images. This representation is efficient in time and memory and is robust to self-occlusion, so objects can be described with sufficient information. We have also developed optimization-based dish pose estimation and path planning based on a differentiable simulator for placing dishes in drying racks.
[Haptics and VR] (Respondent: Researcher Jin-wook Heo)
In step with the growth of the metaverse, our lab focuses on research related to ‘hands’ within the rapidly growing haptics and VR fields. To enable manipulation and applications that use high-degree-of-freedom hand movements within a small space in a virtual environment, we are researching accurate and fast hand motion tracking and manipulation interfaces, various hand haptic devices, and interactive simulation.
In 2021, our lab published the VIST (Vision-Inertial Sensor Fusion-based Tracking) technology in the journal Science Robotics. VIST complementarily fuses inertial sensors and camera sensors, allowing robust and highly accurate hand motion tracking in a wide range of situations. Because VIST uses IMU sensors without relying on a compass, and because of the complementary fusion of vision and IMU, it is more robust and accurate than other hand motion tracking technologies (vision-based, IMU-and-compass-based, soft-sensor-based).
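(The complementary idea can be illustrated with a one-dimensional toy filter: high-rate gyro integration corrected at a lower rate by a drift-free vision measurement. This is a generic textbook construction for intuition, not the published VIST algorithm.)

    def fuse_step(angle_est, gyro_rate, dt, vision_angle=None, alpha=0.05):
        # Toy 1-D complementary fusion: integrate the gyro at high rate, and
        # whenever a vision measurement arrives, nudge the estimate toward it.
        # Vision removes gyro drift; the gyro bridges the gaps between slower
        # (and occasionally occluded) vision updates.
        angle_est += gyro_rate * dt            # high-rate IMU propagation
        if vision_angle is not None:           # low-rate vision correction
            angle_est += alpha * (vision_angle - angle_est)
        return angle_est

    # 1 kHz gyro with a constant bias, 30 Hz vision of a fixed 0.5 rad angle.
    est, true_angle = 0.0, 0.5
    for k in range(5000):
        gyro = 0.0 + 0.01                      # true rate 0 plus a 0.01 rad/s bias
        vis = true_angle if k % 33 == 0 else None
        est = fuse_step(est, gyro, 1e-3, vis)
    print(est)                                 # stays close to 0.5 despite the bias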
Additionally, our lab is researching hand haptic feedback devices. We are designing the hardware of the CHD (Compact Haptic Device), a wearable fingertip device that delivers delicate three-degree-of-freedom tactile feedback to the fingertips, and researching high-performance control techniques for it. The CHD can accurately render the direction and strength of contact force, which is essential for precise object manipulation, enabling more precise work in virtual environments. We are also researching texture rendering through vibration haptics, as well as thermal haptic feedback.
Using the haptic gloves developed in this way, which combine the hand motion tracking and haptic feedback technologies above, we are pursuing various research directions such as virtual reality game and virtual concert scenarios, swarm drone operation interfaces, and interactive simulation of object manipulation in virtual reality. In addition, VIST is being generalized and applied to feedback control of wearable soft robots.
Q3. What was the most difficult part of conducting this research and how did you overcome it? (Respondent: Researcher Ji-seok Kang)
Firstly, it was challenging to understand clearly why tasks that are easy for humans are difficult to automate with robots. These issues often call for insight into the problem rather than a clean mathematical foundation, and going through multiple iterations to validate such insights seems to be the key to overcoming them.
Another difficult aspect is when doubts arise about whether the research being conducted will be useful for actual robot tasks or if it makes a significant contribution. Such doubts can persist from the start of the research to the point of publication. However, I believe that good research often begins with these doubts and can lead to even better research.
Q4. Do you have any special devices or equipment in your lab? If not, is there any equipment you often use outside? (Respondent: Researcher Ji-seok Kang)
The lab is equipped with high-spec server PCs capable of running computationally demanding algorithms, as well as a number of robotic arms and grippers for verifying robot manipulation algorithms. For mobile robots (LASDRA, drones, etc.), we design and manufacture the platforms to suit each research purpose, and we have motion capture equipment to accurately measure the robots' positions. For VR research we have various VR/AR/MR HMDs such as the Meta Quest 2, VIVE Pro, and Microsoft HoloLens, an Omega device that provides force (haptic) feedback, and the VIVE tracking system used for body tracking. The haptic/VR equipment developed in-house includes the VIST hand tracker, based on our robust and accurate hand motion tracking technology, and devices that provide vibration, fingertip-contact, and thermal haptic feedback to the hand.
(From left to right: Researcher Ji-seok Kang, Researcher Jeong-seop Lee, Researcher Bu-geon Son, Researcher Jin-wook Heo)