
AI & Robotics

A Robot That Sutures Wounds

https://techxplore.com/news/2020-06-intel-google-uc-berekely-ai.html

UC Berkeley professors have previously used YouTube videos as a guide for robots to learn various motions such as jumping or dancing, while Google has trained robots to understand depth and motion. The team applied that knowledge to their latest project, Motion2Vec, in which videos of actual surgical procedures are used for instruction. In a recently released research paper, researchers outline how they used YouTube videos to train a two-armed da Vinci robot to insert needles and perform sutures on a cloth device.

The team relied on Siamese networks, a deep-learning setup in which two or more identical networks share the same weights. The architecture is well suited to comparing inputs and assessing the relationships between them, and has been used in the past for face detection, signature verification and language detection.
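
To make the weight-sharing idea concrete, here is a minimal sketch of a Siamese pair trained with a contrastive loss. PyTorch is my own choice of framework (the article names none), and every dimension and name below is made up for illustration, not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, in_dim=128, embed_dim=32):
        super().__init__()
        # A single encoder; reusing it on both inputs is exactly
        # what "sharing the same weights" means.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x1, x2):
        # Both inputs pass through the same encoder.
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(z1, z2, same, margin=1.0):
    # same = 1.0 for matching pairs, 0.0 for non-matching pairs.
    # Matching pairs are pulled together; non-matching pairs are
    # pushed apart up to the margin.
    dist = F.pairwise_distance(z1, z2)
    return torch.mean(same * dist.pow(2) +
                      (1.0 - same) * F.relu(margin - dist).pow(2))

# Toy usage: random vectors standing in for video-frame features.
net = SiameseNet()
x1, x2 = torch.randn(8, 128), torch.randn(8, 128)
same = torch.randint(0, 2, (8,)).float()
z1, z2 = net(x1, x2)
loss = contrastive_loss(z1, z2, same)
loss.backward()
print(loss.item())
```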

"Any human can watch almost any one of those videos and make sense of it, but a robot currently cannot—they just see it as a stream of pixels. So the goal of this work is to try and make sense of those pixels. That is to look at the video, analyze it, and… be able to segment the videos into meaningful sequences."

Machine learning has contributed much to biotechnology in recent years. The ability of AI to rapidly process huge volumes of data has yielded progress in detecting lung cancer and stroke risk from CAT scans, calculating the risk of heart disease and cardiac arrest from EKG and MRI imagery, classifying skin lesions from photos, and detecting signs of diabetic retinopathy in eye images.

https://youtu.be/GTP7mQ-_wno

This is a field I briefly worked on as an undergraduate. It's amazing that it has come this far.

I'll have to look up the paper for the details.

References

Motion2Vec project page: sites.google.com/view/motion2vec

Paper (PDF): www.ajaytanwani.com/docs/Tanwa … n2Vec_arxiv_2020.pdf