https://techxplore.com/news/2020-06-intel-google-uc-berekely-ai.html
UC Berkeley professors have previously used YouTube videos as a guide for robots to learn various motions such as jumping or dancing, while Google has trained robots to understand depth and motion. The team applied that knowledge to their latest project, Motion2Vec, in which videos of actual surgical procedures are used for instruction. In a recently released research paper, researchers outline how they used YouTube videos to train a two-armed da Vinci robot to insert needles and perform sutures on a cloth device.
The research team relied on Siamese networks, a deep-learning architecture in which two or more identical networks share the same weights. The setup is well suited to comparing inputs and assessing the relationship between them, and such networks have been used in the past for face detection, signature verification and language detection.
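The core idea of a Siamese network can be sketched in a few lines. The example below is a minimal illustration, not the Motion2Vec model itself: a single shared linear embedding stands in for the two identical branches, and Euclidean distance between the branch outputs stands in for the learned similarity metric. All names (`embed`, `distance`, the toy weight matrix `W`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights: the defining property of a Siamese network is that
# both branches apply the *same* embedding function to their inputs.
W = rng.standard_normal((8, 4))  # toy linear embedding: 8-dim input -> 4-dim

def embed(x):
    """Shared embedding branch (one linear layer + tanh, for illustration)."""
    return np.tanh(x @ W)

def distance(a, b):
    """Euclidean distance between the two branch embeddings."""
    return float(np.linalg.norm(embed(a) - embed(b)))

# A slightly perturbed copy of an input should land close in embedding
# space, while an unrelated input should land farther away.
anchor = rng.standard_normal(8)
near = anchor + 0.01 * rng.standard_normal(8)
far = rng.standard_normal(8)

print(distance(anchor, near), distance(anchor, far))
```

In a real system the shared branch would be a trained deep network and the distance would be shaped by a contrastive or triplet loss, so that similar video segments cluster together, which is what makes the comparison-based segmentation described above possible.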
"Any human can watch almost any one of those videos and make sense of it, but a robot currently cannot—they just see it as a stream of pixels. So the goal of this work is to try and make sense of those pixels. That is to look at the video, analyze it, and… be able to segment the videos into meaningful sequences."
Machine learning has contributed much to biotechnology in recent years. The ability of AI to rapidly process huge volumes of data has yielded progress in detecting lung cancer and stroke risk from CAT scans, calculating the risk of heart disease and cardiac arrest from EKG and MRI imagery, classifying skin lesions from photos, and detecting signs of diabetic distress in eye images.
This is a field I briefly researched as an undergraduate. It's remarkable that it has come this far.
I should look up the paper for the details.