Happy graduation! Last updated July 5, 2021: papers +104, open-source projects +9
Visual_SLAM_Related_Research
@Author: Wu Yanmin (吴艳敏)
@E-mail :wuyanminmax[AT]gmail.com
@github :wuxiaolang
Preface
The papers, code, and other resources collected here mainly relate to the directions I studied during my master's: visual SLAM and augmented reality. From 2019 to 2021 I focused on VO, object-level SLAM, and semantic data association, with some attention to sensor fusion and dense mapping. The collection therefore follows my own interests and cannot cover all of visual SLAM research, so please use it with discretion. Parts of it were organized and published on Zhihu. The main contents are:
1. Open-source code
: classic, high-quality open-source projects
2. Notable researchers and labs
: teams and individuals worth following in the areas I am interested in
3. SLAM learning materials
: SLAM-related study materials, videos, datasets, WeChat accounts, and annotated code
4. Recent papers
: the latest papers in my areas of interest, updated roughly once a month. Detailed/skim-reading notes on some papers are on my blog/List.
🌚 This repository was started (privately) in March 2019, my first year of graduate school;
🌝 It was made public in March 2020, my second year, exactly one year later;
🎓 In July 2021 I graduated with my master's degree, so this repository may not be updated very frequently from now on. Best wishes for your study and work; for academic exchange, feel free to email wuyanminmax[AT]gmail.com.
Contents
The GayHub browser extension is recommended for automatically expanding the table of contents in the sidebar
1. Open-Source Code
This section was organized and published on Zhihu (March 31, 2020): https://zhuanlan.zhihu.com/p/115599978/
:smile: 1.1 Geometric SLAM
1. PTAM
Paper: Klein G, Murray D. Parallel tracking and mapping for small AR workspaces [C]//Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on. IEEE, 2007: 225-234.
Code: https://github.com/Oxford-PTAM/PTAM-GPL
Project page: http://www.robots.ox.ac.uk/~gk/PTAM/
The author's other publications: http://www.robots.ox.ac.uk/~gk/publications.html
2. S-PTAM (stereo PTAM)
3. MonoSLAM
Paper: Davison A J, Reid I D, Molton N D, et al. MonoSLAM: Real-time single camera SLAM [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 1052-1067.
Code: https://github.com/hanmekim/SceneLib2
4. ORB-SLAM2
Items 5, 6, 7, and 8 below all come from the TUM Computer Vision Group; official page: https://vision.in.tum.de/research/vslam/dso
5. DSO
6. LDSO
Xiang Gao's work adding loop closure to DSO
Paper: Gao X, Wang R, Demmel N, et al. LDSO: Direct sparse odometry with loop closure [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 2198-2204.
Code: https://github.com/tum-vision/LDSO
7. LSD-SLAM
8. DVO-SLAM
Paper: Kerl C, Sturm J, Cremers D. Dense visual SLAM for RGB-D cameras [C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2013: 2100-2106.
Code 1: https://github.com/tum-vision/dvo_slam
Code 2: https://github.com/tum-vision/dvo
Other papers:
9. SVO
10. DSM
Paper: Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping [J]. arXiv preprint arXiv:1904.06577, 2019.
Code: https://github.com/jzubizarreta/dsm ; Video
11. OpenVSLAM
12. se2lam (visual odometry for ground-vehicle pose estimation)
13. GraphSfM (graph-based parallel large-scale SfM)
14. LCSD_SLAM (loosely coupled semi-direct monocular SLAM)
15. RESLAM (edge-based SLAM)
16. scale_optimization (extending monocular DSO to stereo)
17. BAD-SLAM (direct RGB-D SLAM)
18. GSLAM (a general framework integrating ORB-SLAM2, DSO, and SVO)
19. ARM-VO (monocular VO running on ARM processors)
20. cvo-rgbd (direct RGB-D VO)
21. Map2DFusion (UAV image mosaicing with monocular SLAM)
22. CCM-SLAM (multi-robot collaborative monocular SLAM)
23. ORB-SLAM3
24. OV²SLAM (a fully online, versatile real-time SLAM)
25. ESVO (event-based stereo visual odometry)
26. VOLDOR-SLAM (real-time dense indirect SLAM)
:smile: 1.2 Semantic / Deep SLAM
1. MaskFusion
2. SemanticFusion
3. semantic_3d_mapping
4. Kimera (an open-source library for real-time metric-semantic localization and mapping)
5. NeuroSLAM (brain-inspired SLAM)
6. gradSLAM (automatically differentiable dense SLAM)
7. Semantic mapping with ORB-SLAM2 plus object detection/segmentation
8. SIVO (semantically informed feature selection)
9. FILD (incremental loop-closure detection with proximity graphs)
10. object-detection-sptam (object detection combined with stereo SLAM)
11. Map Slammer (monocular depth estimation + SLAM)
12. NOLBO (probabilistic SLAM with a variational model)
13. GCNv2_SLAM (SLAM based on a deep feature-extraction network)
14. semantic_suma (semantic LiDAR mapping)
Paper: Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM [C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537.
Code: https://github.com/PRBonn/semantic_suma/ ; Video
15. Neural-SLAM (active neural SLAM)
16. TartanVO: a generalizable learning-based VO
17. DF-VO
:smile: 1.3 Multi-Landmarks / Object SLAM
1. PL-SVO (point-line SVO)
2. stvo-pl (stereo point-line VO)
3. PL-SLAM (point-line SLAM)
4. PL-VIO
5. lld-slam (a learnable line-segment descriptor for SLAM)
There is plenty more work combining points and lines, including from groups in China
6. PlaneSLAM
7. Eigen-Factors (plane alignment with eigen-factors)
8. PlaneLoc
9. Pop-up SLAM
10. Object SLAM
11. voxblox-plusplus (object-level voxel mapping)
12. Cube SLAM
Paper: Yang S, Scherer S. CubeSLAM: Monocular 3-D object SLAM [J]. IEEE Transactions on Robotics, 2019, 35(4): 925-938.
Code: https://github.com/shichaoy/cube_slam
Yes, this is the work that drew me into the field: after reading the paper (then a preprint) in November 2018, I started studying object-level SLAM. My notes and summary on Cube SLAM: link.
There are also many interesting object-level SLAM works that are not open-source:
Ok K, Liu K, Frey K, et al. Robust object-based SLAM for high-speed autonomous navigation [C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.
Li J, Meger D, Dudek G. Semantic mapping for view-invariant relocalization [C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 7108-7115.
Nicholson L, Milford M, Sünderhauf N. QuadricSLAM: Dual quadrics from object detections as landmarks in object-oriented SLAM [J]. IEEE Robotics and Automation Letters, 2018, 4(1): 1-8.
13. VPS-SLAM (planar semantic SLAM)
14. Structure-SLAM (point-line SLAM in low-texture environments)
15. PL-VINS
:smile: 1.4 Sensor Fusion
1. msckf_vio
2. rovio
3. R-VIO
Paper: Huai Z, Huang G. Robocentric visual-inertial odometry [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326.
Code: https://github.com/rpng/R-VIO ; Video
VI_ORB_SLAM2: https://github.com/YoujieXia/VI_ORB_SLAM2
4. okvis
5. VIORB
6. VINS-mono
7. VINS-RGBD
8. Open-VINS
9. versavis (a versatile visual-inertial sensor suite)
10. CPI (closed-form preintegration for visual-inertial fusion)
11. TUM Basalt
12. Limo (LiDAR-monocular visual odometry)
Paper: Graeter J, Wilczynski A, Lauer M. Limo: Lidar-monocular visual odometry [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 7872-7879.
Code: https://github.com/johannes-graeter/limo ; Video
13. LARVIO (monocular VIO based on a multi-state constraint Kalman filter)
14. vig-init (vertical-edge-accelerated visual-inertial initialization)
15. vilib (a VIO front-end library)
16. Kimera-VIO
17. maplab (a visual-inertial mapping framework)
18. lili-om: solid-state LiDAR-inertial odometry and mapping
19. CamVox: LiDAR-assisted visual SLAM
20. SSL_SLAM: lightweight 3D localization and mapping with a solid-state LiDAR
21. r2live: tightly coupled LiDAR-inertial-visual fusion
22. GVINS: tightly coupled GNSS-visual-inertial fusion
23. LVI-SAM: LiDAR-visual-inertial mapping and localization
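Several of the systems above (msckf_vio, rovio, R-VIO, LARVIO) are filter-based: a high-rate IMU prediction is corrected by lower-rate visual updates. A one-dimensional Kalman filter shows that predict/update pattern in miniature; the state, noise values, and measurements below are all made up for illustration, and real VIO filters track full poses, velocities, and IMU biases.

```python
# Toy 1-D Kalman filter illustrating the predict/update loop of
# filter-based VIO. All numbers are invented for this sketch.

def kf_step(x, P, u, z, q=0.1, r=0.5):
    """One predict (IMU-like input u) + update (vision-like measurement z)."""
    x_pred = x + u                     # motion model: integrate the input
    P_pred = P + q                     # uncertainty grows during prediction
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # correct with the measurement residual
    P_new = (1 - K) * P_pred           # uncertainty shrinks after the update
    return x_new, P_new

x, P = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
    x, P = kf_step(x, P, u, z)
print(x, P)  # final state estimate and its variance
```

The estimate ends up between the pure odometry (3.0) and the last measurement (3.1), with a variance much smaller than the initial 1.0, which is exactly the behavior the full-state VIO filters rely on.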
:smile: 1.5 Dynamic SLAM
1. DynamicSemanticMapping (dynamic semantic mapping)
2. DS-SLAM (dynamic semantic SLAM)
3. Co-Fusion (real-time segmentation and tracking of multiple objects)
4. DynamicFusion
5. ReFusion (3D reconstruction in dynamic scenes using residuals)
6. DynSLAM (large-scale outdoor dense reconstruction)
7. VDO-SLAM (dynamic-object-aware SLAM)
:smile: 1.6 Mapping
1. InfiniTAM (cross-platform real-time reconstruction on CPU)
2. BundleFusion
3. KinectFusion
4. ElasticFusion
5. Kintinuous
6. ElasticReconstruction
Paper: Choi S, Zhou Q Y, Koltun V. Robust reconstruction of indoor scenes [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 5556-5565.
Code: https://github.com/qianyizh/ElasticReconstruction ; author's homepage
7. FlashFusion
8. RTAB-Map (LiDAR and visual dense reconstruction)
9. RobustPCLReconstruction (outdoor dense reconstruction)
10. plane-opt-rgbd (indoor planar reconstruction)
11. DenseSurfelMapping (dense surfel mapping)
Paper: Wang K, Gao F, Shen S. Real-time scalable dense surfel mapping [C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 6919-6925.
Code: https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping
12. surfelmeshing (mesh reconstruction)
13. DPPTAM (monocular dense reconstruction)
14. VI-MEAN (monocular visual-inertial dense reconstruction)
15. REMODE (probabilistic monocular dense reconstruction)
16. DeepFactors (real-time probabilistic monocular dense SLAM)
17. probabilistic_mapping (monocular probabilistic dense reconstruction)
18. Monocular semi-dense mapping with ORB-SLAM2
19. Voxgraph (SDF voxel mapping)
20. SegMap (3D segment-based mapping)
21. OpenREALM: a real-time UAV mapping framework
22. c-blox: scalable TSDF dense mapping
:smile: 1.7 Optimization
1. Back-end optimization libraries
GTSAM: https://github.com/borglab/gtsam ; official site
g2o: https://github.com/RainerKuemmerle/g2o
Ceres: http://ceres-solver.org/
2. ICE-BA
3. minisam (a factor-graph least-squares optimization framework)
4. SA-SHAGO (graph optimization with geometric primitives)
5. MH-iSAM2 (a SLAM optimizer)
6. MOLA (a modular optimization framework for localization and mapping)
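What GTSAM, g2o, Ceres, and the optimizers above have in common is the same core recipe: turn measurements into residuals, stack them into a least-squares problem, and solve the normal equations. The toy below does this by hand for a 1-D pose chain with odometry factors, a prior, and one loop-closure factor; everything here (values, unit weights, the dense solver) is simplified for illustration, while the real libraries optimize on manifolds with sparse solvers.

```python
# Minimal sketch of factor-graph least squares: build rows of (A, b) from
# relative measurements and solve the normal equations (A^T A) x = A^T b.

def solve_normal_equations(A, b):
    """Solve (A^T A) x = A^T b by Gaussian elimination (tiny dense systems)."""
    m, n = len(A), len(A[0])
    H = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    g = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(H[r][col]))
        H[col], H[piv] = H[piv], H[col]
        g[col], g[piv] = g[piv], g[col]
        for r in range(col + 1, n):
            f = H[r][col] / H[col][col]
            for c in range(col, n):
                H[r][c] -= f * H[col][c]
            g[r] -= f * g[col]
    x = [0.0] * n                              # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (g[r] - sum(H[r][c] * x[c] for c in range(r + 1, n))) / H[r][r]
    return x

# Four 1-D poses; factors are (i, j, measured x_j - x_i) plus a prior x_0 = 0.
odometry = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9)]
loop_closure = (0, 3, 3.05)                    # measurement closing the chain
A, b = [[1.0, 0.0, 0.0, 0.0]], [0.0]           # prior row: x_0 = 0
for i, j, z in odometry + [loop_closure]:
    row = [0.0] * 4
    row[i], row[j] = -1.0, 1.0                 # residual: (x_j - x_i) - z
    A.append(row)
    b.append(z)

x = solve_normal_equations(A, b)
print(x)  # optimized poses: the loop closure redistributes the odometry drift
```

Note how the optimized poses differ slightly from raw odometry integration: the loop-closure factor pulls the whole chain toward consistency, which is the essence of what the libraries listed above do at scale.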
2. Notable Researchers and Labs
This section was organized and published on Zhihu (April 19, 2020): https://zhuanlan.zhihu.com/p/130530891
1. Robotics Institute, Carnegie Mellon University (USA)
2. Contextual Robotics Institute, University of California San Diego (USA)
Research areas: multi-modal environment understanding, semantic navigation, autonomous information acquisition
Lab homepage: https://existentialrobotics.org/index.html
Publications: https://existentialrobotics.org/pages/publications.html
👦 Nikolay Atanasov: homepage, Google Scholar
Slides for his robot state estimation and perception course: https://natanaso.github.io/ece276a2019/schedule.html
📜 Classic semantic SLAM paper: Bowman S L, Atanasov N, Daniilidis K, et al. Probabilistic data association for semantic SLAM [C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 1722-1729.
📜 Localization and mapping with instance-specific mesh models: Feng Q, Meng Y, Shan M, et al. Localization and mapping using instance-specific mesh models [C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4985-4991.
📜 Event-camera VIO: Zihao Zhu A, Atanasov N, Daniilidis K. Event-based visual inertial odometry [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5391-5399.
3. Robot Perception and Navigation Group, University of Delaware (USA)
Research areas: SLAM, VINS, semantic localization and mapping, etc.
Lab homepage: https://sites.udel.edu/robot/
Publications: https://sites.udel.edu/robot/publications/
Github: https://github.com/rpng?page=2
📜 Geneva P, Eckenhoff K, Lee W, et al. OpenVINS: A research platform for visual-inertial estimation [C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019. (Code: https://github.com/rpng/open_vins )
📜 Huai Z, Huang G. Robocentric visual-inertial odometry [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 6319-6326. (Code: https://github.com/rpng/R-VIO )
📜 Zuo X, Geneva P, Yang Y, et al. Visual-inertial localization with prior LiDAR map constraints [J]. IEEE Robotics and Automation Letters, 2019, 4(4): 3394-3401.
📜 Zuo X, Ye W, Yang Y, et al. Multimodal localization: Stereo over LiDAR map [J]. Journal of Field Robotics, 2020. (Xingxing Zuo's Google Scholar)
👦 Prof. Guoquan Huang's homepage
4. Aerospace Controls Laboratory, MIT (USA)
Research areas: pose estimation and navigation, path planning, control and decision-making, machine learning and reinforcement learning
Lab homepage: http://acl.mit.edu/
Publications: http://acl.mit.edu/publications (the lab's theses can also be found here)
👦 Prof. Jonathan P. How: homepage, Google Scholar
👦 Kasra Khosoussi (graph optimization for SLAM): Google Scholar
📜 Object-level SLAM: Mu B, Liu S Y, Paull L, et al. SLAM with objects using a nonparametric pose graph [C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4602-4609. (Code: https://github.com/BeipengMu/objectSLAM)
📜 Object-level SLAM for navigation: Ok K, Liu K, Frey K, et al. Robust object-based SLAM for high-speed autonomous navigation [C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 669-675.
📜 Graph optimization for SLAM: Khosoussi K, Giamou M, Sukhatme G, Huang S, Dissanayake G, How J P. Reliable graphs for SLAM [J]. International Journal of Robotics Research (IJRR), 2019.
5. SPARK Lab, MIT (USA)
Research areas: environment perception for mobile robots
Lab homepage: http://web.mit.edu/sparklab/
👦 Prof. Luca Carlone: homepage, Google Scholar
📜 Classic SLAM survey: Cadena C, Carlone L, Carrillo H, et al. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age [J]. IEEE Transactions on Robotics, 2016, 32(6): 1309-1332.
📜 On-manifold preintegration for VIO: Forster C, Carlone L, Dellaert F, et al. On-manifold preintegration for real-time visual-inertial odometry [J]. IEEE Transactions on Robotics, 2016, 33(1): 1-21.
📜 Open-source semantic SLAM: Rosinol A, Abate M, Chang Y, et al. Kimera: an open-source library for real-time metric-semantic localization and mapping [J]. arXiv preprint arXiv:1910.02490, 2019. (Code: https://github.com/MIT-SPARK/Kimera )
6. Marine Robotics Group, MIT (USA)
Research areas: navigation and mapping for underwater and land mobile robots
Lab homepage: https://marinerobotics.mit.edu/ (part of MIT CSAIL)
👦 Prof. John Leonard: Google Scholar
Publications: https://marinerobotics.mit.edu/biblio
📜 Object-based place recognition: Finman R, Paull L, Leonard J J. Toward object-based place recognition in dense RGB-D maps [C]//ICRA Workshop on Visual Place Recognition in Changing Environments, Seattle, WA. 2015.
📜 Extending KinectFusion: Whelan T, Kaess M, Fallon M, et al. Kintinuous: Spatially extended KinectFusion [J]. 2012.
📜 Probabilistic data association for semantic SLAM: Doherty K, Fourie D, Leonard J. Multimodal semantic SLAM with probabilistic data association [C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 2419-2425.
7. Multiple Autonomous Robotic Systems Laboratory, University of Minnesota (USA)
8. Vijay Kumar Lab, University of Pennsylvania (USA)
9. Srikumar Ramalingam (School of Computing, University of Utah, USA)
Research areas: 3D reconstruction, semantic segmentation, visual SLAM, image localization, deep neural networks
👦 Srikumar Ramalingam: homepage, Google Scholar
📜 Point-plane SLAM: Taguchi Y, Jian Y D, Ramalingam S, et al. Point-plane SLAM for hand-held 3D sensors [C]//2013 IEEE International Conference on Robotics and Automation. IEEE, 2013: 5182-5189.
📜 Point-line localization: Ramalingam S, Bouaziz S, Sturm P. Pose estimation using both points and lines for geo-localization [C]//2011 IEEE International Conference on Robotics and Automation. IEEE, 2011: 4716-4723. (Video)
📜 2D-3D localization: Ataer-Cansizoglu E, Taguchi Y, Ramalingam S. Pinpoint SLAM: A hybrid of 2D and 3D simultaneous localization and mapping for RGB-D sensors [C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 1300-1307. (Video)
10. Frank Dellaert (Institute for Robotics and Intelligent Machines, Georgia Tech, USA)
Research areas: SLAM, spatio-temporal image reconstruction
👦 Homepage, Google Scholar
📜 Factor graphs: Dellaert F. Factor graphs and GTSAM: A hands-on introduction [R]. Georgia Institute of Technology, 2012. (GTSAM code: http://borg.cc.gatech.edu/ )
📜 Distributed multi-robot SLAM: Cunningham A, Wurm K M, Burgard W, et al. Fully distributed scalable smoothing and mapping with robust multi-robot data association [C]//2012 IEEE International Conference on Robotics and Automation. IEEE, 2012: 1093-1100.
📜 Choudhary S, Trevor A J B, Christensen H I, et al. SLAM with object discovery, modeling and mapping [C]//2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014: 1018-1025.
11. Patricio Vela (Intelligent Vision and Automation Laboratory, Georgia Tech, USA)
12. Robotics and Embodied AI Lab, Université de Montréal (Canada)
13. IntRoLab (Intelligent, Interactive, Integrated, Interdisciplinary Robot Lab), Université de Sherbrooke (Canada)
14. Robotics and Perception Group, University of Zurich (Switzerland)
Research areas: environment perception and navigation for mobile robots and UAVs, VISLAM, event cameras
Lab homepage: http://rpg.ifi.uzh.ch/index.html
Publications: http://rpg.ifi.uzh.ch/publications.html
Github: https://github.com/uzh-rpg
📜 Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry [C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 15-22.
📜 VO/VIO trajectory evaluation toolbox rpg_trajectory_evaluation: https://github.com/uzh-rpg/rpg_trajectory_evaluation
📜 Event camera project page: http://rpg.ifi.uzh.ch/research_dvs.html
👦 People: Davide Scaramuzza, Zichao Zhang
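As a rough illustration of what a trajectory-evaluation tool such as rpg_trajectory_evaluation computes, the sketch below implements the absolute trajectory error (ATE RMSE) with translation-only alignment; the actual toolbox also estimates rotation and scale (Umeyama alignment) and reports several error statistics. The trajectories here are tiny made-up examples.

```python
# Toy ATE RMSE: root-mean-square position error after aligning the two
# trajectories' centroids (translation-only alignment, for illustration).
import math

def ate_rmse(gt, est):
    """RMSE of 3-D position error after centroid alignment."""
    n = len(gt)
    cg = [sum(p[i] for p in gt) / n for i in range(3)]   # ground-truth centroid
    ce = [sum(p[i] for p in est) / n for i in range(3)]  # estimate centroid
    sq = 0.0
    for g, e in zip(gt, est):
        d = [(g[i] - cg[i]) - (e[i] - ce[i]) for i in range(3)]
        sq += d[0] ** 2 + d[1] ** 2 + d[2] ** 2
    return math.sqrt(sq / n)

gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.1, 0.0, 0.0), (1.1, 0.1, 0.0), (2.1, -0.1, 0.0)]  # shifted + noisy
print(round(ate_rmse(gt, est), 4))
```

Because the constant 0.1 m offset is removed by the alignment, only the per-frame noise contributes to the reported error, which is why evaluation tools always align before computing ATE.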
15. Computer Vision and Geometry Group, ETH Zurich (Switzerland)
16. Dyson Robotics Lab, Imperial College London (UK)
Research areas: robotic vision for scene and object understanding, robotic manipulation
Lab homepage: https://www.imperial.ac.uk/dyson-robotics-lab/
Publications: https://www.imperial.ac.uk/dyson-robotics-lab/publications/
Representative work: MonoSLAM, CodeSLAM, ElasticFusion, KinectFusion
📜 ElasticFusion: Whelan T, Leutenegger S, Salas-Moreno R, et al. ElasticFusion: Dense SLAM without a pose graph [C]. Robotics: Science and Systems, 2015. (Code: https://github.com/mp3guy/ElasticFusion )
📜 SemanticFusion: McCormac J, Handa A, Davison A, et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks [C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4628-4635. (Code: https://github.com/seaun163/semanticfusion )
📜 CodeSLAM: Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM: learning a compact, optimisable representation for dense visual SLAM [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2560-2568.
👦 Andrew Davison: Google Scholar
17. Information Engineering, University of Oxford (UK)
Research areas: SLAM, object tracking, structure from motion, scene augmentation, mobile robot motion planning, navigation and mapping, and more
Homepage: http://www.robots.ox.ac.uk/
Active Vision Laboratory: http://www.robots.ox.ac.uk/ActiveVision/
Oxford Robotics Institute: https://ori.ox.ac.uk/
Publications:
Active Vision Laboratory: http://www.robots.ox.ac.uk/ActiveVision/Publications/index.html
Robotics Institute: https://ori.ox.ac.uk/publications/papers/
Representative work:
👦 People (Google Scholar): David Murray, Maurice Fallon
Some DPhil theses can be found here: https://ora.ox.ac.uk/
18. Computer Vision Group, Technical University of Munich (Germany)
Research areas: 3D reconstruction, robot vision, deep learning, visual SLAM, etc.
Lab homepage: https://vision.in.tum.de/research/vslam
Publications: https://vision.in.tum.de/publications
Representative work: DSO, LDSO, LSD-SLAM, DVO-SLAM
📜 DSO: Engel J, Koltun V, Cremers D. Direct sparse odometry [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625. (Code: https://github.com/JakobEngel/dso )
📜 LSD-SLAM: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM [C]//European Conference on Computer Vision. Springer, Cham, 2014: 834-849. (Code: https://github.com/tum-vision/lsd_slam )
Github: https://github.com/tum-vision
👦 Prof. Daniel Cremers: homepage, Google Scholar
👦 Jakob Engel (author of LSD-SLAM and DSO): homepage, Google Scholar
19. Embodied Vision Group, Max Planck Institute for Intelligent Systems (Germany)
Research areas: autonomous environment understanding, navigation, and object manipulation for intelligent agents
Lab homepage: https://ev.is.tuebingen.mpg.de/
👦 Head Jörg Stückler (formerly at TUM): homepage, Google Scholar
📜 Publications: https://ev.is.tuebingen.mpg.de/publications
Kasyanov A, Engelmann F, Stückler J, et al. Keyframe-based visual-inertial online SLAM with relocalization [C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 6662-6669.
📜 Strecke M, Stückler J. EM-Fusion: Dynamic object-level SLAM with probabilistic data association [C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 5865-5874.
📜 Usenko V, Demmel N, Schubert D, Stückler J, Cremers D. Visual-inertial mapping with non-linear factor recovery [J]. IEEE Robotics and Automation Letters (RA-L), 2020, 5.
20. Autonomous Intelligent Systems Lab, University of Freiburg (Germany)
21. SLAM Lab, Robotics, Perception and Real Time Group, University of Zaragoza (Spain)
Research areas: visual SLAM, object SLAM, non-rigid SLAM, robotics, augmented reality
Lab homepage: http://robots.unizar.es/slamlab/
Publications: http://robots.unizar.es/slamlab/?extra=3 (the list seems out of date; check the Google Scholar profiles below for the latest papers)
👦 J. M. M. Montiel: Google Scholar
📜 Mur-Artal R, Tardós J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras [J]. IEEE Transactions on Robotics, 2017, 33(5): 1255-1262.
Gálvez-López D, Salas M, Tardós J D, et al. Real-time monocular object SLAM [J]. Robotics and Autonomous Systems, 2016, 75: 435-449.
📜 Strasdat H, Montiel J M M, Davison A J. Real-time monocular SLAM: Why filter? [C]//2010 IEEE International Conference on Robotics and Automation. IEEE, 2010: 2657-2664.
📜 Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping [J]. arXiv preprint arXiv:1904.06577, 2019.
22. Machine Perception and Intelligent Robotics Group (MAPIR), University of Malaga (Spain)
Research areas: autonomous robots, artificial olfaction, computer vision
Lab homepage: http://mapir.uma.es/mapirwebsite/index.php/topics-2.html
Publications: http://mapir.isa.uma.es/mapirwebsite/index.php/publications-menu-home.html
📜 Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: A stereo SLAM system through the combination of points and line segments [J]. IEEE Transactions on Robotics, 2019, 35(3): 734-746. (Code: https://github.com/rubengooj/pl-slam )
👦 Francisco-Angel Moreno
👦 Ruben Gomez-Ojeda (point-line SLAM)
📜 Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct monocular visual odometry by combining points and line segments [C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE, 2016: 4211-4216. (Code: https://github.com/rubengooj/pl-svo )
📜 Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments [C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016: 2521-2526. (Code: https://github.com/rubengooj/stvo-pl )
📜 Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: A stereo SLAM system through the combination of points and line segments [J]. arXiv preprint arXiv:1705.09479, 2017. (Code: https://github.com/rubengooj/pl-slam )
23. Alejo Concha (Oculus VR; previously University of Zaragoza, Spain)
24. Institute of Computer Graphics and Vision, Graz University of Technology (Austria)
Research areas: AR/VR, robot vision, machine learning, object recognition and 3D reconstruction
Institute homepage: https://www.tugraz.at/institutes/icg/home/
👦 Prof. Friedrich Fraundorfer: team page, Google Scholar
👦 Prof. Dieter Schmalstieg: team page, Google Scholar
25. Mobile Robots Laboratory, Poznań University of Technology (Poland)
26. Alexander Vakhitov (Samsung AI Center, Moscow)
27. Centre for Robotics, Queensland University of Technology (Australia)
Research areas: brain-inspired robotics, mining robotics, robotic vision
Centre homepage: https://www.qut.edu.au/research/centre-for-robotics
Open-source code: https://research.qut.edu.au/qcr/open-source-code/
👦 Niko Sünderhauf: homepage, Google Scholar
👦 Michael Milford: Google Scholar: https://scholar.google.com/citations?user=TDSmCKgAAAAJ&hl=zh-CN&oi=ao
📜 ICRA 2012: SeqSLAM: Visual route-based navigation for sunny summer days and stormy winter nights (Code: https://michaelmilford.com/seqslam/)
📜 Ball D, Heath S, Wiles J, et al. OpenRatSLAM: an open source brain-based SLAM system [J]. Autonomous Robots, 2013, 34(3): 149-176. (Code: https://openslam-org.github.io/openratslam.html )
📜 Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments [J]. Biological Cybernetics, 2019, 113(5-6): 515-545. (Code: https://github.com/cognav/NeuroSLAM )
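The core idea behind SeqSLAM, cited above, is that place recognition becomes far more robust to appearance change when whole image sequences are matched instead of single frames. The toy below scores each alignment of a short query sequence against a database route, with each image reduced to a single made-up descriptor number; the real system works on image-difference matrices with local contrast normalization and a velocity search, so this is only the skeleton of the idea.

```python
# Miniature SeqSLAM-style matching: pick the database window whose summed
# descriptor difference against the query sequence is smallest.

def seq_match(query, database, window):
    """Return the database start index whose window best matches the query."""
    best_idx, best_cost = None, float("inf")
    for start in range(len(database) - window + 1):
        cost = sum(abs(q - d) for q, d in zip(query, database[start:start + window]))
        if cost < best_cost:
            best_idx, best_cost = start, cost
    return best_idx

database = [0.9, 0.2, 0.4, 0.41, 0.62, 0.8, 0.1]   # descriptors along a route
query    = [0.42, 0.60, 0.79]                      # revisit under new conditions
print(seq_match(query, database, window=3))        # → 3
```

A single noisy frame could easily match the wrong place, but summing the cost over three consecutive frames makes the correct alignment (index 3) clearly the cheapest.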
28. Australian Centre for Robotic Vision
Research areas: robot perception, understanding, and learning (a consortium of robotics researchers from QUT, the Australian National University, the University of Adelaide, the University of Queensland, and other institutions)
Centre homepage: https://www.roboticvision.org/
People: https://www.roboticvision.org/rv_person_category/researchers/
Publications: https://www.roboticvision.org/publications/scientific-publications/
👦 Yasir Latif: homepage, Google Scholar
👦 Ian D Reid: Google Scholar: https://scholar.google.com/citations?user=ATkNLcQAAAAJ&hl=zh-CN&oi=sra
29. National Institute of Advanced Industrial Science and Technology (AIST, Japan)
Artificial Intelligence Research Center: https://www.airc.aist.go.jp/en/intro/
👦 Ken Sakurada: homepage, Google Scholar
👦 Shuji Oishi: Google Scholar
30. Pyojin Kim (Autonomous Robotics Lab, Seoul National University, Korea)
31. Aerial Robotics Group, Hong Kong University of Science and Technology
Research areas: autonomous operation of aerial robots in complex environments, including state estimation, mapping, motion planning, multi-robot coordination, and experimental platforms built from low-cost sensors and computing components.
Group homepage: http://uav.ust.hk/
Publications: http://uav.ust.hk/publications/
👦 Prof. Shaojie Shen: Google Scholar
Code: https://github.com/HKUST-Aerial-Robotics
📜 Qin T, Li P, Shen S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator [J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020. (Code: https://github.com/HKUST-Aerial-Robotics/VINS-Mono )
📜 Wang K, Gao F, Shen S. Real-time scalable dense surfel mapping [C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 6919-6925. (Code: https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping )
32. Robotics and Multi-Perception Lab (RAM-LAB), Hong Kong University of Science and Technology
33. T Stone Robotics Institute, The Chinese University of Hong Kong
Research areas: industrial, logistics, and surgical robotics; 3D imaging; machine learning
Institute homepage: http://ri.cuhk.edu.hk/
👦 Prof. Yun-Hui Liu: http://ri.cuhk.edu.hk/yhliu
👦 Haoang Li: homepage, Google Scholar
📜 Li H, Yao J, Bazin J C, et al. A monocular SLAM system leveraging structural regularity in Manhattan world [C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 2518-2525.
📜 Li H, Yao J, Lu X, et al. Combining points and lines for camera pose estimation and optimization in monocular visual odometry [C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017: 1289-1296.
📜 Vanishing point detection: Lu X, Yao J, Li H, et al. 2-line exhaustive searching for real-time vanishing point estimation in Manhattan world [C]//Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on. IEEE, 2017: 345-353. (Code: https://github.com/xiaohulugo/VanishingPointDetection )
👦 Fan Zheng: homepage, Google Scholar
34. State Key Laboratory of CAD&CG, Zhejiang University
Research areas: SfM/SLAM, 3D reconstruction, augmented reality
Lab homepage: http://www.zjucvg.net/
Github: https://github.com/zju3dv
👦 Prof. Guofeng Zhang: homepage, Google Scholar
📜 ICE-BA: Liu H, Chen M, Zhang G, et al. ICE-BA: Incremental, consistent and efficient bundle adjustment for visual-inertial SLAM [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1974-1982. (Code: https://github.com/zju3dv/EIBA )
📜 RK-SLAM: Liu H, Zhang G, Bao H. Robust keyframe-based monocular SLAM for augmented reality [C]//2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2016: 1-10. (Project page: http://www.zjucvg.net/rkslam/rkslam.html )
📜 RD-SLAM: Tan W, Liu H, Dong Z, et al. Robust monocular SLAM in dynamic environments [C]//2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2013: 209-218.
35. Danping Zou (Shanghai Jiao Tong University)
Research areas: visual SLAM, SfM, multi-source navigation, micro UAVs
👦 Homepage: http://drone.sjtu.edu.cn/dpzou/index.php , Google Scholar
📜 CoSLAM: Zou D, Tan P. CoSLAM: Collaborative visual SLAM in dynamic environments [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(2): 354-366. (Code: https://github.com/danping/CoSLAM )
📜 StructSLAM: Zhou H, Zou D, Pei L, et al. StructSLAM: Visual SLAM with building structure lines [J]. IEEE Transactions on Vehicular Technology, 2015, 64(4): 1364-1375. (Project page: http://drone.sjtu.edu.cn/dpzou/project/structslam.php )
📜 StructVIO: Zou D, Wu Y, Pei L, et al. StructVIO: Visual-inertial odometry with structural regularity of man-made environments [J]. IEEE Transactions on Robotics, 2019, 35(4): 999-1013.
36. Prof. Shuhui Bu (Intelligent Systems Laboratory, Northwestern Polytechnical University)
Research areas: semantic localization and mapping, SLAM, online and incremental learning
👦 Homepage: http://www.adv-ci.com/blog/ , Google Scholar
Prof. Bu's course slides: http://www.adv-ci.com/blog/course/
The lab's 2018 summer-camp materials: https://github.com/zdzhaoyong/SummerCamp2018
📜 An open-source general SLAM framework: Zhao Y, Xu S, Bu S, et al. GSLAM: A general SLAM framework and benchmark [C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 1110-1120. (Code: https://github.com/zdzhaoyong/GSLAM )
📜 Bu S, Zhao Y, Wan G, et al. Map2DFusion: Real-time incremental UAV image mosaicing based on monocular SLAM [C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 4564-4571. (Code: https://github.com/zdzhaoyong/Map2DFusion )
📜 Wang W, Zhao Y, Han P, et al. TerrainFusion: Real-time digital surface model reconstruction based on monocular SLAM [C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 7895-7902.
+1 Cyrill Stachniss (Photogrammetry & Robotics Lab, University of Bonn, Germany)
Research areas: probabilistic robotics, SLAM, autonomous navigation, visual and LiDAR perception, scene analysis and classification, unmanned aerial vehicles
Lab homepage: https://www.ipb.uni-bonn.de/
👦 Homepage: https://www.ipb.uni-bonn.de/people/cyrill-stachniss/ , Google Scholar
Publications: https://www.ipb.uni-bonn.de/publications/
Open-source code: https://github.com/PRBonn
📜 IROS 2019 LiDAR semantic SLAM: Chen X, Milioto A, Palazzolo E, et al. SuMa++: Efficient LiDAR-based semantic SLAM [C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2019: 4530-4537. (Code: https://github.com/PRBonn/semantic_suma/ )
Prof. Cyrill Stachniss's SLAM lectures: youtube ; bilibili
Another lab at Bonn, the Autonomous Intelligent Systems group: http://www.ais.uni-bonn.de/research.html
+1 ShanghaiTech University
+1 Robotics Institute, University of Michigan (USA)
Institute website: https://robotics.umich.edu/
Research areas: https://robotics.umich.edu/research/focus-areas/
Perceptual Robotics Laboratory (PeRL)
Lab homepage: http://robots.engin.umich.edu/About/
👦 Ryan M. Eustice: Google Scholar
📜 LiDAR dataset: Pandey G, McBride J R, Eustice R M. Ford campus vision and lidar data set [J]. The International Journal of Robotics Research, 2011, 30(13): 1543-1552. | Dataset
APRIL robotics lab
+1 Autonomous Systems Lab, ETH Zurich (Switzerland)
Research areas: robots and intelligent systems that operate autonomously in complex and diverse environments
Lab homepage: https://asl.ethz.ch/
Publications: https://asl.ethz.ch/publications-and-sources/publications.html
youtube | Github
👦 Cesar Cadena: homepage
📜 Schneider T, Dymczyk M, Fehr M, et al. maplab: An open framework for research in visual-inertial mapping and localization [J]. IEEE Robotics and Automation Letters, 2018, 3(3): 1418-1425. | Code
📜 Dubé R, Cramariuc A, Dugas D, et al. SegMap: 3D segment mapping using data-driven descriptors [J]. arXiv preprint arXiv:1804.09557, 2018. | Code
📜 Millane A, Taylor Z, Oleynikova H, et al. C-blox: A scalable and consistent TSDF-based dense mapping approach [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018: 995-1002. | Code
+1 Robust Robotics Group, MIT (USA)
+1 Vision for Robotics Lab, ETH Zurich (Switzerland)
+1 Prof. Lihua Xie (Nanyang Technological University, Singapore)
Research areas: control, multi-agent systems, localization
Homepage: https://personal.ntu.edu.sg/elhxie/research.html | Google Scholar
👦 Han Wang: homepage | Github
📜 Wang H, Wang C, Xie L. Intensity scan context: Coding intensity and geometry relations for loop closure detection [C]//2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020: 2095-2101. | Code
📜 Wang H, Wang C, Xie L. Lightweight 3-D localization and mapping for solid-state LiDAR [J]. IEEE Robotics and Automation Letters, 2021, 6(2): 1801-1807. | Code
📜 Wang C, Yuan J, Xie L. Non-iterative SLAM [C]//2017 18th International Conference on Advanced Robotics (ICAR). IEEE, 2017: 83-90.
3. SLAM Learning Materials
This section is still incomplete and will be filled in over time; contributions are welcome
3.1 Materials in Chinese
1) SLAMcn: http://www.slamcn.org/index.php/
2) Recent_SLAM_Research, tracking the latest SLAM research: https://github.com/YiChenCityU/Recent_SLAM_Research
3) SLAM summer camp of the NWPU Intelligent Systems Lab: https://github.com/zdzhaoyong/SummerCamp2018
Prof. Shuhui Bu's course slides: http://www.adv-ci.com/blog/course/
4) IROS 2019 workshop on challenges and applications of visual-inertial navigation: http://udel.edu/~ghuang/iros19-vins-workshop/index.html
5) PaoPao Robot's collection of VIO resources: https://github.com/PaoPaoRobot/Awesome-VIO
6) Huakun Cui: derivations and code walkthroughs of mainstream VIO papers: https://github.com/StevenCui/VIO-Doc
7) Yan Li: geometric and learning-based methods in SLAM
8) Shan Huang's state-estimation lecture videos: bilibili
9) Ping Tan's 6-hour SLAM course: bilibili
10) 2020 summer school on SLAM techniques and applications: videos on bilibili | slides
3.2 Materials in English
1) Research and resources on event cameras: https://github.com/uzh-rpg/event-based_vision_resources
2) Slides for Prof. Nikolay Atanasov's robot state estimation and perception course (UC San Diego): https://natanaso.github.io/ece276a2019/schedule.html
3) University of Bonn Mobile Sensing and Robotics Course lectures: youtube , bilibili
3.3 WeChat Accounts
泡泡机器人 SLAM (PaoPao Robot SLAM): paopaorobot_slam
3.4 Annotated Code
An idea that came up on 2020-04-25: even with all the open-source projects collected above, reading the original code can still be quite difficult. Thanks to the SLAM enthusiasts in China who share their annotated code, promoting exchange and shared progress. This subsection will be filled in gradually; recommendations for annotated code are welcome (leave a message in the issues).
3.5 Datasets
4. Recent Papers (continuously updated)
Papers added in June 2021 (20 papers)
This update was posted on July 5, 2021
20 papers in total, of which 7 are (to-be) open-source works
[4,9,13,14,16,17] LiDAR-related
[1,2,6,7] Mapping
1. Geometric SLAM
[1] Bokovoy A, Muravyev K, Yakovlev K. MAOMaps: A photo-realistic benchmark for vSLAM and map merging quality assessment [J]. arXiv preprint arXiv:2105.14994, 2021.
Russian Academy of Sciences; open-source dataset
[2] Demmel N, Sommer C, Cremers D, et al. Square root bundle adjustment for large-scale reconstruction [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 11723-11732.
[3] Chen Y, Zhao L, Zhang Y, et al. Anchor selection for SLAM based on graph topology and submodular optimization [J]. IEEE Transactions on Robotics, 2021.
University of Technology Sydney
[4] Zhou L, Koppel D, Kaess M. LiDAR SLAM with plane adjustment for indoor environment [J]. IEEE Robotics and Automation Letters, 2021.
Magic Leap, CMU
[5] Liu D, Parra A, Chin T J. Spatiotemporal registration for event-based visual odometry [J]. arXiv preprint arXiv:2103.05955, 2021.
University of Adelaide; open-source dataset (to be released)
2. Semantic / Deep SLAM / Mapping
[6] Wimbauer F, Yang N, von Stumberg L, et al. MonoRec: Semi-supervised dense reconstruction in dynamic environments from a single moving camera [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 6112-6122.
TUM; project page; code open-sourced
[7] Qin T, Zheng Y, Chen T, et al. RoadMap: A light-weight semantic map for visual localization towards autonomous driving [J]. arXiv preprint arXiv:2106.02527, 2021. (ICRA 2021)
[8] Tschopp F, Nieto J, Siegwart R Y, et al. Superquadric object representation for optimization-based semantic SLAM [J]. 2021.
ETH; Microsoft
[9] Miller I D, Cowley A, Konkimalla R, et al. Any way you look at it: Semantic crossview localization and mapping with LiDAR [J]. IEEE Robotics and Automation Letters, 2021, 6(2): 2397-2404.
University of Pennsylvania; code open-sourced
[10] Lu Y, Xu X, Ding M, et al. A global occlusion-aware approach to self-supervised monocular visual odometry [C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2021, 35(3): 2260-2268.
Renmin University of China
[11] Li S, Liu S, Zhao Q, et al. Quantized self-supervised local feature for real-time robot indirect VSLAM [J]. IEEE/ASME Transactions on Mechatronics, 2021.
Shanghai Jiao Tong University; journal: CAS ranking zone 2, JCR Q1, IF 5.3
3. Sensor Fusion
[12] Seiskari O, Rantalankila P, Kannala J, et al. HybVIO: Pushing the limits of real-time visual-inertial odometry [J]. arXiv preprint arXiv:2106.11857, 2021.
Spectacular AI, Aalto University, Tampere University
[13] Li L, Kong X, Zhao X, et al. SA-LOAM: Semantic-aided LiDAR SLAM with loop closure [J]. arXiv preprint arXiv:2106.11516, 2021. (ICRA 2021)
[14] Li K, Ouyang Z, Hu L, et al. Robust SRIF-based LiDAR-IMU localization for autonomous vehicles [J]. 2021. (ICRA 2021)
ShanghaiTech University
[15] Kumar H, Payne J J, Travers M, et al. Periodic SLAM: Using cyclic constraints to improve the performance of visual-inertial SLAM on legged robots [J].
[16] Zhou P, Guo X, Pei X, et al. T-LOAM: Truncated least squares LiDAR-only odometry and mapping in real time [J]. IEEE Transactions on Geoscience and Remote Sensing, 2021.
Wuhan University of Technology
[17] Jia Y, Luo H, Zhao F, et al. Lvio-Fusion: A self-adaptive multi-sensor fusion SLAM framework using actor-critic method [J]. arXiv preprint arXiv:2106.06783, 2021.
BUPT, Institute of Computing Technology (CAS); code open-sourced
4. Others
[18] Huang R, Fang C, Qiu K, et al. AR Mapping: Accurate and efficient mapping for augmented reality [J]. arXiv preprint arXiv:2103.14846, 2021.
Alibaba
[19] Kim A, Ošep A, Leal-Taixé L. EagerMOT: 3D multi-object tracking via sensor fusion [J]. arXiv preprint arXiv:2104.14682, 2021. (ICRA 2021)
TUM; code open-sourced
[20] Wang J, Zhong Y, Dai Y, et al. Deep two-view structure-from-motion revisited [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 8953-8962.
Australian National University, NWPU, NVIDIA
2021 年 5 月论文更新(20 篇)
本期更新于 2021 年 6 月 13 日
共 20 篇论文,其中 6 项(待)开源工作
[2] SLAM 中的隐私保护问题近年来受到关注
[4,5,6] 基于线的 SLAM/SFM
1. Geometric SLAM
[1] Geppert M, Larsson V, Speciale P, et al. Privacy Preserving Localization and Mapping from Uncalibrated Cameras [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021 : 1809-1819.
未校准相机的隐私保护定位和建图
ETH, Microsift
[2] Hermann M, Ruf B, Weinmann M. Real-time dense 3D Reconstruction from monocular video data captured by low-cost UAVs [J]. arXiv preprint arXiv:2104.10515, 2021 .
低成本无人机捕获的单目视频数据的实时稠密三维重建
KIT
[3] Zhang J, Zhu C, Zheng L, et al. ROSEFusion: Random Optimization for Online Dense Reconstruction under Fast Camera Motion [J]. arXiv preprint arXiv:2105.05600, 2021 . (SIGGRAPH 2021)
[4] Xu B, Wang P, He Y, et al. Leveraging Structural Information to Improve Point Line Visual-Inertial Odometry [J]. arXiv preprint arXiv:2105.04064, 2021 .
利用结构信息改进点线 VIO
武汉大学;东北大学;代码开源
[5] Liu Z, Shi D, Li R, et al. PLC-VIO: Visual-Inertial Odometry Based on Point-Line Constraints [J]. IEEE Transactions on Automation Science and Engineering, 2021 .
[6] Mateus A, Tahri O, Aguiar A P, et al. On Incremental Structure from Motion Using Lines [J]. IEEE Transactions on Robotics, 2021 .
[7] Patel M, Bandopadhyay A, Ahmad A. Collaborative Mapping of Archaeological Sites using multiple UAVs [J]. arXiv preprint arXiv:2105.07644, 2021 .
[8] Chiu C Y, Sastry S S. Simultaneous Localization and Mapping: A Rapprochement of Filtering and Optimization-Based Approaches [J]. 2021 .
SLAM:滤波和优化方法的结合
加州大学伯克利分校硕士学位论文
2. Semantic / Deep SLAM / Mapping
[9] Karkus P, Cai S, Hsu D. Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021 : 2815-2825.
可微 SLAM 网络:用于视觉导航的学习型粒子 SLAM
新加坡国立大学;项目主页
[10] Ok K, Liu K, Roy N. Hierarchical Object Map Estimation for Efficient and Robust Navigation [C]//Proc. ICRA. 2021
[11] Adu-Bredu A, Del Coro N, Liu T, et al. GODSAC*: Graph Optimized DSAC* for Robot Relocalization [J]. arXiv preprint arXiv:2105.00546, 2021 .
用于机器人重定位的图优化 DSAC*
密歇根大学;代码开源
[12] Xu D, Vedaldi A, Henriques J F. Moving SLAM: Fully Unsupervised Deep Learning in Non-Rigid Scenes [J]. arXiv preprint arXiv:2105.02195, 2021 .
Moving SLAM:非刚性场景中的完全无监督深度学习
港科,牛津大学
[13] Ma J, Ye X, Zhou H, et al. Loop-Closure Detection Using Local Relative Orientation Matching [J]. IEEE Transactions on Intelligent Transportation Systems, 2021 .
[14] Çatal O, Jansen W, Verbelen T, et al. LatentSLAM: unsupervised multi-sensor representation learning for localization and mapping [J]. arXiv preprint arXiv:2105.03265, 2021 .
LatentSLAM:用于定位和建图的无监督多传感器表示学习
根特大学,安特卫普大学;数据集
3. Sensor Fusion
[15] Zhao S, Zhang H, Wang P, et al. Super Odometry: IMU-centric LiDAR-Visual-Inertial Estimator for Challenging Environments [J]. arXiv preprint arXiv:2104.14938, 2021 .
超级里程计:用于挑战性环境的以 IMU 为中心的 LiDAR-Visual-Inertial 状态估计器
CMU
[16] Nguyen T M, Yuan S, Cao M, et al. VIRAL SLAM: Tightly Coupled Camera-IMU-UWB-Lidar SLAM [J]. arXiv preprint arXiv:2105.03296, 2021 .
VIRAL SLAM:紧耦合相机-IMU-UWB-Lidar SLAM
南洋理工大学
[17] Nguyen T M, Yuan S, Cao M, et al. MILIOM: Tightly Coupled Multi-Input Lidar-Inertia Odometry and Mapping [J]. IEEE Robotics and Automation Letters, 2021 , 6(3): 5573-5580.
MILIOM:紧耦合多源激光雷达-惯性里程计和建图
南洋理工大学
4. Others
2021 年 4 月论文更新(20 篇)
本期更新于 2021 年 5 月 11 日
共 20 篇论文,其中 5 篇来自于 CVPR2021,7 项开源工作
[7, 8, 9] Event-based
[10] VDO-SLAM 作者博士学位论文
[3, 4, 5, 17] 线、平面
[14] NeuralRecon
1. Geometric SLAM
[1] Jang Y, Oh C, Lee Y, et al. Multirobot Collaborative Monocular SLAM Utilizing Rendezvous [J]. IEEE Transactions on Robotics, 2021 .
[2] Luo H, Pape C, Reithmeier E. Hybrid Monocular SLAM Using Double Window Optimization [J]. IEEE Robotics and Automation Letters, 2021 , 6(3): 4899-4906.
[3] Yunus R, Li Y, Tombari F. ManhattanSLAM: Robust Planar Tracking and Mapping Leveraging Mixture of Manhattan Frames [J]. arXiv preprint arXiv:2103.15068, 2021 .
ManhattanSLAM:利用混合曼哈顿世界的鲁棒平面跟踪与建图
TUM
[4] Wang Q, Yan Z, Wang J, et al. Line Flow Based Simultaneous Localization and Mapping [J]. IEEE Transactions on Robotics, 2021 .
基于线流的 SLAM
北大(去年 9 月的 Preprint)
[5] Vakhitov A, Ferraz L, Agudo A, et al. Uncertainty-Aware Camera Pose Estimation From Points and Lines [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021 : 4659-4668.
基于点线不确定性感知的相机位姿估计
SLAMCore;代码开源
[6] Liu D, Parra A, Chin T J. Spatiotemporal Registration for Event-based Visual Odometry [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021 : 4937-4946.
[7] Jiao J, Huang H, Li L, et al. Comparing Representations in Tracking for Event Camera-based SLAM [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021 : 1369-1376.
对比基于事件相机的 SLAM 中用于跟踪的不同事件表示
港科,港大;代码开源
[8] Zhou Y, Gallego G, Shen S. Event-based stereo visual odometry [J]. IEEE Transactions on Robotics, 2021 .
基于事件的双目视觉里程计
港科、柏林工业大学;代码开源 ,(去年 8 月的 Preprint)
[9] Min Z, Dunn E. VOLDOR-SLAM: For the Times When Feature-Based or Direct Methods Are Not Good Enough [J]. arXiv preprint arXiv:2104.06800, 2021.
VOLDOR-SLAM: 当基于特征或直接法不够好时
史蒂文斯理工学院;代码开源
[10] Henein M. Meta Information in Graph-based Simultaneous Localisation and Mapping [D]. 2020
基于图的 SLAM 中的元信息
澳大利亚国立大学 Mina Henein 博士学位论文
VDO-SLAM: A Visual Dynamic Object-aware SLAM System.
Dynamic SLAM: The Need for Speed.
2. Semantic / Deep SLAM / Mapping
[11] Li S, Wu X, Cao Y, et al. Generalizing to the Open World: Deep Visual Odometry with Online Adaptation [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021 : 13184-13193.
[12] Qiu K, Chen S, Zhang J, et al. Compact 3D Map-Based Monocular Localization Using Semantic Edge Alignment [J]. arXiv preprint arXiv:2103.14826, 2021 .
使用语义边缘对齐的基于 3D 紧凑地图的单目定位
阿里巴巴
[13] Cheng W, Yang S, Zhou M, et al. Road Mapping and Localization using Sparse Semantic Visual Features [J]. IEEE Robotics and Automation Letters, 2021 .
[14] Sun J, Xie Y, Chen L, et al. NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021 : 15598-15607.
NeuralRecon:从单目视频中进行实时连贯 3D 重建
浙大,商汤;代码开源
3. Sensor Fusion
[15] Shan T, Englot B, Ratti C, et al. LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping [J]. arXiv preprint arXiv:2104.10831, 2021 . (ICRA2021)
LVI-SAM:基于平滑与建图的紧耦合激光雷达-视觉-惯性里程计框架
MIT;代码开源
[16] Zhang K, Yang T, Ding Z, et al. The Visual-Inertial-Dynamical UAV Dataset [J]. arXiv preprint arXiv:2103.11152, 2021 .
[17] Amblard V, Osedach T P, Croux A, et al. Lidar-Monocular Surface Reconstruction Using Line Segments [J]. arXiv preprint arXiv:2104.02761, 2021 .
[18] Wei B, Trigoni N, Markham A. iMag+: An Accurate and Rapidly Deployable Inertial Magneto-Inductive SLAM System [J]. IEEE Transactions on Mobile Computing, 2021 .
iMag+: 一种准确且可快速部署的惯性磁感应 SLAM 系统
诺森比亚大学、牛津大学
4. Others
2021 年 3 月论文更新(23 篇)
本期更新于 2021 年 4 月 15 日
共 23 篇论文,其中 9 项(待)开源工作
[3, 4, 5] 线、面 SLAM
[6, 7] 滤波方法
[10] DynaSLAM 作者博士学位论文
[9, 12, 13, 17, 23] LiDAR
1. Geometric SLAM
[1] Liu P, Zuo X, Larsson V, et al. MBA-VO: Motion Blur Aware Visual Odometry [J]. arXiv preprint arXiv:2103.13684, 2021 .
MBA-VO: 运动模糊感知的视觉里程计
ETH,Microsoft
[2] Chen H, Hu W, Yang K, et al. Panoramic annular SLAM with loop closure and global optimization [J]. arXiv preprint arXiv:2102.13400, 2021 .
具有闭环和全局优化的全景环形 SLAM
浙大、KIT
[3] Wang X, Christie M, Marchand E. TT-SLAM: Dense Monocular SLAM for Planar Environments [C]//IEEE International Conference on Robotics and Automation, ICRA'21. 2021 .
TT-SLAM: 面向平面环境的单目稠密 SLAM
法国雷恩大学
[4] Lim H, Kim Y, Jung K, et al. Avoiding Degeneracy for Monocular Visual SLAM with Point and Line Features [J]. arXiv preprint arXiv:2103.01501, 2021 . (ICRA2021)
使用点线特征避免单目视觉 SLAM 退化
韩国科学技术院
[5] Lu J, Fang Z, Gao Y, et al. Line-based visual odometry using local gradient fitting [J]. Journal of Visual Communication and Image Representation, 2021 , 77: 103071.
使用局部梯度拟合的基于线的视觉里程计
上海工程技术大学
[6] Wang J, Meng Z, Wang L. A UPF-PS SLAM Algorithm for Indoor Mobile Robot with Non-Gaussian Detection Model [J]. IEEE/ASME Transactions on Mechatronics, 2021 .
具有非高斯检测模型的室内移动机器人 UPF-PS SLAM 算法
清华大学
[7] Gao L, Battistelli G, Chisci L. PHD-SLAM 2.0: Efficient SLAM in the Presence of Missdetections and Clutter [J]. IEEE Transactions on Robotics, 2021 .
PHD-SLAM 2.0:漏检和杂波污染情况下的高效 SLAM
电子科大,佛罗伦萨大学;代码开源
[8] Labbé M, Michaud F. Multi-session visual SLAM for illumination invariant localization in indoor environments [J]. arXiv preprint arXiv:2103.03827, 2021 .
用于室内环境光照不变定位的多会话视觉 SLAM
谢布鲁克大学
[9] Yokozuka M, Koide K, Oishi S, et al. LiTAMIN2: Ultra Light LiDAR-based SLAM using Geometric Approximation applied with KL-Divergence [J]. arXiv preprint arXiv:2103.00784, 2021 . (ICRA2021)
使用几何近似和 KL 散度的超轻型基于 LiDAR 的 SLAM
日本先进工业科学技术研究所
2. Semantic/Deep SLAM
[10] Visual SLAM in Dynamic Environments [D]. 2020
动态环境中视觉 SLAM
萨拉戈萨大学 Berta Bescos 博士学位论文,Github
DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes. 2018
DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM. 2020
Empty Cities: a Dynamic-Object-Invariant Space for Visual SLAM. 2021
[11] Zhan H, Weerasekera C S, Bian J W, et al. DF-VO: What Should Be Learnt for Visual Odometry? [J]. arXiv preprint arXiv:2103.00933, 2021 .
[12] Westfechtel T, Ohno K, Akegawa T, et al. Semantic Mapping of Construction Site From Multiple Daily Airborne LiDAR Data [J]. IEEE Robotics and Automation Letters, 2021 , 6(2): 3073-3080.
基于多日机载 LiDAR 数据的施工现场语义建图
东京大学
[13] Habich T L, Stuede M, Labbé M, et al. Have I been here before? Learning to Close the Loop with LiDAR Data in Graph-Based SLAM [J]. arXiv preprint arXiv:2103.06713, 2021 .
在图 SLAM 中学习使用 LiDAR 数据的闭环
汉诺威大学
[14] Jayasuriya M, Arukgoda J, Ranasinghe R, et al. UV-Loc: A Visual Localisation Strategy for Urban Environments [J]. 2021
城市环境中视觉定位策略
悉尼科技大学
pole-like landmarks and ground surface boundaries
[15] Sarlin P E, Unagar A, Larsson M, et al. Back to the Feature: Learning Robust Camera Localization from Pixels to Pose [J]. arXiv preprint arXiv:2103.09213, 2021 . (CVPR2021)
回归到特征:从像素中学习鲁棒的相机定位
ETH;代码开源
[16] Zhang J, Sui W, Wang X, et al. Deep Online Correction for Monocular Visual Odometry [J]. arXiv preprint arXiv:2103.10029, 2021 . (ICRA2021)
3. Sensor Fusion
[17] Lin J, Zheng C, Xu W, et al. R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping [J]. arXiv preprint arXiv:2102.12400, 2021 .
R2LIVE:一种鲁棒的、实时的 LiDAR-惯性-视觉紧耦合状态估计和建图方法
香港大学;代码开源
[18] Cao S, Lu X, Shen S. GVINS: Tightly Coupled GNSS-Visual-Inertial for Smooth and Consistent State Estimation [J]. arXiv e-prints, 2021 : arXiv: 2103.07899.
GVINS: 用于平滑和一致性状态估计的紧耦合 GNSS-视觉-惯性系统
港科;代码开源
[19] Zhu P, Geneva P, Ren W, et al. Distributed Visual-Inertial Cooperative Localization [J]. arXiv preprint arXiv:2103.12770, 2021 .
分布式视觉-惯性协同定位
加利福尼亚大学、特拉华大学;video
[20] Peng X, Liu Z, Wang Q, et al. Accurate Visual-Inertial SLAM by Feature Re-identification [J]. arXiv preprint arXiv:2102.13438, 2021 .
[21] Reinke A, Chen X, Stachniss C. Simple But Effective Redundant Odometry for Autonomous Vehicles [J]. arXiv preprint arXiv:2105.11783, 2021 . (ICRA2021)
用于自动驾驶的简易且有效的冗余里程计
波恩大学;代码开源 (待公开)
[22] Ram K, Kharyal C, Harithas S S, et al. RP-VIO: Robust Plane-based Visual-Inertial Odometry for Dynamic Environments [J]. arXiv preprint arXiv:2103.10400, 2021 .
RP-VIO: 动态环境中鲁棒的基于平面的 VIO
印度理工学院海得拉巴机器人研究中心;代码开源 (基于 VINS)
[23] Kramer A, Harlow K, Williams C, et al. ColoRadar: The Direct 3D Millimeter Wave Radar Dataset [J]. arXiv preprint arXiv:2103.04510, 2021 .
直接 3D 毫米波雷达数据集
科罗拉多大学博尔德分校;开源数据集
2021 年 2 月论文更新(21 篇)
本期更新于 2021 年 3 月 20 日
共 21 篇论文,其中 8 项开源工作
[3][4] VIO 数据集
[9][10] 杆状特征
1. Geometric SLAM
[1] Ferrera M, Eudes A, Moras J, et al. OV²SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications [J]. IEEE Robotics and Automation Letters, 2021 , 6(2): 1399-1406.
适用于实时应用的完全在线、多功能 SLAM
IFREMER;代码开源
[2] Gladkova M, Wang R, Zeller N, et al. Tight-Integration of Feature-Based Relocalization in Monocular Direct Visual Odometry [J]. arXiv preprint arXiv:2102.01191, 2021 .
单目直接法视觉里程计中基于特征重定位的紧耦合
TUM
[3] Zhang H, Jin L, Ye C. The VCU-RVI Benchmark: Evaluating Visual Inertial Odometry for Indoor Navigation Applications with an RGB-D Camera [C]//2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020: 6209-6214.
VCU-RVI Benchmark:使用 RGB-D 相机评估室内导航应用中 VIO 的基准数据集
弗吉尼亚联邦大学;数据集地址
[4] Minoda K, Schilling F, Wüest V, et al. VIODE: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments [J]. IEEE Robotics and Automation Letters, 2021 , 6(2): 1343-1350.
VIODE:用于解决 VIO 在动态环境中挑战的仿真数据集
东京大学,洛桑联邦理工;数据集地址
[5] Younes G. A Unified Hybrid Formulation for Visual SLAM [D]. 2021 .
[6] Shakeri M, Loo S Y, Zhang H, et al. Polarimetric Monocular Dense Mapping Using Relative Deep Depth Prior [J]. IEEE Robotics and Automation Letters, 2021 , 6(3): 4512-4519.
利用相对深度先验的偏振相机单目密集建图
阿尔伯塔大学
[7] Wang H, Wang C, Xie L. Intensity-SLAM: Intensity Assisted Localization and Mapping for Large Scale Environment [J]. IEEE Robotics and Automation Letters, 2021 , 6(2): 1715-1721.
大规模环境中强度特征辅助的 LiDAR 定位与建图
NTU, CMU
[8] Doan A, Latif Y, Chin T, et al. HM⁴: hidden Markov model with memory management for visual place recognition [J]. IEEE Robotics and Automation Letters, vol. 6, no. 1, pp. 167-174, Jan. 2021 .
用于视觉位置识别的具有内存管理功能的隐马尔可夫模型
阿德莱德大学
[9] Li L, Yang M, Weng L, et al. Robust Localization for Intelligent Vehicles Based on Pole-Like Features Using the Point Cloud [J]. IEEE Transactions on Automation Science and Engineering, 2021 .
基于杆状点云特征的智能车鲁棒定位
上海交大;期刊:中科院一区,JCR Q1,IF 4.9
[10] Tschopp F, von Einem C, Cramariuc A, et al. Hough²Map – Iterative Event-Based Hough Transform for High-Speed Railway Mapping [J]. IEEE Robotics and Automation Letters, 2021 , 6(2): 2745-2752.
[11] Ma X, Liang X. Point-line-based RGB-D SLAM and Bundle Adjustment Uncertainty Analysis [J]. arXiv preprint arXiv:2102.07110, 2021 .
基于点线的 RGB-D SLAM 和 BA 的不确定性分析
上交
2. Semantic/Deep SLAM
[12] Rosinol A, Violette A, Abate M, et al. Kimera: from slam to spatial perception with 3d dynamic scene graphs [J]. arXiv preprint arXiv:2101.06894, 2021 .
Kimera: 从 SLAM 到具有 3D 动态场景图的空间感知
MIT;项目主页
[13] Liu Y, Liu J, Hao Y, et al. A Switching-Coupled Backend for Simultaneous Localization and Dynamic Object Tracking [J]. IEEE Robotics and Automation Letters, 2021 , 6(2): 1296-1303.
一种用于同时定位与动态物体跟踪的切换耦合后端
清华大学
[14] Feng Q, Atanasov N. Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning [J]. arXiv preprint arXiv:2101.01844, 2021 .
使用 2D-3D 联合学习从航空图像进行室外地形建图的 Mesh 重建
UCSD Nikolay Atanasov
[15] Wong Y S, Li C, Nießner M, et al. RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects [J]. Eurographics, 2021 .
RigidFusion: 移动刚体运动场景的 RGB-D 重建
UCL, TUM;补充材料
[16] Sun S, Melamed D, Kitani K. IDOL: Inertial Deep Orientation-Estimation and Localization [J]. arXiv preprint arXiv:2102.04024, 2021 .(AAAI 2021)
[17] Qin C, Zhang Y, Liu Y, et al. Semantic loop closure detection based on graph matching in multi-objects scenes [J]. Journal of Visual Communication and Image Representation, 2021 , 76: 103072.
3. Sensor Fusion
4. Others
2021 年 1 月论文更新(20 篇)
本期更新于 2021 年 2 月 13 日
共 20 篇论文,其中 2 项开源工作
[1] 长期定位
[10] Building Fusion
[11] Mesh Reconstruction
1. Geometric SLAM
[1] Rotsidis A, Lutteroth C, Hall P, et al. ExMaps: Long-Term Localization in Dynamic Scenes using Exponential Decay [C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021 : 2867-2876.
[2] Wen T, Xiao Z, Wijaya B, et al. High Precision Vehicle Localization based on Tightly-coupled Visual Odometry and Vector HD Map [C]//2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020: 672-679.
基于视觉里程计和矢量高清地图紧耦合的高精度车辆定位
清华大学
[3] Lee S J, Kim D, Hwang S S, et al. Local to Global: Efficient Visual Localization for a Monocular Camera [C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021 : 2231-2240.
Local to Global:单目相机高效视觉定位
人工特征用于实时里程计,基于学习的特征用于定位,地图对齐
[4] Jung K Y, Kim Y E, Lim H J, et al. ALVIO: Adaptive Line and Point Feature-based Visual Inertial Odometry for Robust Localization in Indoor Environments [J]. arXiv preprint arXiv:2012.15008, 2020 .
基于自适应点线特征的视觉惯性里程计在室内环境中的鲁棒定位
韩国高等科学技术学院
[5] Yang A J, Cui C, Bârsan I A, et al. Asynchronous Multi-View SLAM [J]. arXiv preprint arXiv:2101.06562, 2021 .
[6] Lyu Y, Nguyen T M, Liu L, et al. SPINS: Structure Priors aided Inertial Navigation System [J]. arXiv preprint arXiv:2012.14053, 2020 .
[7] Pan Y, Xu X, Ding X, et al. GEM: online globally consistent dense elevation mapping for unstructured terrain [J]. IEEE Transactions on Instrumentation and Measurement, 2020 .
非结构化地形的在线全局一致稠密高程图
浙大;期刊:中科院三区,JCR Q1,IF 3.6
[8] Tian R, Zhang Y, Zhu D, et al. Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry [J]. arXiv preprint arXiv:2101.05995, 2021 .
基于平面几何单目视觉里程计的准确鲁棒尺度恢复
东北大学,香港中文大学
[9] Fang B, Mei G, Yuan X, et al. Visual SLAM for robot navigation in healthcare facility [J]. Pattern Recognition, 2021 : 107822.
用于医疗机构机器人导航的视觉 SLAM
合肥工业大学;期刊:中科院二区,JCR Q1,IF 7.2
2. Semantic/Deep SLAM
[10] Zheng T, Zhang G, Han L, et al. Building Fusion: Semantic-aware Structural Building-scale 3D Reconstruction [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020 .
Building Fusion:语义感知结构化建筑规模的三维重建
清华-伯克利深圳学院,清华大学
[11] Feng Q, Atanasov N. Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning [J]. arXiv preprint arXiv:2101.01844, 2021 .
使用 2D-3D 联合学习从空中图像进行室外地形的网格重建
加州大学圣地亚哥分校 Nikolay A. Atanasov
[12] Ma T, Wang Y, Wang Z, et al. ASD-SLAM: A Novel Adaptive-Scale Descriptor Learning for Visual SLAM [C]//2020 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2020: 809-816.
一种新的视觉 SLAM 自适应尺度描述符学习方法
上交;代码开源
[13] Li B, Hu M, Wang S, et al. Self-supervised Visual-LiDAR Odometry with Flip Consistency [C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2021 : 3844-3852.
[14] Akilan T, Johnson E, Sandhu J, et al. A Hybrid Learner for Simultaneous Localization and Mapping [J]. arXiv preprint arXiv:2101.01158, 2021 .
用于 SLAM 的混合学习器
Lakehead University
3. Sensor Fusion
[15] Cowley A, Miller I D, Taylor C J. UPSLAM: Union of Panoramas SLAM [J]. arXiv preprint arXiv:2101.00585, 2021 .
[16] Jiang Z, Taira H, Miyashita N, et al. VIO-Aided Structure from Motion Under Challenging Environments [J]. arXiv preprint arXiv:2101.09657, 2021 .
[17] Zhai C, Wang M, Yang Y, et al. Robust Vision-Aided Inertial Navigation System for Protection Against Ego-Motion Uncertainty of Unmanned Ground Vehicle [J]. IEEE Transactions on Industrial Electronics, 2020 .
鲁棒的视觉辅助惯性导航系统,避免地面无人车的自我运动估计不确定性
北理工;期刊:中科院一区 JCR Q1,IF 7.5
[18] Palieri M, Morrell B, Thakur A, et al. LOCUS: A Multi-Sensor Lidar-Centric Solution for High-Precision Odometry and 3D Mapping in Real-Time [J]. IEEE Robotics and Automation Letters, 2020 , 6(2): 421-428.
用于高精度实时里程计和三维建图的以激光雷达为中心的多传感器框架
加州理工
4. Others
2020 年 12 月论文更新(18 篇)
本期更新于 2021 年 1 月 02 日
共 18 篇论文,其中 6 项(待)开源工作
[9] CodeVIO、[10] CamVox
1. Geometric SLAM
[1] Mascaro R, Wermelinger M, Hutter M, et al. Towards automating construction tasks: Large‐scale object mapping, segmentation, and manipulation [J]. Journal of Field Robotics, 2020 .
【挖掘机抓石头】实现自动化的施工任务:大型物体建图,分割和操纵
ETH,期刊:中科院二区,JCR Q1
[2] Yang X, Yuan Z, Zhu D, et al. Robust and Efficient RGB-D SLAM in Dynamic Environments [J]. IEEE Transactions on Multimedia, 2020 .
动态环境中鲁棒高效 RGB-D SLAM
华中科大,期刊:中科院二区,JCR Q1,IF 5.5
[3] Yazdanpour M, Fan G, Sheng W. ManhattanFusion: Online Dense Reconstruction of Indoor Scenes from Depth Sequences [J]. IEEE Transactions on Visualization and Computer Graphics, 2020 .
ManhattanFusion:从深度序列中对室内场景进行在线稠密重建
北肯塔基大学,期刊:中科院二区,JCR Q1,IF 4.3
[4] Fourie D, Rypkema N R, Teixeira P V, et al. Towards Real-Time Non-Gaussian SLAM for Underdetermined Navigation [J]. IROS 2020
2. Semantic/Deep SLAM
[5] Garg S, Sünderhauf N, Dayoub F, et al. Semantics for Robotic Mapping, Perception and Interaction: A Survey [J]. arXiv preprint arXiv:2101.00443, 2020 .
用于机器人建图、感知和交互的语义
昆士兰科技大学,阿德莱德大学、澳大利亚机器人中心,962 篇论文综述
[6] Nubert J, Khattak S, Hutter M. Self-supervised Learning of LiDAR Odometry for Robotic Applications [J]. arXiv preprint arXiv:2011.05418, 2020 .
用于机器人应用的基于自监督学习的 LiDAR 里程计
ETH,代码开源
[7] Thomas H, Agro B, Gridseth M, et al. Self-Supervised Learning of Lidar Segmentation for Autonomous Indoor Navigation [J]. arXiv preprint arXiv:2012.05897, 2020 .
自主室内导航 LIDAR 分割的自监督学习
多伦多大学,Apple
[8] Huynh L, Nguyen P, Matas J, et al. Boosting Monocular Depth Estimation with Lightweight 3D Point Fusion [J]. arXiv preprint arXiv:2012.10296, 2020 .
轻量级 3D 点融合(立体匹配/SLAM)提高单目深度估计
奥卢大学
3. Sensor Fusion
[9] Zuo X, Merrill N, Li W, et al. CodeVIO: Visual-Inertial Odometry with Learned Optimizable Dense Depth [J]. arXiv preprint arXiv:2012.10133, 2020 .
CodeVIO: 具有学习可优化稠密深度的 VIO
ETH,浙大,ICRA2021 投稿论文,video
[10] Zhu Y, et al. CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System [J]. arXiv preprint arXiv:2011.11357, 2020 .
低成本、精确的 Lidar 辅助视觉 SLAM 系统
南方科技大学,代码开源 ,ICRA2021 投稿论文
[11] Gong Z, Liu P, Wen F, et al. Graph-Based Adaptive Fusion of GNSS and VIO Under Intermittent GNSS-Degraded Environment [J]. IEEE Transactions on Instrumentation and Measurement, 2020 , 70: 1-16.
间歇性 GNSS 退化环境下的基于图的自适应 GNSS-VIO 融合
上海交大
[12] Wu Y, Li Y, Li W, et al. Robust Lidar-Based Localization Scheme for Unmanned Ground Vehicle via Multisensor Fusion [J]. IEEE Transactions on Neural Networks and Learning Systems, 2020 .
基于激光雷达的多传感器融合无人地面车辆鲁棒定位方法
广东工业大学,期刊:中科院一区,JCR Q1, IF 8.8
[13] Zhu P, Ren W. Multi-Robot Joint Visual-Inertial Localization and 3-D Moving Object Tracking [J]. IROS 2020
多机器人联合视觉惯性定位和 3D 运动目标跟踪
加州大学河滨分校
4. Others
[14] Xu M, Snderhauf N, Milford M. Probabilistic Visual Place Recognition for Hierarchical Localization [J]. IEEE Robotics and Automation Letters, 2020 , 6(2): 311-318.
用于分层定位的概率视觉场景识别
昆士兰科技大学, 代码开源
[15] Li D, Miao J, Shi X, et al. RaP-Net: A Region-wise and Point-wise Weighting Network to Extract Robust Keypoints for Indoor Localization [J]. arXiv preprint arXiv:2012.00234, 2020 .
RaP-Net:用于室内定位鲁棒关键点提取的区域级与点级加权网络
清华、北交大、北航、Intel, 代码开源
[16] Bui M, Birdal T, Deng H, et al. 6D Camera Relocalization in Ambiguous Scenes via Continuous Multimodal Inference [J]. arXiv preprint arXiv:2004.04807, 2020 .(ECCV 2020)
通过连续多峰推理在模糊场景中进行 6D 相机重定位
斯坦福,TUM, 项目主页 ,代码开源
[17] Alves N, et al. Low-latency Perception in Off-Road Dynamical Low Visibility Environments [J]. arXiv preprint arXiv:2012.13014, 2020 .
[18] Chen C, Al-Halah Z, Grauman K. Semantic Audio-Visual Navigation [J]. arXiv preprint arXiv:2012.11583, 2020 .
语义视-听导航
UT Austin, Facebook,项目主页
2020 年 11 月论文更新(20 篇)
本期更新于 2020 年 12 月 07 日
共 20 篇论文,其中 6 项(待)开源工作
其中近一半来自于 IROS 2020 的录用论文和 ICRA 2021 的投稿论文
1. Geometric SLAM
[1] Kim C, Kim J, Kim H J. Edge-based Visual Odometry with Stereo Cameras using Multiple Oriented Quadtrees [J]. IROS 2020
使用多个定向四叉树的基于边的双目视觉里程计
首尔国立大学
[2] Jaenal A, Zuniga-Noël D, Gomez-Ojeda R, et al. Improving Visual SLAM in Car-Navigated Urban Environments with Appearance Maps [J]. IROS 2020
通过外观地图改善城市环境汽车导航的视觉 SLAM
马拉加大学;video
[3] Chen L, Zhao Y, Xu S, et al. DenseFusion: Large-Scale Online Dense Pointcloud and DSM Mapping for UAVs [J]. IROS 2020
[4] Arndt C, Sabzevari R, Civera J. From Points to Planes-Adding Planar Constraints to Monocular SLAM Factor Graphs [J]. IROS 2020
From Points to Planes:在单目 SLAM 因子图中添加平面约束
西班牙萨拉戈萨大学
[5] Giubilato R, Le Gentil C, Vayugundla M, et al. GPGM-SLAM: Towards a Robust SLAM System for Unstructured Planetary Environments with Gaussian Process Gradient Maps [C]//IROS Workshop on Planetary Exploration Robots: Challenges and Opportunities (PLANROBO20). ETH Zurich, Department of Mechanical and Process Engineering, 2020 .
GPGM-SLAM:具有高斯过程梯度图的非结构化行星环境的鲁棒 SLAM 系统
DLR,TUM
2. Semantic/Deep SLAM
[6] Chang Y, Tian Y, How J P, et al. Kimera-Multi: a System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping [J]. arXiv preprint arXiv:2011.04087, 2020 .
Kimera-Multi: 分布式多机器人度量语义 SLAM 系统
MIT
[7] Sharma A, Dong W, Kaess M. Compositional Scalable Object SLAM [J]. arXiv preprint arXiv:2011.02658, 2020 .
组合式、可扩展的物体级 SLAM
CMU;ICRA2021 投稿论文;待开源
[8] Wang W, Hu Y, Scherer S. TartanVO: A Generalizable Learning-based VO [J]. arXiv preprint arXiv:2011.00359, 2020 .
TartanVO:一种通用的基于学习的 VO
CMU,代码开源
[9] Wimbauer F, Yang N, von Stumberg L, et al. MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera [J]. arXiv preprint arXiv:2011.11814, 2020 .
动态环境中单个移动相机的半监督稠密重构
TUM;项目主页
相关研究:
CVPR 2020 D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry
ECCV 2018 Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry
[10] Almalioglu Y, Santamaria-Navarro A, Morrell B, et al. Unsupervised Deep Persistent Monocular Visual Odometry and Depth Estimation in Extreme Environments [J]. arXiv preprint arXiv:2011.00341, 2020 .
极端环境下的无监督持久性的单目视觉里程计和深度估计
牛津大学,NASA;ICRA2021 投稿论文
[11] Zou Y, Ji P, Tran Q H, et al. Learning monocular visual odometry via self-supervised long-term modeling [J]. arXiv preprint arXiv:2007.10983, 2020. (ECCV 2020 )
[12] Nubert J, Khattak S, Hutter M. Self-supervised Learning of LiDAR Odometry for Robotic Applications [J]. arXiv preprint arXiv:2011.05418, 2020 .
应用于机器人的自监督学习 LiDAR 里程计
ETH;代码开源
[13] Chancán M, Milford M. DeepSeqSLAM: A Trainable CNN+ RNN for Joint Global Description and Sequence-based Place Recognition [J]. arXiv preprint arXiv:2011.08518, 2020 .
用于联合全局描述和基于序列的位置识别的可训练 CNN+RNN
昆士兰科技大学;代码开源 ,项目主页
3. Sensor Fusion
[14] Zhao S, Wang P, Zhang H, et al. TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint [J]. arXiv preprint arXiv:2012.03455, IROS 2020 .
TP-TIO: 一种使用深度 ThermalPoint 网络的红外视觉-惯性里程计
CMU,东北大学,video
[15] Jaekel J, Mangelson J G, Scherer S, et al. A Robust Multi-Stereo Visual-Inertial Odometry Pipeline [J]. IROS 2020 .
[16] Huang H, Ye H, Jiao J, et al. Geometric Structure Aided Visual Inertial Localization [J]. arXiv preprint arXiv:2011.04173, 2020 .
几何结构辅助的视觉惯性定位
港科,ICRA 2021 投稿论文
[17] Ding Z, Yang T, Zhang K, et al. VID-Fusion: Robust Visual-Inertial-Dynamics Odometry for Accurate External Force Estimation [J]. arXiv preprint arXiv:2011.03993, 2020 .
VID-Fusion: 用于准确外力估计的鲁棒视觉-惯性-动力学里程计
浙大 FAST Lab
[18] Li K, Li M, Hanebeck U D. Towards high-performance solid-state-lidar-inertial odometry and mapping [J]. arXiv preprint arXiv:2010.13150, 2020 .
高性能固态雷达惯性里程计与建图
卡尔斯鲁厄理工学院;代码开源
4. Others
2020 年 10 月论文更新(22 篇)
本期更新于 2020 年 11 月 09 日
共 22 篇论文,其中 7 项(待)开源工作
9,10,11:SLAM 中动态物体跟踪,动态物体级 SLAM 今年很火
3,7,8,14,18:线段相关
1. Geometric SLAM
[1] Bhutta M, Kuse M, Fan R, et al. Loop-box: Multi-Agent Direct SLAM Triggered by Single Loop Closure for Large-Scale Mapping [J]. IEEE Transactions on Cybernetics, 2020 . (arXiv preprint arXiv:2009.13851)
用于大规模建图的由单闭环触发的多智能体直接 SLAM
香港科技大学;项目主页 ;video
期刊:中科院一区,JCR Q1,IF 11.079
[2] Zhou B, He Y, Qian K, et al. S4-SLAM: A real-time 3D LIDAR SLAM system for ground/watersurface multi-scene outdoor applications [J]. Autonomous Robots, 2020 : 1-22.
S4-SLAM:用于地面/水面多场景户外应用的实时 3D LIDAR SLAM 系统
东南大学;期刊:中科院三区,JCR Q1,IF 3.6
[3] Li Y, Yunus R, Brasch N, et al. RGB-D SLAM with Structural Regularities [J]. arXiv preprint arXiv:2010.07997, 2020 .
[4] Rodríguez J J G, Lamarca J, Morlana J, et al. SD-DefSLAM: Semi-Direct Monocular SLAM for Deformable and Intracorporeal Scenes [J]. arXiv preprint arXiv:2010.09409, 2020 .
SD-DefSLAM:适用于可变形和体内场景的半直接法单目 SLAM
萨拉戈萨大学;ICRA 2021 投稿论文;Video
[5] Millane A, Oleynikova H, Lanegger C, et al. Freetures: Localization in Signed Distance Function Maps [J]. arXiv preprint arXiv:2010.09378, 2020 .
[6] Long R, Rauch C, Zhang T, et al. RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects [J]. arXiv preprint arXiv:2010.10841, 2020 .
RigidFusion: 在具有大型动态刚体物体的环境中进行机器人定位与建图
爱丁堡大学机器人中心
[8] Han J, Dong R, Kan J. A novel loop closure detection method with the combination of points and lines based on information entropy [J]. Journal of Field Robotics. 2020
一种新的基于信息熵的点线闭环检测方法
北京林业大学;期刊:中科院二区,JCR Q1,IF 3.58
2. Semantic/Deep SLAM
[9] Bescos B, Campos C, Tardós J D, et al. DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM [J]. arXiv preprint arXiv:2010.07820, 2020 .
DynaSLAM II: 多目标跟踪与 SLAM 紧耦合
萨拉戈萨大学;一作是 DynaSLAM 的作者,二作是 ORB-SLAM3 的作者
[10] Bescos B, Cadena C, Neira J. Empty Cities: a Dynamic-Object-Invariant Space for Visual SLAM [J]. arXiv preprint arXiv:2010.07646, 2020 .
Empty Cities:用于视觉 SLAM 的动态物体不变空间
萨拉戈萨大学、ETH;代码开源
个人主页 ,相关论文:
ICRA 2018 Empty Cities: Image Inpainting for a Dynamic-Object-Invariant Space
[11] Ballester I, Fontan A, Civera J, et al. DOT: Dynamic Object Tracking for Visual SLAM [J]. arXiv preprint arXiv:2010.00052, 2020 .
[12] Wu S C, Tateno K, Navab N, et al. SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion [J]. arXiv preprint arXiv:2010.13662, 2020 .
SCFusion:具有语义补全的实时增量场景重建
TUM
[13] Mallick A, Stückler J, Lensch H. Learning to Adapt Multi-View Stereo by Self-Supervision [J]. arXiv preprint arXiv:2009.13278, 2020 .
3. Sensor Fusion
[14] Li X, Li Y, Ornek E P, et al. Co-Planar Parametrization for Stereo-SLAM and Visual-Inertial Odometry [J]. IEEE Robotics and Automation Letters, 2020 .
双目 SLAM 和 VIO 的共面参数化
北京大学,代码开源 (暂未放出)
[15] Liu Z, Zhang F. BALM: Bundle Adjustment for Lidar Mapping [J]. arXiv preprint arXiv:2010.08215, 2020 .
BALM:激光雷达建图中的 BA 优化
香港大学,代码开源
[16] Nguyen T M, Yuan S, Cao M, et al. VIRAL-Fusion: A Visual-Inertial-Ranging-Lidar Sensor Fusion Approach [J]. arXiv preprint arXiv:2010.12274, 2020 .
VIRAL-Fusion: 视觉-惯性-测距-激光雷达传感器融合方法
南洋理工
[17] Liu J, Gao W, Hu Z. Optimization-Based Visual-Inertial SLAM Tightly Coupled with Raw GNSS Measurements [J]. arXiv preprint arXiv:2010.11675, 2020 .
基于优化的视觉惯性 SLAM 与原始 GNSS 测量紧耦合
中科院自动化所;ICRA 2021 投稿论文
4. Others
[18] Taubner F, Tschopp F, Novkovic T, et al. LCD--Line Clustering and Description for Place Recognition [J]. arXiv preprint arXiv:2010.10867, 2020. (3DV 2020 )
LCD: 用于位置识别的线段聚类和描述
ETH;代码开源
[19] Triebel R. 3D Scene Reconstruction from a Single Viewport . ECCV 2020
[20] Hidalgo-Carrió J, Gehrig D, Scaramuzza D. Learning Monocular Dense Depth from Events [J]. arXiv preprint arXiv:2010.08350, 2020.(3DV 2020 )
[21] Yang B. Learning to reconstruct and segment 3D objects [J]. arXiv preprint arXiv:2010.09582, 2020 .
学习重建和分割 3D 物体
牛津大学 Bo Yang 博士学位论文
[22] von Stumberg L, Wenzel P, Yang N, et al. LM-Reloc: Levenberg-Marquardt Based Direct Visual Relocalization [J]. arXiv preprint arXiv:2010.06323, 2020 .
2020 年 9 月论文更新(20 篇)
本期更新于 2020 年 9 月 28 日
共 20 篇论文,其中 6 项(待)开源工作
4-5:机器人自主探索
8-11:多路标 SLAM
13: Jan Czarnowski 博士学位论文
17-20:增强现实相关的几项很好玩的工作
1. Geometric SLAM
[1] Zhao Y, Smith J S, Vela P A. Good graph to optimize: Cost-effective, budget-aware bundle adjustment in visual SLAM [J]. arXiv preprint arXiv:2008.10123, 2020 .
Good Graph to Optimize: 视觉 SLAM 中具有成本效益、可感知预算的 BA
佐治亚理工学院 Yipu Zhao
作者有很多 Good 系列的文章
IROS 2018 Good feature selection for least squares pose optimization in VO/VSLAM
ECCV 2018 Good line cutting: Towards accurate pose tracking of line-assisted VO/VSLAM
T-RO 2020 Good Feature Matching: Towards Accurate, Robust VO/VSLAM with Low Latency
[2] Fu Q, Yu H, Wang X, et al. FastORB-SLAM: a Fast ORB-SLAM Method with Coarse-to-Fine Descriptor Independent Keypoint Matching [J]. arXiv preprint arXiv:2008.09870, 2020 .
FastORB-SLAM: 一种采用由粗到精、不依赖描述符的关键点匹配的快速 ORB-SLAM 方法
湖南大学
[3] Wenzel P, Wang R, Yang N, et al. 4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving [J]. arXiv preprint arXiv:2009.06364, 2020 .
4Seasons:自动驾驶中多天气 SLAM 的跨季节数据集
慕尼黑工业大学 Nan Yang
数据集网页:http://www.4seasons-dataset.com/
[4] Duong T, Yip M, Atanasov N. Autonomous Navigation in Unknown Environments with Sparse Bayesian Kernel-based Occupancy Mapping [J]. arXiv preprint arXiv:2009.07207, 2020 .
[5] Bartolomei L, Karrer M, Chli M. Multi-robot Coordination with Agent-Server Architecture for Autonomous Navigation in Partially Unknown Environments [C]//IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020)(virtual). 2020 .
采用智能体-服务器架构的多机器人协同,用于部分未知环境中的自主导航
ETH | 代码开源 | video
[6] Kern A, Bobbe M, Khedar Y, et al. OpenREALM: Real-time Mapping for Unmanned Aerial Vehicles [J]. arXiv preprint arXiv:2009.10492, 2020 .
[7] Du Z J, Huang S S, Mu T J, et al. Accurate RGB-D SLAM in Dynamic Environment using Observationally Consistent Conditional Random Fields . 2020
动态环境中使用观察一致 CRF 的精确 RGB-D SLAM
清华大学
[8] Holynski A, Geraghty D, Frahm J M, et al. Reducing Drift in Structure from Motion using Extended Features [J]. arXiv preprint arXiv:2008.12295, 2020 .
使用拓展特征减小 SFM 中的漂移
华盛顿大学,Facebook
[9] Fu Q, Wang J, Yu H, et al. PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line [J]. arXiv preprint arXiv:2009.07462, 2020 .
PL-VINS: 实时点线单目视觉惯性 SLAM
湖南大学 | 代码开源
[10] Company-Corcoles J P, Garcia-Fidalgo E, Ortiz A. LiPo-LCD: Combining Lines and Points for Appearance-based Loop Closure Detection [J]. arXiv preprint arXiv:2009.09897, 2020.(BMVC 2020 )
[11] Wang Q, Yan Z, Wang J, et al. Line Flow based SLAM [J]. arXiv preprint arXiv:2009.09972, 2020 .
[12] Badias A, Alfaro I, Gonzalez D, et al. MORPH-DSLAM: Model Order Reduction for PHysics-based Deformable SLAM [J]. arXiv preprint arXiv:2009.00576, 2020 .
基于物理可变形 SLAM 降低模型阶数
萨拉戈萨大学
2. Semantic/Deep SLAM
[13] Czarnowski J. Learned representations for real-time monocular SLAM [J]. 2020.
[14] Li J, Pei L, Zou D, et al. Attention-SLAM: A Visual Monocular SLAM Learning from Human Gaze [J]. arXiv preprint arXiv:2009.06886, 2020 .
Attention-SLAM:从人类视线中学习的单目视觉 SLAM
上海交通大学
[15] Cremona J, Uzal L, Pire T. WGANVO: Monocular Visual Odometry based on Generative Adversarial Networks [J]. arXiv preprint arXiv:2007.13704, 2020 .
WGANVO: 基于生成对抗网络的单目视觉里程计
阿根廷 CIFASIS | 代码开源 | video
[16] Labbé Y, Carpentier J, Aubry M, et al. CosyPose: Consistent multi-view multi-object 6D pose estimation [J]. arXiv preprint arXiv:2008.08465, 2020 .(ECCV 2020)
CosyPose:一致的多视图多物体 6D 位姿估计
Object SLAM @ 物体位姿估计
3. AR/VR/MR
[17] Yang X, Zhou L, Jiang H, et al. Mobile3DRecon: Real-time Monocular 3D Reconstruction on a Mobile Phone [J]. IEEE Transactions on Visualization and Computer Graphics (ISMAR 2020), 2020.
Mobile3DRecon:手机上的实时单目三维重建
商汤、浙大
[18] Ungureanu D, Bogo F, Galliani S, et al. HoloLens 2 Research Mode as a Tool for Computer Vision Research[J]. arXiv preprint arXiv:2008.11239, 2020.
HoloLens 2 研究模式作为计算机视觉研究的工具
三星 AI 中心,微软
[19] Mori S, Erat O, Broll W, et al. InpaintFusion: Incremental RGB-D Inpainting for 3D Scenes [J]. IEEE Transactions on Visualization and Computer Graphics, 2020 , 26(10): 2994-3007.
InpaintFusion:3D 场景的增量式 RGB-D 修复
格拉茨工业大学 期刊:中科院二区,JCR Q1, IF 4.558
[20] AAR: Augmenting a Wearable Augmented Reality Display with an Actuated Head-Mounted Projector . 2020
使用可驱动的头戴式投影仪增强可穿戴的增强现实显示
滑铁卢大学
在 AR 眼镜上再装个投影仪。。。。会玩
2020 年 8 月论文更新(30 篇)
本期更新于 2020 年 8 月 27 日
共 30 篇论文,其中 11 项(待)开源工作
这个月公开的论文比较多,且有意思、高质量的工作也不少,多来自于 IROS、RAL(大部分也同步发表于 IROS),比如融合视觉、惯导、LiDAR 的 LIC-Fusion 2.0 和 融合物体语义的视惯里程计 OrcVIO,其他:
4-6、15:多机/多地图
8-13:结构化/室内 SLAM
1. Geometric SLAM
[1] Geppert M, Larsson V, Speciale P, et al. Privacy Preserving Structure-from-Motion [J]. 2020 .
具有隐私保护的 SFM
苏黎世联邦理工
相关论文:
[2] Zhang Z, Scaramuzza D. Fisher Information Field: an Efficient and Differentiable Map for Perception-aware Planning [J]. arXiv preprint arXiv:2008.03324, 2020 .
[3] Zhou Y, Gallego G, Shen S. Event-based Stereo Visual Odometry [J]. arXiv preprint arXiv:2007.15548, 2020 .
[4] Yue Y, Zhao C, Wu Z, et al. Collaborative Semantic Understanding and Mapping Framework for Autonomous Systems [J]. IEEE/ASME Transactions on Mechatronics, 2020 .
自治系统协作式语义理解和建图框架
南洋理工大学 | 期刊:中科院二区,JCR Q1,IF 5.6
[5] Do H, Hong S, Kim J. Robust Loop Closure Method for Multi-Robot Map Fusion by Integration of Consistency and Data Similarity [J]. IEEE Robotics and Automation Letters, 2020 , 5(4): 5701-5708.
[6] Zhan Z, Jian W, Li Y, et al. A SLAM Map Restoration Algorithm Based on Submaps and an Undirected Connected Graph [J]. arXiv preprint arXiv:2007.14592, 2020 .
基于子图和无向连通图的 SLAM 地图恢复算法
武汉大学
[7] Chen H, Zhang G, Ye Y. Semantic Loop Closure Detection with Instance-Level Inconsistency Removal in Dynamic Industrial Scenes [J]. IEEE Transactions on Industrial Informatics, 2020 .
动态工业场景中具有实例级不一致消除功能的语义闭环检测
厦门大学 | 期刊:中科院一区,JCR Q1,IF 9.1
[8] Li Y, Brasch N, Wang Y, et al. Structure-SLAM: Low-Drift Monocular SLAM in Indoor Environments [J]. IEEE Robotics and Automation Letters, 2020 , 5(4): 6583-6590.
Structure-SLAM:室内环境中的低漂移单目 SLAM
TUM | 代码开源
[9] Liu J, Meng Z. Visual SLAM with Drift-Free Rotation Estimation in Manhattan World [J]. IEEE Robotics and Automation Letters, 2020 .
[10] Hou J, Yu L, Fei S. A highly robust automatic 3D reconstruction system based on integrated optimization by point line features [J]. Engineering Applications of Artificial Intelligence, 2020 , 95: 103879.
基于点线联合优化的自动三维重建
苏州大学、东南大学 | 期刊:中科院二区,JCR Q1,IF 4.2
[11] Li H, Kim P, Zhao J, et al. Globally Optimal and Efficient Vanishing Point Estimation in Atlanta World [J]. 2020 .
亚特兰大世界中全局最优且高效的消失点估计
港中文,西蒙弗雷泽大学
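消失点估计的一个最小示意:把每条直线写成齐次形式,一组直线的公共交点即堆叠矩阵的最小二乘零空间,可用 SVD 求解(示意代码,非论文的全局最优算法):

```python
import numpy as np

def line_through(p, q):
    """过两点的齐次直线(齐次点的叉积)"""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(lines):
    """最小二乘求一组齐次直线的公共交点(堆叠矩阵的右零向量)"""
    L = np.array([l / np.linalg.norm(l[:2]) for l in lines])  # 归一化各直线
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]                 # 最小奇异值对应的右奇异向量
    return v[:2] / v[2]        # 去齐次化
```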
[12] Wang X, Christie M, Marchand E. Relative Pose Estimation and Planar Reconstruction via Superpixel-Driven Multiple Homographies [C]//IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'20 . 2020.
通过超像素驱动的多个单应性的相对姿势估计和平面重建
雷恩大学
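与该工作相关的基础操作是单应矩阵估计,下面示意标准的 DLT(直接线性变换)解法(示意代码,省略了坐标归一化等工程细节):

```python
import numpy as np

def homography_dlt(src, dst):
    """DLT:由 >=4 对对应点求 3x3 单应 H,满足 dst ~ H @ src"""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)   # 最小奇异值方向即解
    return H / H[2, 2]

def apply_h(H, p):
    """用单应变换一个 2D 点(齐次乘法 + 去齐次化)"""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```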
[13] Zuñiga-Noël D, Jaenal A, Gomez-Ojeda R, et al. The UMA-VI dataset: Visual–inertial odometry in low-textured and dynamic illumination environments [J]. The International Journal of Robotics Research, 2020 : 0278364920938439.
UMA-VI 数据集:在低纹理和动态照明环境中的视觉惯性里程计
马拉加大学 | PL-SLAM 的作者 | 期刊:中科院二区,JCR Q1,IF 4.0 | 数据集地址
2. Sensor Fusion
[14] Zuo X, Yang Y, Geneva P, et al. LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking [J]. arXiv preprint arXiv:2008.07196, 2020 .
[15] Alliez P, Bonardi F, Bouchafa S, et al. Real-Time Multi-SLAM System for Agent Localization and 3D Mapping in Dynamic Scenarios [C]//International Confererence on Intelligent Robots and Systems (IROS 2020). 2020 .
实时 Multi-SLAM 系统,用于动态场景中智能体定位和 3D 建图
法国国家信息与自动化研究所
[16] Shao X, Zhang L, Zhang T, et al. A Tightly-coupled Semantic SLAM System with Visual, Inertial and Surround-view Sensors for Autonomous Indoor Parking [J].2020 .
具有视觉、惯性和全景传感器的紧密耦合语义 SLAM 系统,用于自主室内停车
同济大学
[17] Shan M, Feng Q, Atanasov N. OrcVIO: Object residual constrained Visual-Inertial Odometry [J]. arXiv preprint arXiv:2007.15107, 2020.(IROS 2020 )
[18] Seok H, Lim J. ROVINS: Robust Omnidirectional Visual Inertial Navigation System [J]. IEEE Robotics and Automation Letters, 2020 .
[19] Liu W, Caruso D, Ilg E, et al. TLIO: Tight Learned Inertial Odometry [J]. IEEE Robotics and Automation Letters, 2020 , 5(4): 5653-5660.
[20] Sartipi K, Do T, Ke T, et al. Deep Depth Estimation from Visual-Inertial SLAM [J]. arXiv preprint arXiv:2008.00092, 2020 .
视觉惯性 SLAM 深度估计
明尼苏达大学 | 代码开源
3. Semantic/Deep SLAM
[21] Gomez C, Hernandez A C, Derner E, et al. Object-Based Pose Graph for Dynamic Indoor Environments [J]. IEEE Robotics and Automation Letters, 2020 , 5(4): 5401-5408.
[22] Wang H, Wang C, Xie L. Online Visual Place Recognition via Saliency Re-identification [J]. arXiv preprint arXiv:2007.14549, 2020.(IROS 2020 )
通过显著性重识别进行在线视觉位置识别
CMU | 代码开源
[23] Jau Y Y, Zhu R, Su H, et al. Deep Keypoint-Based Camera Pose Estimation with Geometric Constraints [J]. arXiv preprint arXiv:2007.15122, 2020.(IROS 2020 )
具有几何约束的基于深度关键点的相机姿势估计
加州大学圣地亚哥分校 | 代码开源
[24] Gong X, Liu Y, Wu Q, et al. An Accurate, Robust Visual Odometry and Detail-preserving Reconstruction System [J]. 2020 .
准确、鲁棒的视觉里程计和保留细节的重建系统
南京航空航天大学 | 代码开源 (暂未公开)
[25] Wei P, Hua G, Huang W, et al. Unsupervised Monocular Visual-inertial Odometry Network [C]. IJCAI 2020.
无监督单目视惯里程计网络
北大 | 代码开源 (暂未公开)
[26] Li D, Shi X, Long Q, et al. DXSLAM: A Robust and Efficient Visual SLAM System with Deep Features [J]. arXiv preprint arXiv:2008.05416, 2020.(IROS 2020 )
DXSLAM:基于深度特征的鲁棒高效视觉 SLAM 系统
清华大学 | 代码开源
4. AR/VR/MR
[27] Tahara T, Seno T, Narita G, et al. Retargetable AR: Context-aware Augmented Reality in Indoor Scenes based on 3D Scene Graph [J]. arXiv preprint arXiv:2008.07817, 2020 .
可重定位的 AR:基于 3D 场景图的室内情境感知增强现实
索尼
[28] Li X, Tian Y, Zhang F, et al. Object Detection in the Context of Mobile Augmented Reality [J]. arXiv preprint arXiv:2008.06655, 2020.(ISMAR 2020 )
[29] Liu C, Shen S. An Augmented Reality Interaction Interface for Autonomous Drone [J]. arXiv preprint arXiv:2008.02234, 2020.(IROS 2020 )
[30] Du R, Turner E L, Dzitsiuk M, et al. DepthLab: Real-time 3D Interaction with Depth Maps for Mobile Augmented Reality [J]. 2020.
使用深度地图实时 3D 交互的移动增强现实
Google
2020 年 7 月论文更新(20 篇)
本期更新于 2020 年 7 月 27 日
共 20 篇论文,其中 8 项(待)开源工作
本月月初 ECCV,IROS 放榜,不少新论文出现
2 隐私保护的视觉 SLAM,11 秦通大佬的 AVP-SLAM
月底 ORB-SLAM3 又制造了大新闻,谷歌学术都没来得及收录,国内公众号都出解析了
1. Geometric SLAM
[1] Campos C, Elvira R, et al. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM [J]. arXiv preprint arXiv:2007.11898, 2020.
ORB-SLAM3: 集视觉、视惯、多地图 SLAM 于一体
西班牙萨拉戈萨大学 | 代码开源 | Video
[2] Shibuya M, Sumikura S, Sakurada K. Privacy Preserving Visual SLAM [J]. arXiv preprint arXiv:2007.10361, 2020.(ECCV 2020 )
[3] Tompkins A, Senanayake R, Ramos F. Online Domain Adaptation for Occupancy Mapping [J]. arXiv preprint arXiv:2007.00164, 2020.(RSS 2020 )
[4] Li Y, Zhang T, Nakamura Y, et al. SplitFusion: Simultaneous Tracking and Mapping for Non-Rigid Scenes [J]. arXiv preprint arXiv:2007.02108, 2020.(IROS 2020 )
SplitFusion:非刚性场景的 SLAM
东京大学
[5] Dai W, Zhang Y, Li P, et al. RGB-D SLAM in Dynamic Environments Using Points Correlations [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020.
动态环境中使用点关联的 RGB-D SLAM
浙江大学;期刊:PAMI 中科院一区,JCR Q1,IF 17.86
[6] Huang H, Ye H, Sun Y, et al. GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Models [J]. IEEE Robotics and Automation Letters, 2020 .
高斯混合模型的结构一致视觉定位
香港科技大学 | 代码开源
2. Sensor Fusion
[7] Zuo X, Ye W, Yang Y, et al. Multimodal localization: Stereo over LiDAR map [J]. Journal of Field Robotics, 2020 .
多模式定位:在 LiDAR 先验地图中使用双目相机定位
浙江大学、特拉华大学 | 作者谷歌学术 | 期刊:中科院二区,JCR Q2,IF 4.19
[8] Shan T, Englot B, Meyers D, et al. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping [J]. arXiv preprint arXiv:2007.00258, 2020.(IROS 2020 )
[9] Rozenberszki D, Majdik A. LOL: Lidar-Only Odometry and Localization in 3D Point Cloud Maps [J]. arXiv preprint arXiv:2007.01595, 2020.(ICRA 2020 )
3D 点云地图中仅激光雷达的里程计和定位
匈牙利科学院机器感知实验室 | 代码开源 | Video
[10] You R, Hou H. Real-Time Pose Estimation by Fusing Visual and Inertial Sensors for Autonomous Driving [J]. 2020 .
通过融合视觉和惯性传感器进行自动驾驶的实时位姿估计
瑞典查尔默斯理工大学 硕士学位论文 | 代码开源
3. Semantic/Deep SLAM
[11] Qin T, Chen T, Chen Y, et al. AVP-SLAM: Semantic Visual Mapping and Localization for Autonomous Vehicles in the Parking Lot [J]. arXiv preprint arXiv:2007.01813, 2020 .(IROS 2020 )
AVP-SLAM:停车场中自动驾驶车辆的语义 SLAM
华为秦通 | 知乎文章
[12] Gomez C, Silva A C H, Derner E, et al. Object-Based Pose Graph for Dynamic Indoor Environments [J]. IEEE Robotics and Automation Letters, 2020 .
动态室内环境中基于物体的位姿图
西班牙马德里卡洛斯三世大学
[13] Costante G, Mancini M. Uncertainty Estimation for Data-Driven Visual Odometry [J]. IEEE Transactions on Robotics, 2020 .
数据驱动视觉里程计的不确定性估计
意大利佩鲁贾大学 | 代码开源 (还未放出) | 期刊:中科院二区,JCR Q1,IF 7.0
[14] Min Z, Yang Y, Dunn E. VOLDOR: Visual Odometry From Log-Logistic Dense Optical Flow Residuals [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR 2020 : 4898-4909.
对数逻辑密集光流残差的视觉里程计
史蒂文斯技术学院 | 代码开源 (还未放出)
[15] Zou Y, Ji P, Tran Q H, et al. Learning Monocular Visual Odometry via Self-Supervised Long-Term Modeling [J]. arXiv preprint arXiv:2007.10983, 2020.(ECCV 2020 )
自监督长期建模学习单目视觉里程计
弗吉尼亚理工大学 | 项目主页 | 代码 Coming soon
[16] Wei P, Hua G, Huang W, et al. Unsupervised Monocular Visual-inertial Odometry Network [C]. IJCAI 2020.
无监督单目视惯里程计
北京大学 | 代码开源 (还未放出)
4. AR/VR/MR
5. Others
2020 年 6 月论文更新(20 篇)
本期更新于 2020 年 6 月 27 日
共 20 篇论文,其中 3 项开源工作
4,5,6,12,13 线、边、平面、物体多路标 SLAM
2,3 多机器人 SLAM
7,16 拓扑相关
11:深度学习用于定位和建图的调研
1. Geometric SLAM
[1] Zhang T, Zhang H, Li Y, et al. FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow [C]. ICRA 2020 .
FlowFusion:基于光流的动态稠密 RGB-D SLAM
东京大学;作者谷歌学术
[2] Lajoie P Y, Ramtoula B, Chang Y, et al. DOOR-SLAM: Distributed, online, and outlier resilient SLAM for robotic teams [J]. IEEE Robotics and Automation Letters, 2020 , 5(2): 1656-1663.
适用于机器人团队的分布式、在线、可抵御异常值的 SLAM
加拿大蒙特利尔理工学院;代码开源
[3] Chakraborty K, Deegan M, Kulkarni P, et al. JORB-SLAM: A Jointly optimized Multi-Robot Visual SLAM [J].
多机器人 SLAM 联合优化
密歇根大学机器人研究所
[4] Zhang H, Ye C. Plane-Aided Visual-Inertial Odometry for 6-DOF Pose Estimation of a Robotic Navigation Aid [J]. IEEE Access, 2020 , 8: 90042-90051.
用于机器人导航 6 自由度位姿估计的平面辅助 VIO
弗吉尼亚联邦大学;开源期刊;谷歌学术
[5] Ali A J B, Hashemifar Z S, Dantu K. Edge-SLAM: edge-assisted visual simultaneous localization and mapping [C]//Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services. 2020 : 325-337.
Edge-SLAM: 边辅助的视觉 SLAM
布法罗大学
[6] Mateus A, Ramalingam S, Miraldo P. Minimal Solvers for 3D Scan Alignment With Pairs of Intersecting Lines [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020 : 7234-7244.
利用成对相交线进行 3D 扫描对齐的最小求解器
葡萄牙里斯本大学,谷歌
[7] Xue W, Ying R, Gong Z, et al. SLAM Based Topological Mapping and Navigation [C]//2020 IEEE/ION Position, Location and Navigation Symposium (PLANS). IEEE, 2020 : 1336-1341.
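拓扑建图把环境抽象成节点-边图,导航即图搜索;下面用邻接表 + BFS 给出一个最小示意(节点名称为假设示例,非论文实现):

```python
from collections import deque

def shortest_route(adj, start, goal):
    """在邻接表 {节点: [邻居]} 表示的拓扑地图上做 BFS 最短路径搜索"""
    prev = {start: None}           # 记录每个节点的前驱,兼作"已访问"集合
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:           # 到达目标,回溯前驱得到路径
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None                    # 不可达
```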
2. Sensor Fusion
3. Semantic/Deep SLAM
[11] Chen C, Wang B, Lu C X, et al. A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence [J]. arXiv preprint arXiv:2006.12567, 2020 .
深度学习用于定位和建图的调研:走向空间机器智能时代
牛津大学;所有涉及到的论文的列表 :Github
[12] Li J, Koreitem K, Meger D, et al. View-Invariant Loop Closure with Oriented Semantic Landmarks [C]. ICRA 2020.
[13] Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems [J]. IEEE Access, 2020 .
VPS-SLAM:航空机器人的视觉平面语义 SLAM
马德里理工大学自动化与机器人研究中心,MIT 航空航天控制实验室
代码开源
[14] Shi T, Cui H, Song Z, et al. Dense Semantic 3D Map Based Long-Term Visual Localization with Hybrid Features [J]. arXiv preprint arXiv:2005.10766, 2020 .
基于稠密 3D 语义地图与混合特征的长期视觉定位
中科院自动化所
[15] Metrically-Scaled Monocular SLAM using Learned Scale Factors [C]. ICRA 2020 Best Paper Award in Robot Vision
通过学习尺度因子的单目度量 SLAM
MIT;作者主页
[16] Chaplot D S, Salakhutdinov R, Gupta A, et al. Neural Topological SLAM for Visual Navigation [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR 2020 : 12875-12884.
[17] Min Z, Yang Y, Dunn E. VOLDOR: Visual Odometry From Log-Logistic Dense Optical Flow Residuals [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR 2020 : 4898-4909.
基于对数逻辑稠密光流残差的视觉里程计
史蒂文斯理工学院;代码开源 (还未放出)
[18] Loo S Y, Mashohor S, Tang S H, et al. DeepRelativeFusion: Dense Monocular SLAM using Single-Image Relative Depth Prediction [J]. arXiv preprint arXiv:2006.04047, 2020 .
使用单幅图像相对深度预测的单目稠密 SLAM
哥伦比亚大学
4. AR/VR/MR
[19] Choudhary S, Sekhar N, Mahendran S, et al. Multi-user, Scalable 3D Object Detection in AR Cloud [C]. CVPR Workshop on Computer Vision for Augmented and Virtual Reality, Seattle, WA, 2020.
AR 云进行多用户可扩展的 3D 目标检测
Magic Leap ;项目主页
[20] Tang F, Wu Y, Hou X, et al. 3D Mapping and 6D Pose Computation for Real Time Augmented Reality on Cylindrical Objects [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2019 .
圆柱物体上实时增强现实的 3D 建图和 6D 位姿计算
中科院自动化所
2020 年 5 月论文更新(20 篇)
本期更新于 2020 年 5 月 23 日
共 20 篇论文,其中 5 项开源工作
最近不知道是不是受疫情影响,论文好像有点少了
Voxgraph:苏黎世理工开源的实时体素建图
Neural-SLAM:CMU 开源的主动神经网络
1. Geometric SLAM
[1] Wang W, Zhu D, Wang X, et al. TartanAir: A Dataset to Push the Limits of Visual SLAM [J]. arXiv preprint arXiv:2003.14338, 2020 .
TartanAir:突破视觉 SLAM 极限的数据集
CMU,港中文;数据集公开:http://theairlab.org/tartanair-dataset/
朱德龙师兄参与的一项工作,上个月推荐过了,这个月刚完善网站再推荐一遍,并在 CVPR 2020 组织了 workshop
[2] Reijgwart V, Millane A, Oleynikova H, et al. Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps [J]. IEEE Robotics and Automation Letters, 2019 , 5(1): 227-234.
使用 SDF 子图的全局一致体素建图
苏黎世联邦理工;代码开源
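Voxgraph 的核心数据结构是 SDF 子图;下面用 numpy 示意在体素网格上采样球面 SDF,并用逐体素取最小值近似两个子图表面的并集(示意代码,与 Voxgraph 的 TSDF/ESDF 实现无关,尺寸均为示例):

```python
import numpy as np

def sphere_sdf(center, radius, voxels):
    """球面的符号距离:对 (N,3) 体素中心逐点求值,球内为负"""
    return np.linalg.norm(voxels - center, axis=-1) - radius

# 8x8x8 体素网格的中心坐标(间距为 1)
voxels = np.stack(np.meshgrid(*[np.arange(8.0)] * 3, indexing='ij'),
                  axis=-1).reshape(-1, 3)
sdf_a = sphere_sdf(np.array([2.0, 2.0, 2.0]), 1.5, voxels)  # 子图 A
sdf_b = sphere_sdf(np.array([5.0, 5.0, 5.0]), 1.5, voxels)  # 子图 B
fused = np.minimum(sdf_a, sdf_b)  # 逐体素取最小值即两个表面的并集
```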
[3] Fontán A, Civera J, Triebel R. Information-Driven Direct RGB-D Odometry [J].2020.
信息驱动的直接法 RGB-D SLAM
萨拉戈萨大学, TUM
[4] Murai R, Saeedi S, Kelly P H J. BIT-VO: Visual Odometry at 300 FPS using Binary Features from the Focal Plane [J]. arXiv preprint arXiv:2004.11186, 2020 .
BIT-VO:使用焦平面的二进制特征以 300 FPS 运行的视觉里程计
帝国理工 项目主页、演示视频
[5] Du S, Guo H, Chen Y, et al. GPO: Global Plane Optimization for Fast and Accurate Monocular SLAM Initialization [J]. ICRA 2020 .
准确快速单目 SLAM 初始化的全局平面优化
中科院自动化所,字节跳动
[6] Li F, Fu C, Gostar A K, et al. Advanced Mapping Using Planar Features Segmented from 3D Point Clouds [C]//2019 International Conference on Control, Automation and Information Sciences (ICCAIS). IEEE, 2019 : 1-6.
[7] Zou Y, Chen L, Jiang J. Lightweight Indoor Modeling Based on Vertical Planes and Lines [C]//2020 11th International Conference on Information and Communication Systems (ICICS). IEEE, 2020 : 136-142.
基于垂直平面和线段的室内轻量化建图
国防科大;ICICS:CCF C 类会议
[8] Nobis F, Papanikolaou O, Betz J, et al. Persistent Map Saving for Visual Localization for Autonomous Vehicles: An ORB-SLAM Extension [J]. arXiv preprint arXiv:2005.07429, 2020 .
ORB-SLAM2 的拓展应用:永久保存车辆视觉定位的地图
TUM 汽车技术研究所;代码开源
2. Sensor Fusion
[9] Li X, He Y, Lin J, et al. Leveraging Planar Regularities for Point Line Visual-Inertial Odometry [J]. arXiv preprint arXiv:2004.11969, 2020 .
利用平面规律的点线 VIO
北京大学;IROS 2020 投稿论文
[10] Liu J, Gao W, Hu Z. Bidirectional Trajectory Computation for Odometer-Aided Visual-Inertial SLAM [J]. arXiv preprint arXiv:2002.00195, 2020 .
里程计辅助视惯 SLAM 的双向轨迹计算
中科院自动化所;解决 SLAM 在转弯之后容易退化的问题
[11] Liu R, Marakkalage S H, Padmal M, et al. Collaborative SLAM based on Wifi Fingerprint Similarity and Motion Information [J]. IEEE Internet of Things Journal, 2019 .
基于 Wifi 指纹相似度和运动信息的协作式 SLAM
新加坡科技设计大学;期刊:中科院一区,JCR Q1,IF 11.2
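WiFi 指纹相似度可以简单地用 RSSI 向量的余弦相似度来度量,下面是一个最小示意(指纹数据为假设的非负信号强度,非论文实现):

```python
import numpy as np

def cosine_sim(a, b):
    """两个指纹向量的余弦相似度"""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query, database):
    """返回与查询指纹余弦相似度最高的位置标签"""
    return max(database, key=lambda k: cosine_sim(query, database[k]))
```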
[12] Jung J H, Heo S, Park C G. Observability Analysis of IMU Intrinsic Parameters in Stereo Visual-Inertial Odometry [J]. IEEE Transactions on Instrumentation and Measurement, 2020 .
双目视觉惯性里程计中 IMU 内参的可观性分析
韩国首尔大学;期刊:中科院三区,JCR Q2,IF 3.0
3. Semantic/Deep SLAM
[13] Wu Y, Zhang Y, Zhu D, et al. EAO-SLAM: Monocular Semi-Dense Object SLAM Based on Ensemble Data Association [J]. arXiv preprint arXiv:2004.12730, 2020 .
基于集合数据关联的单目半稠密物体级 SLAM
东北大学,港中文;IROS 2020 投稿论文
厚着脸皮推荐一下自己的工作,欢迎批评指正
代码开源 (还未公开);演示视频:YouTube | bilibili
[14] Vasilopoulos V, Pavlakos G, Schmeckpeper K, et al. Reactive Navigation in Partially Familiar Planar Environments Using Semantic Perceptual Feedback [J]. arXiv preprint arXiv:2002.08946, 2020 .
使用语义感知反馈的部分熟悉平面环境中的反应性导航
宾夕法尼亚大学
[15] Chaplot D S, Gandhi D, Gupta S, et al. Learning to explore using active neural slam [C]. ICLR 2020 .
[16] Li S, Wang X, Cao Y, et al. Self-Supervised Deep Visual Odometry with Online Adaptation [C]. CVPR. 2020.
[17] Li W, Gu J, Chen B, et al. Incremental Instance-Oriented 3D Semantic Mapping via RGB-D Cameras for Unknown Indoor Scene [J]. Discrete Dynamics in Nature and Society, 2020, 2020 .
RGB-D 相机室内增量式三维实例语义建图
河北工业大学;期刊:中科院三区,JCR Q3Q4 开源期刊
[18] Tiwari L, Ji P, Tran Q H, et al. Pseudo RGB-D for Self-Improving Monocular SLAM and Depth Prediction [J]. arXiv preprint arXiv:2004.10681, 2020 .
伪 RGB-D 用于改善单目 SLAM 和深度预测 (单目 SLAM + 单目深度估计)
印度德里 Indraprastha 信息技术学院(IIIT-Delhi)
4. Others
2020 年 4 月论文更新(22 篇)
本期更新于 2020 年 4 月 25 日
共 22 篇论文,其中 7 项开源工作, 1 项公开数据集;
2、8、12 跟线段有关
9、10 VIO 相关
TartanAir 突破视觉 SLAM 极限的数据集,投稿于 IROS 2020
VPS-SLAM 平面语义 SLAM 比较有意思,代码开源
1. Geometric SLAM
[1] Wang W, Zhu D, Wang X, et al. TartanAir: A Dataset to Push the Limits of Visual SLAM [J]. arXiv preprint arXiv:2003.14338, 2020 .
TartanAir:突破视觉 SLAM 极限的数据集
CMU,港中文;数据集公开:http://theairlab.org/tartanair-dataset/
朱德龙师兄的工作,置顶推荐一下
[2] Gomez-Ojeda R. Robust Visual SLAM in Challenging Environments with Low-texture and Dynamic Illumination [J]. 2020 .
低纹理和动态光照挑战环境下的鲁棒视觉 SLAM
西班牙马拉加大学,点线 SLAM 作者的博士学位论文
[3] Yang S, Li B, Cao Y P, et al. Noise-resilient reconstruction of panoramas and 3D scenes using robot-mounted unsynchronized commodity RGB-D cameras [J]. ACM Transactions on Graphics, 2020 .
使用安装在机器人上的未同步消费级 RGB-D 相机对全景图和三维场景进行抗噪重建
清华大学胡事民教授,期刊:中科院二区,JCR Q1,IF 7.176
[4] Huang J, Yang S, Mu T J, et al. ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings [J]. arXiv preprint arXiv:2003.12980, 2020 .
[5] Quenzel J, Rosu R A, Läbe T, et al. Beyond Photometric Consistency: Gradient-based Dissimilarity for Improving Visual Odometry and Stereo Matching [C]. International Conference on Robotics and Automation (ICRA), 2020 .
[6] Yang Y, Tang D, Wang D, et al. Multi-camera visual SLAM for off-road navigation [J]. Robotics and Autonomous Systems, 2020 : 103505.
[7] Cheng W. Methods for large-scale image-based localization using structure-from-motion point clouds [J]. 2020 .
利用 SFM 点云在大规模环境下的基于图像的定位
南洋理工大学博士学位论文;相关代码
[8] Sun T, Song D, Yeung D Y, et al. Semi-semantic Line-Cluster Assisted Monocular SLAM for Indoor Environments [C]//International Conference on Computer Vision Systems. Springer, Cham, 2019 : 63-74.
室内环境中半语义线段簇辅助单目 SLAM
香港科技大学机器人与多感知实验室 RAM-LAB
2. Sensor Fusion
[9] Nagy B, Foehn P, Scaramuzza D. Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO [J]. arXiv preprint arXiv:2003.13493, 2020 .
用于高速 VIO 的前端 GPU 加速
苏黎世大学、苏黎世联邦理工;代码开源
[10] Li J, Yang B, Huang K, et al. Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors [C]//Chinese Conference on Pattern Recognition and Computer Vision (PRCV). Springer, Cham, 2019 : 283-295.
具有多平面先验的稳健高效的视觉惯性里程计
浙大 CAD&CG 实验室,章国峰;章老师主页 上是显示将会开源
[11] Debeunne C, Vivet D. A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping [J]. Sensors, 2020 , 20(7): 2068.
视觉-激光 SLAM 综述
图卢兹大学;开源期刊,中科院三区,JCR Q2Q3
[12] Yu H, Zhen W, Yang W, et al. Monocular Camera Localization in Prior LiDAR Maps with 2D-3D Line Correspondences [J]. arXiv preprint arXiv:2004.00740, 2020 .
在先验雷达地图中通过 2D-3D 线段关联实现单目视觉定位
CMU,武汉大学;代码开源
3. Semantic/Deep SLAM
[13] Bavle H, De La Puente P, How J, et al. VPS-SLAM: Visual Planar Semantic SLAM for Aerial Robotic Systems [J]. IEEE Access, 2020 .
VPS-SLAM:航空机器人的视觉平面语义 SLAM
马德里理工大学自动化与机器人研究中心,MIT 航空航天控制实验室
代码开源 ,视频
[14] Liao Z, Wang W, Qi X, et al. Object-oriented SLAM using Quadrics and Symmetry Properties for Indoor Environments [J]. arXiv preprint arXiv:2004.05303, 2020 .
室内环境中使用二次曲面和对称性的面向物体的 SLAM
北航;代码开源 ,视频
[15] Ma Q M, Jiang G, Lai D Z. Robust Line Segments Matching via Graph Convolution Networks [J]. arXiv preprint arXiv:2004.04993, 2020 .
[16] Li R, Wang S, Gu D. DeepSLAM: A Robust Monocular SLAM System with Unsupervised Deep Learning [J]. IEEE Transactions on Industrial Electronics, 2020 .
4. AR/VR/MR
5. Others
[20] Sengupta S, Jayaram V, Curless B, et al. Background Matting: The World is Your Green Screen [J]. arXiv preprint arXiv:2004.00626, 2020 .
[21] Wang L, Wei H. Avoiding non-Manhattan obstacles based on projection of spatial corners in indoor environment [J]. IEEE/CAA Journal of Automatica Sinica, 2020 .
室内环境中基于空间角投影避免非曼哈顿障碍物
北大、上海理工、复旦;期刊:自动化学报英文版
[22] Spencer J, Bowden R, Hadfield S. Same Features, Different Day: Weakly Supervised Feature Learning for Seasonal Invariance [J]. arXiv preprint arXiv:2003.13431, 2020 .
不同时间的相同特征:季节性不变的弱监督特征学习
英国萨里大学;代码开源 (还未放出)
2020 年 3 月论文更新(23 篇)
本期 23 篇论文,其中 7 项开源工作;
1、2 多相机 SLAM 系统
9、10 VIO
21、22 3D 目标检测
12-19 八篇跟 semantic/deep learning 有关,趋势?
注:没有特意整理 CVPR,ICRA 新的论文,大部分都半年前就有预印版了,在这个仓库里基本上也早收录了
2020 年 3 月 29 日更新
1. Geometric SLAM
[1] Kuo J, Muglikar M, Zhang Z, et al. Redesigning SLAM for Arbitrary Multi-Camera Systems [C]. ICRA 2020 .
[2] Won C, Seok H, Cui Z, et al. OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [J]. arXiv preprint arXiv:2003.08056, 2020 .
OmniSLAM:宽基线和多相机的全向定位和建图
韩国汉阳大学计算机科学系
[3] Colosi M, Aloise I, Guadagnino T, et al. Plug-and-Play SLAM: A Unified SLAM Architecture for Modularity and Ease of Use [J]. arXiv preprint arXiv:2003.00754, 2020 .
即插即用型 SLAM:模块化且易用的 SLAM 统一框架
意大利罗马萨皮恩扎大学;代码开源
作者之前一篇类似的文章,教你怎么模块化一个 SLAM 系统:
[4] Wu X, Vela P, Pradalier C. Robust Monocular Edge Visual Odometry through Coarse-to-Fine Data Association [J].2020 .
通过从粗到细的数据关联实现鲁棒的单目基于边的视觉里程计
佐治亚理工学院
[5] Rosinol A, Gupta A, Abate M, et al. 3D Dynamic Scene Graphs: Actionable Spatial Perception with Places, Objects, and Humans [J]. arXiv preprint arXiv:2002.06289, 2020 .
[6] Zeng T, Li X, Si B. StereoNeuroBayesSLAM: A Neurobiologically Inspired Stereo Visual SLAM System Based on Direct Sparse Method [J]. arXiv preprint arXiv:2003.03091, 2020 .
[7] Oleynikova H, Taylor Z, Siegwart R, et al. Sparse 3d topological graphs for micro-aerial vehicle planning [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018 : 1-9.
微型飞行器路径规划的稀疏 3D 拓扑图
苏黎世联邦理工;作者主页 ;路径规划与建图部分代码开源 ,相关论文:
[8] Ye H, Huang H, Liu M. Monocular Direct Sparse Localization in a Prior 3D Surfel Map [J]. arXiv preprint arXiv:2002.09923, 2020 .
2. Sensor Fusion
3. Semantic/Deep SLAM
[12] Landgraf Z, Falck F, Bloesch M, et al. Comparing View-Based and Map-Based Semantic Labelling in Real-Time SLAM [J]. arXiv preprint arXiv:2002.10342, 2020 .
在实时 SLAM 中比较基于视图和基于地图的语义标签
帝国理工学院计算机系戴森机器人实验室
[13] Singh G, Wu M, Lam S K. Fusing Semantics and Motion State Detection for Robust Visual SLAM [C]//The IEEE Winter Conference on Applications of Computer Vision. 2020 : 2764-2773.
融合语义和运动状态检测以实现鲁棒的视觉 SLAM
南洋理工大学
[14] Gupta A, Iyer G, Kodgule S. DeepEvent-VO: Fusing Intensity Images and Event Streams for End-to-End Visual Odometry [J].
DeepEvent-VO:融合强度图像和事件流的端到端视觉里程计
CMU;代码开源
[15] Wagstaff B, Peretroukhin V, Kelly J. Self-Supervised Deep Pose Corrections for Robust Visual Odometry [J]. arXiv preprint arXiv:2002.12339, 2020 .
鲁棒视觉里程计的自监督深度位姿矫正
多伦多大学 STARS 实验室;代码开源
[16] Ye X, Ji X, Sun B, et al. DRM-SLAM: Towards dense reconstruction of monocular SLAM with scene depth fusion [J]. Neurocomputing, 2020.
DRM-SLAM:通过场景深度融合实现单目 SLAM 的稠密重建
大连理工大学;期刊:中科院二区,JCR Q1,IF 3.824
[17] Yang N, von Stumberg L, Wang R, et al. D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry [C]. CVPR 2020 .
D3VO:单目视觉里程计中针对深度、位姿和不确定性的深度网络
TUM 计算机视觉组;个人主页
[18] Chen C, Rosa S, Miao Y, et al. Selective sensor fusion for neural visual-inertial odometry [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019 : 10542-10551.
[19] Towards the Probabilistic Fusion of Learned Priors into Standard Pipelines for 3D Reconstruction [C]. ICRA 2020 .
4. AR/VR/MR
5. Others
2020 年 2 月论文更新(17 篇)
这个月赶论文,看的论文比较少,本期 17 篇,其中 3 项开源工作;
1、2、3、4 建图相关
7、8、9 动态相关
10、11 视惯融合
13、14、15 AR相关
2020 年 2 月 25 日更新
1. Geometric SLAM
[1] Muglikar M, Zhang Z, Scaramuzza D. Voxel map for visual slam [C]. ICRA 2020 .
[2] Ye X, Ji X, Sun B, et al. DRM-SLAM: Towards Dense Reconstruction of Monocular SLAM with Scene Depth Fusion [J]. Neurocomputing, 2020 .
通过场景深度融合实现单目 SLAM 的稠密重建
大连理工大学,期刊:中科院二区, IF 4.0
[3] Nardi F, Grisetti G, Nardi D. High-Level Environment Representations for Mobile Robots . 2019 .
[4] Puligilla S S, Tourani S, Vaidya T, et al. Topological Mapping for Manhattan-like Repetitive Environments [J]. arXiv preprint arXiv:2002.06575, 2020 .
类曼哈顿重复环境中的拓扑建图
印度海得拉巴国际信息技术研究所;代码开源 ;演示视频
[5] Li X, Ling H. Hybrid Camera Pose Estimation with Online Partitioning for SLAM [J]. IEEE Robotics and Automation Letters, 2020 , 5(2): 1453-1460.
在线分割 SLAM 中的混合相机位姿估计
天普大学,林海滨教授
[6] Karimian A, Yang Z, Tron R. Statistical Outlier Identification in Multi-robot Visual SLAM using Expectation Maximization [J]. arXiv preprint arXiv:2002.02638, 2020 .
使用期望最大化(EM)识别多机器人视觉 SLAM 中的统计异常值
波士顿大学
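论文用期望最大化做异常值识别;作为更直观的对照,下面示意用 RANSAC 在 2D 点集上区分内点/外点(示意代码,非论文方法,阈值与迭代次数均为示例值):

```python
import numpy as np

def ransac_line(pts, iters=200, thresh=0.1, rng=None):
    """对 (N,2) 点集做 RANSAC 直线拟合,返回最优模型的内点掩码"""
    rng = np.random.default_rng(0) if rng is None else rng
    best_mask = None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)  # 随机采样两点
        p, q = pts[i], pts[j]
        d = q - p
        n = np.array([-d[1], d[0]])          # 直线的法向量
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                          # 两点重合,跳过
        n = n / norm
        dist = np.abs((pts - p) @ n)          # 各点到候选直线的距离
        mask = dist < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask                  # 保留内点最多的模型
    return best_mask
```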
[7] Henein M, Zhang J, Mahony R, et al. Dynamic SLAM: The Need For Speed [J]. arXiv preprint arXiv:2002.08584, 2020 .
[8] Nair G B, Daga S, Sajnani R, et al. Multi-object Monocular SLAM for Dynamic Environments [J]. arXiv preprint arXiv:2002.03528, 2020 .
用于动态环境的多目标单目 SLAM
印度海得拉巴理工学院
[9] Cheng J, Zhang H, Meng M Q H. Improving Visual Localization Accuracy in Dynamic Environments Based on Dynamic Region Removal [J]. IEEE Transactions on Automation Science and Engineering, 2020 .
通过动态区域剔除来提升动态环境中视觉定位的准确性
港中文;中科院二区 JCR Q1
2. Sensor Fusion
3. Semantic/Deep SLAM
4. AR/VR/MR
5. Others
2020 年 1 月论文更新(26 篇)
本期 26 篇论文,其中 7 项开源工作,1 项开放数据集;
5、6、10 关于线段的 SLAM
7 基于事件相机的 SLAM 综述
8、9、10 视惯融合
16、17 AR+SLAM
2020 年 1 月 28 日更新
1. Geometric SLAM
[1] Rückert D, Innmann M, Stamminger M. FragmentFusion: A Light-Weight SLAM Pipeline for Dense Reconstruction [C]//2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 342-347.
FragmentFusion:一种轻量级的用于稠密重建的方案
德国埃朗根-纽伦堡大学
[2] Chen Y, Shen S, Chen Y, et al. Graph-Based Parallel Large Scale Structure from Motion [J]. arXiv preprint arXiv:1912.10659, 2019 .
基于图的并行大尺度的 SFM
中科院自动化所,代码开源
[3] Sommer C, Sun Y, Guibas L, et al. From Planes to Corners: Multi-Purpose Primitive Detection in Unorganized 3D Point Clouds [J]. arXiv preprint arXiv:2001.07360, 2020 .
从平面到角点:无序 3D 点云中的多用途基元检测
慕尼黑工业大学,代码开源
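点云中平面检测最基础的一步是平面拟合:去质心后做 SVD,最小奇异值方向即法向量(示意代码,非论文的多基元检测方法):

```python
import numpy as np

def fit_plane(points):
    """对 (N,3) 点做最小二乘平面拟合,返回 (单位法向量 n, 偏移 d),满足 n·x = d"""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                     # 方差最小的方向即平面法向
    return n, float(n @ centroid)
```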
[4] Zhao Y, Vela P A. Good feature matching: Towards accurate, robust VO/VSLAM with low latency [J]. IEEE Transactions on Robotics, 2020.
良好的特征匹配:面向低延迟、准确且鲁棒的 VO / VSLAM
佐治亚理工学院,作者主页 ,代码开源 ,期刊中科院二区 JCR Q1
[5] Luo X, Tan Z, Ding Y. Accurate Line Reconstruction for Point and Line-Based Stereo Visual Odometry [J]. IEEE Access, 2019 , 7: 185108-185120.
基于双目点线视觉里程计的精确线段重构
浙江大学超大规模集成电路设计研究院,IEEE Access 开源期刊
[6] Ma J, Wang X, He Y, et al. Line-Based Stereo SLAM by Junction Matching and Vanishing Point Alignment [J]. IEEE Access, 2019 , 7: 181800-181811.
通过交点匹配与消失点对齐的基于线特征的双目 SLAM
武汉大学、中科院自动化所,IEEE Access 开源期刊
[7] 马艳阳, 叶梓豪, 刘坤华, 等. 基于事件相机的定位与建图算法: 综述 [J]. 自动化学报, 2020 , 46: 1-11.
2. Sensor Fusion
[8] Wen S, et al. Joint optimization based on direct sparse stereo visual-inertial odometry [J]. Autonomous Robots, 2020: 1-19.
期刊:Autonomous Robots,中科院三区,JCR Q1,IF 2.244
[9] Chen C, Zhu H, Wang L, et al. A Stereo Visual-Inertial SLAM Approach for Indoor Mobile Robots in Unknown Environments Without Occlusions [J]. IEEE Access, 2019 , 7: 185408-185421.
无遮挡未知环境中室内移动机器人的双目视觉惯性 SLAM 方法
中国矿业大学,代码开源 (还未放出),IEEE Access 开源期刊
[10] Yan D, Wu C, Wang W, et al. Invariant Cubature Kalman Filter for Monocular Visual Inertial Odometry with Line Features [J]. arXiv preprint arXiv:1912.11749, 2019 .
单目线特征视觉惯性里程法的不变容积卡尔曼滤波
石家庄铁道大学、北京交通大学
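不变容积卡尔曼滤波属于非线性滤波;作为入门对照,下面示意最基础的线性卡尔曼滤波(1D 匀速模型、位置观测;参数均为示例值,非论文实现):

```python
import numpy as np

def kalman_1d_cv(zs, dt=1.0, q=1e-3, r=0.25):
    """线性卡尔曼滤波:1D 匀速模型,状态 [位置, 速度],观测为位置序列 zs"""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # 状态转移
    H = np.array([[1.0, 0.0]])              # 观测模型(只观测位置)
    Q = q * np.eye(2)                       # 过程噪声协方差
    R = np.array([[r]])                     # 观测噪声协方差
    x = np.zeros(2)
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x                           # 预测
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # 更新
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```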
3. Semantic/Deep SLAM
[11] Xu J, et al. Edge Assisted Mobile Semantic Visual SLAM [C]. IEEE INFOCOM 2020.
[12] Zhao Z, et al. Visual Semantic SLAM with Landmarks for Large-Scale Outdoor Environment [J]. arXiv preprint arXiv:2001.01028, 2020.
用于大规模室外环境的具有路标的视觉语义 SLAM
西安交大、北京交大
[13] Wang L, et al. Object-Aware Hybrid Map for Indoor Robot Visual Semantic Navigation [C]//2019 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2019: 1166-1172.
用于室内机器人视觉语义导航的对象感知混合地图
燕山大学、加拿大阿尔伯塔大学、伦敦大学
[14] Czarnowski, J., Laidlow, T., Clark, R., & Davison, A. J. (2020 ). DeepFactors: Real-Time Probabilistic Dense Monocular SLAM . IEEE Robotics and Automation Letters, 5(2), 721–728. doi:10.1109/lra.2020.2965415
DeepFactors:实时的概率单目稠密 SLAM
帝国理工学院戴森机器人实验室,代码开源
[15] Tripathi N, Sistu G, Yogamani S. Trained Trajectory based Automated Parking System using Visual SLAM [J]. arXiv preprint arXiv:2001.02161, 2020.
使用视觉 SLAM 基于轨迹训练的自动停车系统
爱尔兰法雷奥视觉系统公司
4. AR/VR/MR
[16] Wang C, et al. NEAR: The NetEase AR Oriented Visual Inertial Dataset [C]//2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 366-371.
[17] Huang N, Chen J, Miao Y. Optimization for RGB-D SLAM Based on Plane Geometrical Constraint [C]//2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 326-331.
基于平面几何约束优化的 RGB-D SLAM
北理工
[18] Wu Y C, Chan L, Lin W C. Tangible and Visible 3D Object Reconstruction in Augmented Reality [C]//2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2019: 26-36.
增强现实中有形且可见的三维物体重建
台湾国立交通大学计算机科学系
[19] Feigl T, et al. Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-Scale Industry Environments [J].
Friedrich-Alexander University Erlangen-Nürnberg, Germany
[20] Yang X, Yang J, He H, et al. A Hybrid 3D Registration Method of Augmented Reality for Intelligent Manufacturing [J]. IEEE Access, 2019 , 7: 181867-181883.
Guangdong University of Technology; open-access journal
5. Others
[21] Speciale P. Novel Geometric Constraints for 3D Computer Vision Applications [D]. ETH Zurich, 2019 .
[22] Patil V, Van Gansbeke W, Dai D, et al. Don't Forget The Past: Recurrent Depth Estimation from Monocular Video [J]. arXiv preprint arXiv:2001.02613, 2020 .
ETH Zurich; code to be open-sourced (not yet released)
[23] CHIU, Hsu-kuang, et al. Probabilistic 3D Multi-Object Tracking for Autonomous Driving . arXiv preprint arXiv:2001.05673, 2020 .
Stanford University; Toyota Research Institute; code open-sourced
[24] ZHOU, Boyu, et al. Robust Real-time UAV Replanning Using Guided Gradient-based Optimization and Topological Paths . arXiv preprint arXiv:1912.12644, 2019 .
[25] Object-based localization , 2019 .
[26] Device pose estimation using 3d line clouds , 2019 .
December 2019 paper update (23 papers)
This batch covers 23 papers, 5 of which are open-source works;
highlights include TextSLAM, VersaVIS, and monocular 3D object detection.
1. Geometric SLAM
[1] Tanke J, Kwon O H, Stotko P, et al. Bonn Activity Maps: Dataset Description [J]. arXiv preprint arXiv:1912.06354, 2019 .
[2] An S, Che G, Zhou F, et al. Fast and Incremental Loop Closure Detection Using Proximity Graphs [J]. arXiv preprint arXiv:1911.10752, 2019 .
JD.com; Beihang University; code open-sourced
[3] Li B, Zou D, Sartori D, et al. TextSLAM: Visual SLAM with Planar Text Features [J]. arXiv preprint arXiv:1912.05002, 2019.
Danping Zou's group, Shanghai Jiao Tong University
[4] Bundle Adjustment Revisited
[5] Lange M, Raisch C, Schilling A. LVO: Line only stereo Visual Odometry [C]//2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN). IEEE, 1-8.
[6] Liu W, Mo Y, Jiao J. An efficient edge-feature constraint visual SLAM [C]//Proceedings of the International Conference on Artificial Intelligence, Information Processing and Cloud Computing. ACM, 2019 : 13.
[7] Pan L, Wang P, Cao J, et al. Dense RGB-D SLAM with Planes Detection and Mapping [C]//IECON 2019-45th Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2019 , 1: 5192-5197.
National University of Singapore
[8] Ji S, Qin Z, Shan J, et al. Panoramic SLAM from a multiple fisheye camera rig [J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020 , 159: 169-183.
[9] Lecrosnier L, Boutteau R, Vasseur P, et al. Vision based vehicle relocalization in 3D line-feature map using Perspective-n-Line with a known vertical direction [C]//2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE, 2019 : 1263-1269.
University of Normandy, France
[10] de Souza Muñoz M E, Menezes M C, de Freitas E P, et al. A Parallel RatSlam C++ Library Implementation [C]//Latin American Workshop on Computational Neuroscience. Springer, Cham, 2019 : 173-183.
2. Sensor Fusion
[11] Tschopp F, Riner M, Fehr M, et al. VersaVIS: An Open Versatile Multi-Camera Visual-Inertial Sensor Suite [J]. arXiv preprint arXiv:1912.02469, 2019 .
ETH Zurich; code open-sourced
[12] Geneva P, Eckenhoff K, Lee W, et al. Openvins: A research platform for visual-inertial estimation [C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China. IROS 2019 .
University of Delaware; code open-sourced
[13] Barrau A, Bonnabel S. A Mathematical Framework for IMU Error Propagation with Applications to Preintegration [J]. 2019 .
PSL Research University
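Entry [13] concerns how IMU errors propagate through preintegration. As a hedged illustration of the basic object being studied (plain Euler preintegration of gyro/accelerometer samples between two keyframes, ignoring noise, bias, and the manifold-aware error models the paper actually develops), a minimal sketch:

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    # Rodrigues' formula: rotation matrix for a rotation vector phi.
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate the relative rotation dR, velocity dv and position dp
    between two keyframes from raw IMU samples (bias, noise and gravity
    compensation are ignored in this sketch)."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv += (dR @ a) * dt
        dR = dR @ exp_so3(w * dt)
    return dR, dv, dp
```

The point of the paper is precisely what this sketch omits: a principled model for how errors in these accumulated quantities propagate.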
[14] Ke T, Wu K J, Roumeliotis S I. RISE-SLAM: A Resource-aware Inverse Schmidt Estimator for SLAM[C]. IROS 2019 .
3. Semantic/Deep SLAM
[15] Dong Y, Wang S, Yue J, et al. A Novel Texture-Less Object Oriented Visual SLAM System [J]. IEEE Transactions on Intelligent Transportation Systems, 2019 .
Tongji University; journal: CAS Zone 2, JCR Q1, IF 6
[16] Peng J, Shi X, Wu J, et al. An Object-Oriented Semantic SLAM System towards Dynamic Environments for Mobile Manipulation [C]//2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM ). IEEE, 2019 : 199-204.
School of Mechanical Engineering, Shanghai Jiao Tong University; AIM is a CCF Class C conference in AI
[17] Kim U H, Kim S H, Kim J H. SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation [J]. arXiv preprint arXiv:1911.05939, 2019 .
Korea Advanced Institute of Science and Technology (KAIST)
[18] Howard Mahe, Denis Marraud, Andrew I. Comport. Real-time RGB-D semantic keyframe SLAM based on image segmentation learning from industrial CAD models . International Conference on Advanced Robotics, Dec 2019, Belo Horizonte, Brazil. hal-02391499
4. AR/VR/MR
5. Others
[21] Yan Z, Zha H. Flow-based SLAM: From geometry computation to learning [J]. Virtual Reality & Intelligent Hardware, 2019 , 1(5): 435-460.
[22] Li J, Liu Y, Yuan X, et al. Depth Based Semantic Scene Completion With Position Importance Aware Loss [J]. IEEE Robotics and Automation Letters, 2019 , 5(1): 219-226.
[23] Simonelli A, Bulò S R R, Porzi L, et al. Disentangling Monocular 3D Object Detection [J]. arXiv preprint arXiv:1905.12365, 2019 .
University of Trento, Italy
November 2019 paper update (17 papers)
1. Geometric SLAM
[1] Jatavallabhula K M, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation [J]. arXiv preprint arXiv:1910.10672, 2019 .
[2] Lee S J, Hwang S S. Elaborate Monocular Point and Line SLAM With Robust Initialization [C]//ICCV 2019 : 1121-1129.
Handong Global University, South Korea
[3] Wen F, Ying R, Gong Z, et al. Efficient Algorithms for Maximum Consensus Robust Fitting [J]. IEEE Transactions on Robotics, 2019 .
Department of Electronic Engineering / Brain-Inspired Application Technology Center, Shanghai Jiao Tong University; journal: CAS Zone 2, JCR Q1, IF 1.038; code open-sourced
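Maximum-consensus robust fitting, as in entry [3], seeks model parameters agreeing with as many observations as possible. The paper develops far more efficient algorithms; purely as an illustration of the objective, here is a plain RANSAC-style line fit with hypothetical thresholds (not from the paper):

```python
import random
import numpy as np

def max_consensus_line(points, threshold=0.1, iters=200, seed=0):
    """Return ((a, b), n_inliers) for y = a*x + b maximizing the number
    of points within `threshold` vertical distance (RANSAC sketch)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    pts = np.asarray(points, dtype=float)
    for _ in range(iters):
        # Hypothesize a line from two randomly chosen points.
        (x1, y1), (x2, y2) = rng.sample(list(map(tuple, pts)), 2)
        if x1 == x2:
            continue  # vertical pair, skip degenerate hypothesis
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Consensus = number of points whose residual is under threshold.
        inliers = int(np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < threshold))
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers
```

Randomized sampling like this only approximates the maximum-consensus optimum; the cited work is about solving the problem efficiently with guarantees.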
[4] Civera J, Lee S H. RGB-D Odometry and SLAM [M]//RGB-D Image Analysis and Processing. Springer, Cham, 2019 : 117-144.
[5] Wang H, Li J, Ran M, et al. Fast Loop Closure Detection via Binary Content [C]//2019 IEEE 15th International Conference on Control and Automation (ICCA ). IEEE, 2019 : 1563-1568.
Nanyang Technological University; ICCA conference
[6] Liu W, Wu S, Wu Z, et al. Incremental Pose Map Optimization for Monocular Vision SLAM Based on Similarity Transformation [J]. Sensors, 2019 , 19(22): 4945.
Beihang University; open-access journal
[7] Wang S, Yue J, Dong Y, et al. A synthetic dataset for Visual SLAM evaluation [J]. Robotics and Autonomous Systems, 2019 : 103336.
Tongji University; journal: CAS Zone 3, JCR Q2, IF 2.809
[8] Han B, Li X, Yu Q, et al. A Novel Visual Odometry Aided by Vanishing Points in the Manhattan World [J].
[9] Yuan Z, Zhu D, Chi C, et al. Visual-Inertial State Estimation with Pre-integration Correction for Robust Mobile Augmented Reality [C]//Proceedings of the 27th ACM International Conference on Multimedia. ACM, 2019 : 1410-1418.
Huazhong University of Science and Technology; ACM MM is a CCF Class A conference
[10] Zhong D, Han L, Fang L. iDFusion: Globally Consistent Dense 3D Reconstruction from RGB-D and Inertial Measurements [C]//Proceedings of the 27th ACM International Conference on Multimedia. ACM, 2019 : 962-970.
Tsinghua University; HKUST; ACM MM is a CCF Class A conference; Google Scholar
[11] Duhautbout T, Moras J, Marzat J. Distributed 3D TSDF Manifold Mapping for Multi-Robot Systems [C]//2019 European Conference on Mobile Robots (ECMR ). IEEE, 2019 : 1-8.
Open-source TSDF library: https://github.com/personalrobotics/OpenChisel
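Entry [11] and OpenChisel both build on truncated signed distance fields (TSDF). As a hedged, minimal illustration of the core fusion rule only (a weighted running average of truncated depth differences per voxel, shown for a single 1-D column of voxels; real systems do this over a 3-D grid with projective association), a sketch:

```python
import numpy as np

def fuse_depth(tsdf, weights, voxel_z, depth, trunc=0.1):
    """Fuse one depth measurement into a 1-D column of voxels.

    tsdf, weights : per-voxel signed distance and confidence (updated in place)
    voxel_z       : depth of each voxel centre along the viewing ray
    depth         : measured surface depth for this ray
    """
    sdf = depth - voxel_z                 # signed distance to the surface
    mask = sdf > -trunc                   # skip voxels far behind the surface
    d = np.clip(sdf, -trunc, trunc)       # truncate the distance
    w_new = weights + mask                # unit weight per observation
    fused = (tsdf * weights + d * mask) / np.maximum(w_new, 1)
    tsdf[:] = np.where(mask, fused, tsdf)
    weights[:] = w_new
    return tsdf, weights
```

The surface is then recovered as the zero crossing of the fused field; the distributed-manifold aspect of the paper is beyond this sketch.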
2. Semantic/Deep SLAM
[12] Schorghuber M, Steininger D, Cabon Y, et al. SLAMANTIC-Leveraging Semantics to Improve VSLAM in Dynamic Environments [C]//ICCV Workshops . 2019 : 0-0.
Austrian Institute of Technology; code open-sourced
[13] Lee C Y, Lee H, Hwang I, et al. Spatial Perception by Object-Aware Visual Scene Representation [C]//ICCV Workshops 2019 : 0-0.
[14] Peng J, Shi X, Wu J, et al. An Object-Oriented Semantic SLAM System towards Dynamic Environments for Mobile Manipulation [C]//2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM ). IEEE, 2019 : 199-204.
Shanghai Jiao Tong University; AIM is a CCF Class C conference in AI
[15] Cui L, Ma C. SOF-SLAM: A semantic visual SLAM for Dynamic Environments [J]. IEEE Access, 2019 .
Beihang University; IEEE Access (open-access journal)
[16] Zheng L, Tao W. Semantic Object and Plane SLAM for RGB-D Cameras [C]//Chinese Conference on Pattern Recognition and Computer Vision (PRCV ). Springer, Cham, 2019 : 137-148.
Huazhong University of Science and Technology; plane segmentation uses PEAC; PRCV (2nd edition, held in Xi'an in 2019)
[17] Kim U H, Kim S H, Kim J H. SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation [J]. arXiv preprint arXiv:1911.05939, 2019 .
October 2019 paper update (22 papers)
1. Geometric SLAM
[1] Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A Versatile Visual SLAM Framework [J]. arXiv preprint arXiv:1910.01122, 2019 .
[2] Chen Y, Huang S, Fitch R, et al. On-line 3D active pose-graph SLAM based on key poses using graph topology and sub-maps [C]//2019 International Conference on Robotics and Automation (ICRA ). IEEE, 2019 : 169-175.
University of Technology Sydney
[3] Pfrommer B, Daniilidis K. TagSLAM: Robust SLAM with Fiducial Markers [J]. arXiv preprint arXiv:1910.00679, 2019 .
GRASP Lab (General Robotics, Automation, Sensing and Perception), University of Pennsylvania; project page
[4] Lin T Y, Clark W, Eustice R M, et al. Adaptive Continuous Visual Odometry from RGB-D Images [J]. arXiv preprint arXiv:1910.00713, 2019 .
University of Michigan
[5] Y Yang, P Geneva, K Eckenhoff, G Huang. Visual-Inertial Odometry with Point and Line Features , 2019 .
[6] Tarrio J J, Smitt C, Pedre S. SE-SLAM: Semi-Dense Structured Edge-Based Monocular SLAM [J]. arXiv preprint arXiv:1909.03917, 2019 .
Balseiro Institute, Argentina
[7] Wu X, Pradalier C. Robust Semi-Direct Monocular Visual Odometry Using Edge and Illumination-Robust Cost [J]. arXiv preprint arXiv:1909.11362, 2019 .
Georgia Institute of Technology
[8] Pan Z, Chen H, Li S, et al. ClusterMap Building and Relocalization in Urban Environments for Unmanned Vehicles [J]. Sensors, 2019 , 19(19): 4252.
Harbin Institute of Technology (Shenzhen); CUHK; journal Sensors: open access, CAS Zone 3, JCR Q2/Q3, IF 3.014
[9] Zhang M, Zuo X, Chen Y, et al. Localization for Ground Robots: On Manifold Representation, Integration, Re-Parameterization, and Optimization [J]. arXiv preprint arXiv:1909.03423, 2019 .
Alibaba A.I. Labs
[10] Kirsanov P, Gaskarov A, Konokhov F, et al. DISCOMAN: Dataset of Indoor SCenes for Odometry, Mapping And Navigation [J]. arXiv preprint arXiv:1909.12146, 2019 .
Samsung AI Center; dataset to be released with the formal publication
2. Semantic/Deep SLAM
[11] Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System [J]. Journal of Intelligent & Robotic Systems, 2019 : 1-10.
CIFASIS, Argentina; code open-sourced
[12] Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping [J]. arXiv preprint arXiv:1910.02490, 2019 .
[13] Feng Q, Meng Y, Shan M, et al. Localization and Mapping using Instance-specific Mesh Models [J].IROS 2019
Contextual Robotics Institute, UC San Diego
[14] Liao Z, Shi J, Qi X, et al. Coarse-To-Fine Visual Localization Using Semantic Compact Map [J]. arXiv preprint arXiv:1910.04936, 2019 .
Beihang University; Megvii (Face++)
[15] Doherty K, Baxter D, Schneeweiss E, et al. Probabilistic Data Association via Mixture Models for Robust Semantic SLAM [J]. arXiv preprint arXiv:1909.11213, 2019 .
[16] Jung E, Yang N, Cremers D. Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light [J]. arXiv preprint arXiv:1910.06632, 2019 .
Technical University of Munich; Australian National University; Artisense (autonomous driving company); by the authors of LSD-SLAM and DSO; Google Scholar
[17] Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments [J]. Biological Cybernetics, 2019 : 1-31.
Queensland University of Technology; by the RatSLAM authors; code open-sourced
[18] Zeng T, Si B. A Brain-Inspired Compact Cognitive Mapping System [J]. arXiv preprint arXiv:1910.03913, 2019 .
3. Learning others
4. Others
September 2019 paper update (24 papers)
1. Geometric SLAM
[1] Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system [J]. arXiv preprint arXiv:1908.11585, 2019 .
University of Zaragoza, Spain; by the ORB-SLAM authors
[2] Yang Y, Dong W, Kaess M. Surfel-Based Dense RGB-D Reconstruction With Global And Local Consistency [C]//2019 International Conference on Robotics and Automation (ICRA ). IEEE, 2019 : 5238-5244.
[3] Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System [J]. Journal of Intelligent & Robotic Systems, 2019 : 1-10.
Journal: CAS Zone 4, JCR Q4, IF 2.4
[4] Ferrer G. Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment [C] IROS 2019 .
Skolkovo Institute of Science and Technology, Russia; Samsung; code open-sourced; demo video
[5] Zhang Y, Yang J, Zhang H, et al. Bundle Adjustment for Monocular Visual Odometry Based on Detected Traffic Sign Features [C]//2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019 : 4350-4354.
Beijing Institute of Technology; University of Washington
[6] Zhang X, Wang W, Qi X, et al. Point-Plane SLAM Using Supposed Planes for Indoor Environments [J]. Sensors, 2019 , 19(17): 3795.
Robotics Institute, Beihang University; open-access journal
[7] Zheng F, Liu Y H. Visual-Odometric Localization and Mapping for Ground Vehicles Using SE (2)-XYZ Constraints [C]//2019 International Conference on Robotics and Automation (ICRA ). IEEE, 2019 : 3556-3562.
The Chinese University of Hong Kong; code open-sourced
[8] Li H, Xing Y, Zhao J, et al. Leveraging Structural Regularity of Atlanta World for Monocular SLAM [C]//2019 International Conference on Robotics and Automation (ICRA ). IEEE, 2019 : 2412-2418.
[9] Sun J, Wang Y, Shen Y. Fully Scaled Monocular Direct Sparse Odometry with A Distance Constraint [C]//2019 5th International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2019 : 271-275.
[10] Dong J, Lv Z. miniSAM: A Flexible Factor Graph Non-linear Least Squares Optimization Framework [J]. arXiv preprint arXiv:1909.00903, 2019 .
[11] Campos C, MM M J, Tardós J D. Fast and Robust Initialization for Visual-Inertial SLAM [C]//2019 International Conference on Robotics and Automation (ICRA ). IEEE, 2019 : 1288-1294.
University of Zaragoza, Spain; ORB-SLAM group
[12] He L, Yang M, Li H, et al. Graph Matching Pose SLAM based on Road Network Information [C]//2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019 : 1274-1279.
Key Laboratory of System Control and Information Processing (Ministry of Education), Shanghai Jiao Tong University
[13] Gu T, Yan R. An Improved Loop Closure Detection for RatSLAM [C]//2019 5th International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2019 : 884-888.
Sichuan University
2. Semantic/Deep SLAM
[14] Zhang J, Gui M, Wang Q, et al. Hierarchical Topic Model Based Object Association for Semantic SLAM [J]. IEEE transactions on visualization and computer graphics, 2019 .
Journal: CAS Zone 3, JCR Q1, IF 3.78
[15] Gählert N, Wan J J, Weber M, et al. Beyond Bounding Boxes: Using Bounding Shapes for Real-Time 3D Vehicle Detection from Monocular RGB Images [C]//2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019 : 675-682.
University of Jena, Germany
[16] Yang N, Wang R, Stuckler J, et al. Deep virtual stereo odometry: Leveraging deep depth prediction for monocular direct sparse odometry [C]//Proceedings of the European Conference on Computer Vision (ECCV ). 2018 : 817-833.
Technical University of Munich; author homepage; project page
[17] Liu H, Ma H, Zhang L. Visual Odometry based on Semantic Supervision [C]//2019 IEEE International Conference on Image Processing (ICIP ). IEEE, 2019 : 2566-2570.
Tsinghua University; ICIP is a CCF Class C conference (graphics & multimedia)
[18] Wald J, Avetisyan A, Navab N, et al. RIO: 3D Object Instance Re-Localization in Changing Indoor Environments [J]. arXiv preprint arXiv:1908.06109, 2019 .
TUM; Google; project page with an open dataset
3. AR & MR & VR
4. Learning others
5. Others
[22] Yang B, Xu X, Li J, et al. Landmark Generation in Visual Place Recognition Using Multi-Scale Sliding Window for Robotics [J]. Applied Sciences, 2019 , 9(15): 3146.
Southeast University; journal: open access, CAS Zone 3, JCR Q3
[23] Hofstetter I, Sprunk M, Schuster F, et al. On Ambiguities in Feature-Based Vehicle Localization and their A Priori Detection in Maps [C]//2019 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2019 : 1192-1198.
A useful reference for object-level data association in SLAM
[24] Kümmerle J, Sons M, Poggenhans F, et al. Accurate and Efficient Self-Localization on Roads using Basic Geometric Primitives [C]//2019 International Conference on Robotics and Automation (ICRA ). IEEE, 2019 : 5965-5971.
Karlsruhe Institute of Technology, Germany
August 2019 paper update (26 papers)
1. Geometric SLAM
[1] Wei X, Huang J, Ma X. Real-Time Monocular Visual SLAM by Combining Points and Lines [C]//2019 IEEE International Conference on Multimedia and Expo (ICME ). IEEE, 2019 : 103-108.
Shanghai Advanced Research Institute, Chinese Academy of Sciences; ICME is a CCF Class B conference (graphics & multimedia)
[2] Fu Q, Yu H, Lai L, et al. A Robust RGB-D SLAM System with Points and Lines for Low Texture Indoor Environments [J]. IEEE Sensors Journal, 2019 .
National Engineering Laboratory for Robot Visual Perception and Control, Hunan University; journal IEEE Sensors Journal: CAS Zone 3, JCR Q1/Q2, IF 2.69
[3] Zhao W, Qian K, Ma Z, et al. Stereo Visual SLAM Using Bag of Point and Line Word Pairs [C]//International Conference on Intelligent Robotics and Applications. Springer, Cham, 2019 : 651-661.
[4] Hachiuma R, Pirchheim C, Schmalstieg D, et al. DetectFusion: Detecting and Segmenting Both Known and Unknown Dynamic Objects in Real-time SLAM [C]//Proceedings British Machine Vision Conference (BMVC ). 2019 .
Keio University, Japan; Graz University of Technology; BMVC is a CCF Class C conference in AI
[5] Prokhorov D, Zhukov D, Barinova O, et al. Measuring robustness of Visual SLAM [C]//2019 16th International Conference on Machine Vision Applications (MVA). IEEE, 2019 : 1-6.
Samsung AI Research Center; MVA is a CCF Class C conference in AI
[6] Ryohei Y, Kanji T, Koji T. Invariant Spatial Information for Loop-Closure Detection [C]//2019 16th International Conference on Machine Vision Applications (MVA). IEEE, 2019 : 1-6.
University of Fukui, Japan; MVA is a CCF Class C conference in AI
[7] Yang B, Xu X, Li J. Keyframe-Based Camera Relocalization Method Using Landmark and Keypoint Matching [J]. IEEE Access, 2019 , 7: 86854-86862.
Southeast University; IEEE Access (open-access journal)
2. Semantic/Deep SLAM
[8] Ganti P, Waslander S. Network Uncertainty Informed Semantic Feature Selection for Visual SLAM [C]//2019 16th Conference on Computer and Robot Vision (CRV). IEEE, 2019 : 121-128.
[9] Yu H, Lee B. Not Only Look But Observe: Variational Observation Model of Scene-Level 3D Multi-Object Understanding for Probabilistic SLAM [J]. arXiv preprint arXiv:1907.09760, 2019 .
[10] Hu L, Xu W, Huang K, et al. Deep-SLAM++: Object-level RGBD SLAM based on class-specific deep shape priors [J]. arXiv preprint arXiv:1907.09691, 2019 .
ShanghaiTech University
[11] Torres Cámara J M. Map Slammer. Densifying Scattered KSLAM 3D Maps with Estimated Depth [J]. 2019 .
Degree thesis, University of Alicante, Spain; code open-sourced; demo video
[12] Cieslewski T, Bloesch M, Scaramuzza D. Matching Features without Descriptors: Implicitly Matched Interest Points (IMIPs) [C]// 2019 British Machine Vision Conference.2018 .
ETH Zurich; Imperial College London; code open-sourced; BMVC is a CCF Class C conference in AI
[13] Zheng J, Zhang J, Li J, et al. Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling [J]. arXiv preprint arXiv:1908.00222, 2019 .
[14] Jikai Lu, Jianhui Chen, James J. Little, Pan-tilt-zoom SLAM for Sports Videos .[C]//British Machine Vision Conference (BMVC) 2019 .
University of British Columbia; code open-sourced
3. AR & MR & VR
4. Learning others
[18] Ku J, Pon A D, Waslander S L. Monocular 3D Object Detection Leveraging Accurate Proposals and Shape Reconstruction [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2019 : 11867-11876.
[19] Chiang H, Lin Y, Liu Y, et al. A Unified Point-Based Framework for 3D Segmentation [J]. arXiv preprint arXiv:1908.00478, 2019 .
National Taiwan University; Amazon; code open-sourced
[20] Zhang Y, Lu Z, Xue J H, et al. A New Rotation-Invariant Deep Network for 3D Object Recognition [C]//2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE , 2019 : 1606-1611.
Tsinghua University; ICME is a CCF Class B conference (graphics & multimedia)
[21] Gao Y, Yuille A L. Estimation of 3D Category-Specific Object Structure: Symmetry, Manhattan and/or Multiple Images[J]. International Journal of Computer Vision, 2019 : 1-26.
University of Science and Technology of China; journal International Journal of Computer Vision: CAS Zone 1, JCR Q1, IF 12.389
[22] Palazzi A, Bergamini L, Calderara S, et al. Semi-parametric Object Synthesis [J]. arXiv preprint arXiv:1907.10634, 2019 .
[23] Christiansen P H, Kragh M F, Brodskiy Y, et al. UnsuperPoint: End-to-end Unsupervised Interest Point Detector and Descriptor [J]. arXiv preprint arXiv:1907.04011, 2019 .
[24] Chen B X, Tsotsos J K. Fast Visual Object Tracking with Rotated Bounding Boxes [J]. arXiv preprint arXiv:1907.03892, 2019 .
York University; University of Toronto; code open-sourced
[25] Brazil G, Liu X. M3D-RPN: Monocular 3D Region Proposal Network for Object Detection [J]. arXiv preprint arXiv:1907.06038, 2019 .
Michigan State University
5. Others
July 2019 paper update (36 papers)
1. Geometric SLAM
[1] Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system [C]//IEEE International Conference on Robotics and Automation(ICRA ) 2019. 2019 .
[2] Christensen K, Hebert M. Edge-Direct Visual Odometry [J]. arXiv preprint arXiv:1906.04838, 2019 .
[3] Dong E, Xu J, Wu C, et al. Pair-Navi: Peer-to-Peer Indoor Navigation with Mobile Visual SLAM [C]//IEEE INFOCOM 2019-IEEE Conference on Computer Communications. IEEE, 2019 : 1189-1197.
Tsinghua University; Google Scholar; IEEE INFOCOM is a CCF Class A conference (computer networks)
[4] Zhou H, Fan H, Peng K, et al. Monocular Visual Odometry Initialization With Points and Line Segments [J]. IEEE Access, 2019 , 7: 73120-73130.
National University of Defense Technology; Tsinghua University; CUHK; IEEE Access (open-access journal)
[5] He M, Zhu C, Huang Q, et al. A review of monocular visual odometry [J]. The Visual Computer, 2019 : 1-13.
Hohai University; journal The Visual Computer: CAS Zone 4, JCR Q3, IF 1.39
[6] Bujanca M, Gafton P, Saeedi S, et al. SLAMBench 3.0: Systematic Automated Reproducible Evaluation of SLAM Systems for Robot Vision Challenges and Scene Understanding [C]//IEEE International Conference on Robotics and Automation (ICRA ). 2019 .
University of Edinburgh; Imperial College London
[7] Wang Y, Zell A. Improving Feature-based Visual SLAM by Semantics [C]//2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS). IEEE, 2018 : 7-12.
[8] Mo J, Sattar J. Extending Monocular Visual Odometry to Stereo Camera System by Scale Optimization [C]. International Conference on Intelligent Robots and Systems (IROS ), 2019 .
[9] A Modular Optimization framework for Localization and mApping (MOLA) . 2019
University of Almería, Spain; code open-sourced
[10] Ye W, Zhao Y, Vela P A. Characterizing SLAM Benchmarks and Methods for the Robust Perception Age [J]. arXiv preprint arXiv:1905.07808, 2019 .
Georgia Institute of Technology
[11] Bürki M, Cadena C, Gilitschenski I, et al. Appearance‐based landmark selection for visual localization [J]. Journal of Field Robotics. 2019
ETH Zurich; MIT; journal: CAS Zone 2, JCR Q1, IF 5.0
[12] Hsiao M, Kaess M. MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree [J]. 2019 .
CMU; code open-sourced
[13] Schops T, Sattler T, Pollefeys M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019 : 134-144.
[14] Wang K, Gao F, Shen S. Real-time Scalable Dense Surfel Mapping [C]//Proc. of the IEEE Intl. Conf. on Robot. and Autom.(ICRA ). 2019 .
Shaojie Shen's group, HKUST; code open-sourced
[15] Zhao Y, Xu S, Bu S, et al. GSLAM: A General SLAM Framework and Benchmark [J]. arXiv preprint arXiv:1902.07995, 2019 .
Northwestern Polytechnical University; Institute of Automation, Chinese Academy of Sciences; code open-sourced
[16] Nellithimaru A K, Kantor G A. ROLS: Robust Object-Level SLAM for Grape Counting [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. CVPR2019 : 0-0.
[17] Nejad Z Z, Ahmadabadian A H. ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs [J]. Machine Vision and Applications, 2019 : 1-10.
Toosi University of Technology, Tehran, Iran; code open-sourced; journal: CAS Zone 4, JCR Q2/Q3, IF 1.3
[18] Aloise I, Della Corte B, Nardi F, et al. Systematic Handling of Heterogeneous Geometric Primitives in Graph-SLAM Optimization [J]. IEEE Robotics and Automation Letters, 2019 , 4(3): 2738-2745.
Sapienza University of Rome; code open-sourced
[19] Guo R, Peng K, Fan W, et al. RGB-D SLAM Using Point–Plane Constraints for Indoor Environments [J]. Sensors, 2019 , 19(12): 2721.
National University of Defense Technology; journal: open access, CAS Zone 3, JCR Q2/Q3, IF 3.0
[20] Laidlow T, Czarnowski J, Leutenegger S. DeepFusion: Real-Time Dense 3D Reconstruction for Monocular SLAM using Single-View Depth and Gradient Predictions [J].2019 .
[21] Saeedi S, Carvalho E, Li W, et al. Characterizing Visual Localization and Mapping Datasets [C]//2019 IEEE International Conference on Robotics and Automation (ICRA ). 2019 .
[22] Sun T, Sun Y, Liu M, et al. Movable-Object-Aware Visual SLAM via Weakly Supervised Semantic Segmentation [J]. arXiv preprint arXiv:1906.03629, 2019 .
HKUST
[23] Ghaffari M, Clark W, Bloch A, et al. Continuous Direct Sparse Visual Odometry from RGB-D Images [J]. arXiv preprint arXiv:1904.02266, 2019 .
University of Michigan; code open-sourced
[24] Houseago C, Bloesch M, Leutenegger S. KO-Fusion: Dense Visual SLAM with Tightly-Coupled Kinematic and Odometric Tracking [J]. 2019
Dyson Robotics Lab, Imperial College London
[25] Iqbal A, Gans N R. Localization of Classified Objects in SLAM using Nonparametric Statistics and Clustering [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ). IEEE, 2018 : 161-168.
Department of Computer Engineering, University of Texas
[26] Semantic Mapping for View-Invariant Relocalization . 2019
McGill University, Montreal, Canada
[27] Hou Z, Ding Y, Wang Y, et al. Visual Odometry for Indoor Mobile Robot by Recognizing Local Manhattan Structures [C]//Asian Conference on Computer Vision. Springer, Cham, ACCV2018 : 168-182.
Nanjing University of Science and Technology
2. Semantic/Deep SLAM
3. AR & MR & VR
[30] Guerra W, Tal E, Murali V, et al. FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality[J]. arXiv preprint arXiv:1905.11377, 2019.
MIT; code open-sourced; project page
[31] Stotko P, Krumpen S, Hullin M B, et al. SLAMCast: Large-Scale, Real-Time 3D Reconstruction and Streaming for Immersive Multi-Client Live Telepresence [J]. IEEE transactions on visualization and computer graphics, 2019 , 25(5): 2102-2112.
4. Learning others
[32] Jörgensen E, Zach C, Kahl F. Monocular 3D Object Detection and Box Fitting Trained End-to-End Using Intersection-over-Union Loss [J]. arXiv preprint arXiv:1906.08070, 2019 .
Chalmers University of Technology, Sweden; demo video
[33] Wang B H, Chao W L, Wang Y, et al. LDLS: 3-D Object Segmentation Through Label Diffusion From 2-D Images [J]. IEEE Robotics and Automation Letters, 2019 , 4(3): 2902-2909.
Cornell University; code open-sourced; journal IEEE Robotics and Automation Letters: CAS Zone 2, JCR Q1/Q2, IF 4.8
[34] Yang B, Wang J, Clark R, et al. Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds [J]. arXiv preprint arXiv:1906.01140, 2019 .
[35] Ahmed, Mariam. (2019 ). Pushing Boundaries with 3D Boundaries for Object Recognition . 10.13140/RG.2.2.33079.98728.
National University of Singapore
[36] Wu D, Zhuang Z, Xiang C, et al. 6D-VNet: End-To-End 6-DoF Vehicle Pose Estimation From Monocular RGB Images [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. CVPR2019 : 0-0.
Shenzhen University; code open-sourced
June 2019 paper update (21 papers)
1. Geometric SLAM
[1] A Modular Optimization Framework for Localization and Mapping . [C] RSS 2019
[2] Wang C, Guo X. Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction [J]. arXiv preprint arXiv:1905.08853, 2019 .
[3] Wang J, Song J, Zhao L, et al. A submap joining algorithm for 3D reconstruction using an RGB-D camera based on point and plane features [J]. Robotics and Autonomous Systems, 2019 .
University of Technology Sydney; Google Scholar; journal: CAS Zone 3, JCR Q2, IF 2.809
[4] Joshi N, Sharma Y, Parkhiya P, et al. Integrating Objects into Monocular SLAM: Line Based Category Specific Models [J]. arXiv preprint arXiv:1905.04698, 2019 .
University of Hyderabad, India; related paper:
Parkhiya P, Khawad R, Murthy J K, et al. Constructing Category-Specific Models for Monocular Object-SLAM [C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018: 1-9.
[5] Niu J, Qian K. A hand-drawn map-based navigation method for mobile robots using objectness measure [J]. International Journal of Advanced Robotic Systems, 2019 , 16(3): 1729881419846339.
Southeast University; journal: CAS Zone 4, JCR Q4, IF 1.0
[6] Robust Object-based SLAM for High-speed Autonomous Navigation . 2019
[7] Cheng J, Sun Y, Meng M Q H. Robust Semantic Mapping in Challenging Environments [J]. Robotica, 1-15, 2019 .
The Chinese University of Hong Kong; HKUST; journal Robotica: CAS Zone 4, JCR Q4, IF 1.267
[8] Sun B, Mordohai P. Oriented Point Sampling for Plane Detection in Unorganized Point Clouds [J]. arXiv preprint arXiv:1905.02553, 2019 .
Stevens Institute of Technology, USA
[9] Palazzolo E, Behley J, Lottes P, et al. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals [J]. arXiv preprint arXiv:1905.02082, 2019 .
University of Bonn, Germany; code open-sourced
2. Semantic/Deep SLAM
3. Sensor Fusion
[12] Huang K, Xiao J, Stachniss C. Accurate Direct Visual-Laser Odometry with Explicit Occlusion Handling and Plane Detection [C]//Proceedings of the IEEE International Conference on Robotics and Automation (ICRA ). 2019 .
National University of Defense Technology
[13] Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots [J]. Sensors, 2019 , 19(10): 2251.
[14] Xiong X, Chen W, Liu Z, et al. DS-VIO: Robust and Efficient Stereo Visual Inertial Odometry based on Dual Stage EKF [J]. arXiv preprint arXiv:1905.00684, 2019 .
[15] Xing B Y, Pan F, Feng X X, et al. Autonomous Landing of a Micro Aerial Vehicle on a Moving Platform Using a Composite Landmark [J]. International Journal of Aerospace Engineering, 2019, 2019 .
Beijing Institute of Technology
4. AR & MR & VR
5. Learning others
6. Others
May 2019 paper update (51 papers)
1. Geometric SLAM
[1] Delmas P, Gee T. Stereo camera visual odometry for moving urban environments [J]. Integrated Computer-Aided Engineering, 2019 (Preprint): 1-14.
University of Auckland; journal: CAS Zone 2, JCR Q2
[2] Guo R, Zhou D, Peng K, et al. Plane Based Visual Odometry for Structural and Low-Texture Environments Using RGB-D Sensors [C]//2019 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2019: 1-4.
National University of Defense Technology
[3] X Wang, F Xue, Z Yan, W Dong, Q Wang, H Zha. Continuous-time Stereo Visual Odometry Based on Dynamics Model , 2019 .
Peking University; Shanghai Jiao Tong University
[4] Strecke M, Stückler J. EM-Fusion: Dynamic Object-Level SLAM with Probabilistic Data Association [J]. arXiv preprint arXiv:1904.11781, 2019 .
[5] Guclu O, Can A B. k-SLAM: A fast RGB-D SLAM approach for large indoor environments [J]. Computer Vision and Image Understanding, 2019 .
Hacettepe University, Turkey; JCR Q2, IF 2.776
[6] Yokozuka M, Oishi S, Simon T, et al. VITAMIN-E: VIsual Tracking And Mapping with Extremely Dense Feature Points [J]. arXiv preprint arXiv:1904.10324, 2019 .
National Institute of Advanced Industrial Science and Technology (AIST), Japan
[7] Zubizarreta J, Aguinaga I, Montiel J M M. Direct Sparse Mapping [J]. arXiv preprint arXiv:1904.06577, 2019 .
[8] Feng G, Ma L, Tan X. Line Model-Based Drift Estimation Method for Indoor Monocular Localization [C]//2018 IEEE 88th Vehicular Technology Conference (VTC-Fall). IEEE, 2019 : 1-5.
Harbin Institute of Technology; VTC is a twice-yearly wireless communication conference
[9] Castro G, Nitsche M A, Pire T, et al. Efficient on-board Stereo SLAM through constrained-covisibility strategies [J]. Robotics and Autonomous Systems, 2019.
PhD student at the University of Buenos Aires, Argentina; by an author of stereo PTAM (S-PTAM)
[10] Canovas B, Rombaut M, Nègre A, et al. A Coarse and Relevant 3D Representation for Fast and Lightweight RGB-D Mapping [C]//VISAPP 2019-International Conference on Computer Vision Theory and Applications. 2019.
应用于快速粗糙的 RGB-D 建图的粗糙的相关 3D 表示
格勒诺布尔计算机科学实验室
[11] Ziquan Lan, Zi Jian Yew, Gim Hee Lee. Robust Point Cloud Based Reconstruction of Large-Scale Outdoor Scenes [C], ICRA 2019 .
鲁棒的室外大场景点云重建
新加坡国立 代码开源 (还未放出)
[12] Shi T, Shen S, Gao X, et al. Visual Localization Using Sparse Semantic 3D Map [J]. arXiv preprint arXiv:1904.03803, 2019 .
利用稀疏语义三维地图 进行可视化定位
中国科学院自动化研究所模式识别国家重点实验室
[13] Yang S, Kuang Z F, Cao Y P, et al. Probabilistic Projective Association and Semantic Guided Relocalization for Dense Reconstruction [C]//ICRA 2019 .
稠密重建的概率投影关联 和语义引导重定位
清华大学 谷歌学术
[14] Xiao L, Wang J, Qiu X, et al. Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment [J]. Robotics and Autonomous Systems, 2019 .
基于动态环境深度学习的单目 SLAM
中国科学院电子研究所传感器技术国家重点实验室 期刊 中科院三区 JCR Q2
[15] Zhou L, Wang S, Ye J, et al. Do not Omit Local Minimizer: a Complete Solution for Pose Estimation from 3D Correspondences [J]. arXiv preprint arXiv:1904.01759, 2019 .
不要忽略局部最小化:一种完整的 3D 对应姿态估计解决方案
CMU
[16] Miraldo P, Saha S, Ramalingam S. Minimal Solvers for Mini-Loop Closures in 3D Multi-Scan Alignment [C]. CVPR 2019 .
2. AR/MR
[17] Piao J C, Kim S D. Real-time Visual-Inertial SLAM Based on Adaptive Keyframe Selection for Mobile AR Applications [J]. IEEE Transactions on Multimedia, 2019.
Yanbian University, China; Yonsei University, South Korea; journal: CAS Zone 2, JCR Q2, IF 4.368
[18] Puigvert J R, Krempel T, Fuhrmann A. Localization Service Using Sparse Visual Information Based on Recent Augmented Reality Platforms [C]//2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 415-416.
Cologne Intelligence; ISMAR: top conference in AR
[19] Zillner J, Mendez E, Wagner D. Augmented Reality Remote Collaboration with Dense Reconstruction [C]//2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 38-39.
DAQRI smart glasses: https://daqri.com/products/smart-glasses/; ISMAR: CCF Class B conference in computer graphics and multimedia
[20] Grandi J, Debarba H, Maciel A. Characterizing Asymmetric Collaborative Interactions in Virtual and Augmented Realities [C]//IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 2019.
Federal University of Rio Grande do Sul, Brazil; demo video
[21] Chen Y S, Lin C Y. Virtual Object Replacement Based on Real Environments: Potential Application in Augmented Reality Systems [J]. Applied Sciences, 2019, 9(9): 1797.
National Taiwan University of Science and Technology; Applied Sciences is an open-access journal
[22] Ferraguti F, Pini F, Gale T, et al. Augmented reality based approach for on-line quality assessment of polished surfaces [J]. Robotics and Computer-Integrated Manufacturing, 2019, 59: 158-167.
University of Modena, Italy; CAS Zone 2, JCR Q1, IF 4.031
[23] Wang J, Liu H, Cong L, et al. CNN-MonoFusion: Online Monocular Dense Reconstruction Using Learned Depth from Single View [C]//2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019: 57-62.
NetEase AR Research; ISMAR: top conference in AR, CCF Class B in computer graphics and multimedia
[24] He Z, Rosenberg K T, Perlin K. Exploring Configuration of Mixed Reality Spaces for Communication [C]//Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019: LBW0222.
New York University; CHI: CCF Class A conference in human-computer interaction and pervasive computing
3. Semantic/Deep SLAM
[25] von Stumberg L, Wenzel P, Khan Q, et al. GN-Net: The Gauss-Newton Loss for Deep Direct SLAM [J]. arXiv preprint arXiv:1904.11932, 2019.
[26] Wang R, Yang N, Stueckler J, et al. DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation [J]. arXiv preprint arXiv:1904.10097, 2019.
[27] Feng M, Hu S, Ang M, et al. 2D3D-MatchNet: Learning to Match Keypoints Across 2D Image and 3D Point Cloud [J]. arXiv preprint arXiv:1904.09742, 2019.
National University of Singapore
[28] Wei Y, Liu S, Zhao W, et al. Conditional Single-view Shape Generation for Multi-view Stereo Reconstruction [J]. arXiv preprint arXiv:1904.06699, 2019.
Tsinghua University; code open-sourced
[29] Behl A, Paschalidou D, Donné S, et al. Pointflownet: Learning representations for rigid motion estimation from point clouds [C]. CVPR 2019.
University of Tübingen; code to be open-sourced (not yet released)
[30] Xue F, Wang X, Li S, et al. Beyond Tracking: Selecting Memory and Refining Poses for Deep Visual Odometry [J]. arXiv preprint arXiv:1904.01892, 2019.
4. Learning Others
[31] Hou J, Dai A, Nießner M. 3D-SIC: 3D Semantic Instance Completion for RGB-D Scans [J]. arXiv preprint arXiv:1904.12012, 2019.
[32] Phalak A, Chen Z, Yi D, et al. DeepPerimeter: Indoor Boundary Estimation from Posed Monocular Sequences [J]. arXiv preprint arXiv:1904.11595, 2019.
[33] Yang Z, Liu S, Hu H, et al. RepPoints: Point Set Representation for Object Detection [J]. arXiv preprint arXiv:1904.11490, 2019.
[34] Jiang S, Xu T, Li J, et al. Foreground Feature Enhancement for Object Detection [J]. IEEE Access, 2019, 7: 49223-49231.
[35] Zakharov S, Shugurov I, Ilic S. DPOD: 6D Pose Object Detector and Refiner [J]. 2019.
Technical University of Munich; Siemens
[36] Liu C, Yang Z, Xu F, et al. Image Generation from Bounding Box-represented Semantic Labels [J]. Computers & Graphics, 2019.
Tsinghua University; Computers & Graphics: CAS Zone 4, JCR Q3, IF 1.352
[37] Qiu Z, Yan F, Zhuang Y, et al. Outdoor Semantic Segmentation for UGVs Based on CNN and Fully Connected CRFs [J]. IEEE Sensors Journal, 2019.
Dalian University of Technology; point cloud processing code; CAS Zone 3, JCR Q2, IF 2.698
[38] Ma X, Wang Z, Li H, et al. Accurate Monocular 3D Object Detection via Color-Embedded 3D Reconstruction for Autonomous Driving [J]. arXiv preprint arXiv:1903.11444, 2019.
Dalian University of Technology
[39] Sindagi V A, Zhou Y, Tuzel O. MVX-Net: Multimodal VoxelNet for 3D Object Detection [J]. arXiv preprint arXiv:1904.01649, 2019.
Johns Hopkins University, USA; personal homepage
[40] Li J, Lee G H. USIP: Unsupervised Stable Interest Point Detection from 3D Point Clouds [J]. arXiv preprint arXiv:1904.00229, 2019.
National University of Singapore; code to be open-sourced (not yet released)
5. Event
6. Sensor Fusion
[43] Xiao Y, Ruan X, Chai J, et al. Online IMU Self-Calibration for Visual-Inertial Systems [J]. Sensors, 2019, 19(7): 1624.
Beijing University of Technology; Sensors is an open-access journal
[44] Eckenhoff K, Geneva P, Huang G. Closed-form preintegration methods for graph-based visual-inertial navigation [J]. The International Journal of Robotics Research, 2018.
University of Delaware; code open-sourced
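The core idea behind IMU preintegration (as in [44]) is that the measurements between two keyframes can be accumulated into a relative-motion term that does not depend on the global state, so it need not be re-integrated at every optimizer iteration. Below is a minimal, gyro-only sketch of the preintegrated rotation; it is illustrative only, and the paper's closed-form treatment additionally covers velocity, position, and bias terms, which are omitted here:

```python
import numpy as np

def so3_exp(w):
    """Rotation matrix for an axis-angle increment w (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate_gyro(gyro_samples, dt):
    """Accumulate the relative rotation between two keyframes from gyro rates.

    The result depends only on the IMU measurements, not on the global pose,
    which is the property preintegration methods exploit in graph optimization.
    """
    R = np.eye(3)
    for w in gyro_samples:
        R = R @ so3_exp(np.asarray(w, dtype=float) * dt)
    return R
```

For example, ten samples rotating at 0.1 rad/s about z with dt = 1 s accumulate to a 1 rad rotation about z, regardless of where the first keyframe sits in the world frame.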
[45] Joshi B, Rahman S, Kalaitzakis M, et al. Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain [J]. arXiv preprint arXiv:1904.02215, 2019.
[46] Xia L, Meng Q, Chi D, et al. An Optimized Tightly-Coupled VIO Design on the Basis of the Fused Point and Line Features for Patrol Robot Navigation [J]. Sensors, 2019, 19(9): 2004.
Northeast Electric Power University; Sensors is an open-access journal
[47] Ye H, Chen Y, Liu M. Tightly Coupled 3D Lidar Inertial Odometry and Mapping [J]. arXiv preprint arXiv:1904.06993, 2019.
[48] Usenko V, Demmel N, Schubert D, et al. Visual-Inertial Mapping with Non-Linear Factor Recovery [J]. arXiv preprint arXiv:1904.06504, 2019.
[49] Qiu X, Zhang H, Fu W, et al. Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End [J]. Sensors, 2019, 19(8): 1941.
7. Others
April 2019 Paper Update (17 papers)
[1] Rambach J, Lesur P, Pagani A, et al. SlamCraft: Dense Planar RGB Monocular SLAM [C]. International Conference on Machine Vision Applications (MVA), 2019.
German Research Center for Artificial Intelligence (DFKI); author homepage; Google Scholar; augmented reality application
[2] Liu C, Yang J, Ceylan D, et al. Planenet: Piece-wise planar reconstruction from a single rgb image [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2018: 2579-2588.
[3] Weng X, Kitani K. Monocular 3D Object Detection with Pseudo-LiDAR Point Cloud [J]. arXiv preprint arXiv:1903.09847, 2019.
CMU; Google Scholar
[4] Hassan M, Hemayed E. A Fast Linearly Back-End SLAM for Navigation Based on Monocular Camera [J]. International Journal of Civil Engineering and Technology, 2018: 627-645.
Fayoum University, Egypt
[5] Chen B, Yuan D, Liu C, et al. Loop Closure Detection Based on Multi-Scale Deep Feature Fusion [J]. Applied Sciences, 2019, 9(6): 1120.
School of Automation, Central South University
[6] Ling Y, Shen S. Real-time dense mapping for online processing and navigation [J]. Journal of Field Robotics.
[7] Lin C H, Wang O, et al. Photometric Mesh Optimization for Video-Aligned 3D Object Reconstruction [C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[8] Tang F, Li H, Wu Y. FMD Stereo SLAM: Fusing MVG and Direct Formulation Towards Accurate and Fast Stereo SLAM [J]. 2019.
Institute of Automation, Chinese Academy of Sciences; National Laboratory of Pattern Recognition; Yihong Wu's group
[9] Duff T, Kohn K, Leykin A, et al. PLMP-Point-Line Minimal Problems in Complete Multi-View Visibility [J]. arXiv preprint arXiv:1903.10008, 2019.
[10] Lee S H, Civera J. Loosely-Coupled Semi-Direct Monocular SLAM [J]. IEEE Robotics and Automation Letters, 2019.
[12] Li J, Yang B, Chen D, Wang N, Zhang G, Bao H. Survey and Evaluation of Monocular Visual-Inertial SLAM Algorithms for Augmented Reality [J]. Virtual Reality & Intelligent Hardware, 2019.
[13] Speciale P, Schonberger J L, Kang S B. Privacy Preserving Image-Based Localization [J]. 2019.
Image-based localization using line clouds
ETH Zurich; Microsoft; author homepage; project page
Speciale P, Pani Paudel D, Oswald M R, et al. Consensus maximization with linear matrix inequality constraints [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2017: 4941-4949. [PDF] [Code] [Video] [Project Page]
[14] Li M, Zhang W, Shi Y, et al. Bionic Visual-based Data Conversion for SLAM [C]//2018 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2018: 1607-1612.
Beijing Institute of Technology, Key Laboratory of Biomimetic Robots and Systems, Ministry of Education
[15] Cheng J, Sun Y, Chi W, et al. An Accurate Localization Scheme for Mobile Robots Using Optical Flow in Dynamic Environments [C]//2018 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2018: 723-728.
The Chinese University of Hong Kong; lab homepage
[16] Zhang Z, Scaramuzza D. Beyond Point Clouds: Fisher Information Field for Active Visual Localization [C]//IEEE International Conference on Robotics and Automation (ICRA), 2019.
[17] Younes G, Asmar D, Zelek J. A Unified Formulation for Visual Odometry [J]. arXiv preprint arXiv:1903.04253, 2019.
March 2019 Paper Update (13 papers)
[1] Han L, Gao F, Zhou B, et al. FIESTA: Fast Incremental Euclidean Distance Fields for Online Motion Planning of Aerial Robots [J]. arXiv preprint arXiv:1903.02144, 2019.
Shaojie Shen's group
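FIESTA's contribution is maintaining the Euclidean distance field incrementally as the map updates; the field itself is simply each cell's distance to the nearest obstacle, which planners query as a collision cost. As a minimal illustration of that quantity (not the paper's incremental algorithm), the sketch below recomputes a 2D field by brute force:

```python
import numpy as np

def euclidean_distance_field(occupancy):
    """Distance from every grid cell to its nearest occupied (obstacle) cell.

    Brute force, O(cells * obstacles); FIESTA maintains the same field
    incrementally so that only affected cells are updated on map changes.
    """
    obstacles = np.argwhere(occupancy)                       # (M, 2) coords
    assert len(obstacles) > 0, "need at least one obstacle"
    ys, xs = np.indices(occupancy.shape)
    cells = np.stack([ys, xs], axis=-1).reshape(-1, 1, 2)    # (N, 1, 2)
    dists = np.linalg.norm(cells - obstacles[None], axis=-1) # (N, M)
    return dists.min(axis=1).reshape(occupancy.shape)
```

An aerial robot planning through this grid would penalize trajectories whose cells have small field values, which is why fast updates of the field matter for online replanning.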
[2] Multimodal Semantic SLAM with Probabilistic Data Association [C]. ICRA 2019.
MIT marine robotics group
[3] Zhang F, Rui T, Yang C, et al. LAP-SLAM: A Line-Assisted Point-Based Monocular VSLAM [J]. Electronics, 2019, 8(2): 243.
[4] Zhang H, Jin L, Zhang H, et al. A Comparative Analysis of Visual-Inertial SLAM for Assisted Wayfinding of the Visually Impaired [C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 210-217.
Virginia Commonwealth University
[5] Chen Z, Liu L. Creating Navigable Space from Sparse Noisy Map Points [J]. arXiv preprint arXiv:1903.01503, 2019.
[6] Antigny N, Uchiyama H, Servières M, et al. Solving monocular visual odometry scale factor with adaptive step length estimates for pedestrians using handheld devices [J]. Sensors, 2019, 19(4): 953.
[7] Zhou D, Dai Y, Li H. Ground Plane based Absolute Scale Estimation for Monocular Visual Odometry [J]. arXiv preprint arXiv:1903.00912, 2019.
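Ground-plane scale recovery, as in [7], exploits the fact that monocular VO is only determined up to scale: if the camera's true height above the road is known (e.g. a fixed vehicle mounting), comparing it with the estimated camera-to-ground distance in VO units yields the metric scale factor. A one-line sketch of that ratio (names are illustrative; the paper's contribution lies in robustly estimating the ground plane itself):

```python
def scale_from_ground_plane(camera_height_m, plane_distance_vo_units):
    """Metric scale factor for monocular VO from a known camera height.

    camera_height_m: true camera height above the ground plane (meters).
    plane_distance_vo_units: estimated camera-to-plane distance in the
    arbitrary units of the up-to-scale monocular reconstruction.
    """
    return camera_height_m / plane_distance_vo_units
```

Multiplying all estimated translations by this factor converts the up-to-scale trajectory into meters.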
[8] Duong N D, Kacete A, Soladie C, et al. Accurate Sparse Feature Regression Forest Learning for Real-Time Camera Relocalization [C]//2018 International Conference on 3D Vision (3DV). IEEE, 2018: 643-652.
[9] Patra S, Gupta K, Ahmad F, et al. EGO-SLAM: A Robust Monocular SLAM for Egocentric Videos [C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 31-40.
[10] Rosinol A, Sattler T, Pollefeys M, et al. Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities [J]. arXiv preprint arXiv:1903.01067, 2019.
MIT Laboratory for Information and Decision Systems; project page
[11] Wang Z. Structure from Motion with Higher-level Environment Representations [J]. 2019.
Australian National University; master's thesis
[12] Vakhitov A, Lempitsky V. Learnable Line Segment Descriptor for Visual SLAM [J]. IEEE Access, 2019.
Built on ORB-SLAM2; Samsung AI Center, Moscow
[13] Grinvald M, Furrer F, Novkovic T, et al. Volumetric Instance-Aware Semantic Mapping and 3D Object Discovery [J]. arXiv preprint arXiv:1903.00268, 2019.
Built on Mask R-CNN; ETH Zurich