小木虫 --- an academic research platform used by 6 million researchers
On modifying the data file in LAMMPS
I'm new to LAMMPS and need some help. I converted a SiC model from Materials Studio into a data file, but the read_data command keeps failing with the error "No bonds allowed with this atom style". My atom_style is atomic. How do I fix this? Could someone point me in the right direction?
Thank you for the advice. Setting atom_style to full did make that error go away, but now I get the error "Bonds defined but no bond types". The manual explains it as "The data file header lists bonds but no bond types" -- but how do I add bond types to the data file?
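(Editor's note: in a LAMMPS data file, bond types are declared by an "N bond types" line in the header, given force-field parameters in a Bond Coeffs section, and referenced by the second column of the Bonds section. A minimal skeleton, with placeholder counts and the K/r0 values borrowed from later in this thread, looks like:)

```
2 atoms
1 bonds
1 atom types
1 bond types

Bond Coeffs

1  238.0  1.809     # bond type 1: K (energy), r0 (equilibrium distance)

Bonds

1  1  1  2          # bond-ID  bond-type  atom-1  atom-2
```

Each bond line's second column must name a type between 1 and the "bond types" count declared in the header, and each declared type needs a Bond Coeffs entry (or a bond_coeff command in the input script).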
Post the first 20 lines or so of your data file.
LAMMPS data file. msi2lmp v3.9.6 / 11 Sep 2014 / CGCMM for SiC
     64 atoms
    128 bonds
    384 angles
   1152 dihedrals
      0 impropers
      2 atom types
      1 bond types
      2 angle types
      1 dihedral types
   -1.     4. xlo xhi
   -0.     4. ylo yhi
   -0.    19. zlo zhi
   -3.     0.     0. xy xz yz
Masses

   1  12.011150 # c
   2  28.086000 # si
Pair Coeffs # lj/cut/coul/long

   1   0.   3. # c
   2   0.   4. # si
Bond Coeffs # harmonic

   1   238.0000     1.8090 # c-si

Angle Coeffs # harmonic

   1    42.2000   122.5000 # si-c-si
   2    44.4000   113.5000 # c-si-c

Dihedral Coeffs # harmonic

   1    -1.7000   1   3 # si-c-si-c
Atoms # full

      1    1   1   0.000000   -0.    0.    1.   0  0  0 # c
      2    1   2   0.000000   -0.    0.    0.   0  0  0 # si
      3    1   1   0.000000    0.   -0.    6.   0  0  0 # c
      4    1   2   0.000000   -0.   -0.    5.   0  0  0 # si
      5    1   1   0.000000    1.    0.    4.   0  0  0 # c
      6    1   2   0.000000    1.    0.    2.   0  0  0 # si
      7    1   1   0.000000    0.    1.    9.   0  0  0 # c
      8    1   2   0.000000    0.    1.    7.   0  0  0 # si
      9    1   1   0.000000    3.    0.    1.   0  0  0 # c
     10    1   2   0.000000    3.    0.    0.   0  0  0 # si
     11    1   1   0.000000    3.   -0.    6.   0  0  0 # c
     12    1   2   0.000000    3.   -0.    5.   0  0  0 # si
     13    1   1   0.000000   -1.    0.    4.   1  0  0 # c
     14    1   2   0.000000   -1.    0.    2.   1  0  0 # si
     15    1   1   0.000000   -3.    1.    9.   1  0  0 # c
     16    1   2   0.000000   -3.    1.    7.   1  0  0 # si
     17    1   1   0.000000   -1.    2.    1.   0  0  0 # c
     18    1   2   0.000000   -1.    2.    0.   0  0  0 # si
     19    1   1   0.000000   -1.    2.    6.   0  0  0 # c
     20    1   2   0.000000   -1.    2.    5.   0  0  0 # si
     21    1   1   0.000000   -0.    3.    4.   0  0  0 # c
     22    1   2   0.000000   -0.    3.    2.   0  0  0 # si
     23    1   1   0.000000   -1.    4.    9.   0  0  0 # c
     24    1   2   0.000000   -1.    4.    7.   0  0  0 # si
     25    1   1   0.000000    1.    2.    1.   0  0  0 # c
     26    1   2   0.000000    1.    2.    0.   0  0  0 # si
     27    1   1   0.000000    1.    2.    6.   0  0  0 # c
     28    1   2   0.000000    1.    2.    5.   0  0  0 # si
     29    1   1   0.000000   -3.    3.    4.   1  0  0 # c
     30    1   2   0.000000   -3.    3.    2.   1  0  0 # si
     31    1   1   0.000000   -4.    4.    9.   1  0  0 # c
     32    1   2   0.000000   -4.    4.    7.   1  0  0 # si
     33    1   1   0.000000   -0.    0.   11.   0  0  0 # c
     34    1   2   0.000000    0.   -0.   10.   0  0  0 # si
     35    1   1   0.000000    0.    0.   16.   0  0  0 # c
     36    1   2   0.000000    0.   -0.   15.   0  0  0 # si
     37    1   1   0.000000    1.    0.   14.   0  0  0 # c
     38    1   2   0.000000    1.    0.   12.   0  0  0 # si
     39    1   1   0.000000   -0.    1.   19.   0  0  0 # c
     40    1   2   0.000000   -0.    1.   17.   0  0  0 # si
     41    1   1   0.000000    3.    0.   11.   0  0  0 # c
     42    1   2   0.000000    3.   -0.   10.   0  0  0 # si
     43    1   1   0.000000    3.   -0.   16.   0  0  0 # c
     44    1   2   0.000000    3.   -0.   15.   0  0  0 # si
     45    1   1   0.000000   -1.    0.   14.   1  0  0 # c
     46    1   2   0.000000   -1.    0.   12.   1  0  0 # si
     47    1   1   0.000000   -3.    1.   19.   1  0  0 # c
     48    1   2   0.000000   -3.    1.   17.   1  0  0 # si
     49    1   1   0.000000   -1.    2.   11.   0  0  0 # c
     50    1   2   0.000000   -1.    2.   10.   0  0  0 # si
     51    1   1   0.000000   -1.    2.   16.   0  0  0 # c
     52    1   2   0.000000   -1.    2.   15.   0  0  0 # si
     53    1   1   0.000000    0.    3.   14.   0  0  0 # c
     54    1   2   0.000000    0.    3.   12.   0  0  0 # si
     55    1   1   0.000000   -1.    4.   19.   0  0  0 # c
     56    1   2   0.000000   -1.    4.   17.   0  0  0 # si
     57    1   1   0.000000    1.    2.   11.   0  0  0 # c
     58    1   2   0.000000    1.    2.   10.   0  0  0 # si
     59    1   1   0.000000    1.    2.   16.   0  0  0 # c
     60    1   2   0.000000    1.    2.   15.   0  0  0 # si
     61    1   1   0.000000   -3.    3.   14.   1  0  0 # c
     62    1   2   0.000000   -3.    3.   12.   1  0  0 # si
     63    1   1   0.000000   -4.    4.   19.   1  0  0 # c
     64    1   2   0.000000   -4.    4.   17.   1  0  0 # si
Bonds

      1   1     1     2
      2   1     1     6
      3   1     1    14
      4   1     1    30
      5   1    39     2
      6   1    63     2
      7   1    55     2
      8   1     3     4
      9   1     3    32
     10   1     3    24
     11   1     3     8
     12   1    29     4
     13   1     5     4
     14   1    13     4
     15   1     5     6
     16   1     5    28
     17   1     5    12
     18   1     9     6
     19   1    25     6
     20   1     7     8
     21   1     7    34
     22   1     7    58
     23   1     7    50
     24   1    27     8
     25   1    19     8
     26   1     9    10
     27   1     9    14
     28   1     9    22
     29   1    47    10
     30   1    55    10
     31   1    63    10
     32   1    11    12
     33   1    11    24
     34   1    11    32
     35   1    11    16
     36   1    21    12
     37   1    13    12
     38   1    13    14
     39   1    13    20
     40   1    17    14
     41   1    15    16
     42   1    15    42
     43   1    15    50
     44   1    15    58
     45   1    19    16
     46   1    27    16
     47   1    17    18
     48   1    17    22
     49   1    17    30
     50   1    55    18
     51   1    47    18
     52   1    39    18
     53   1    19    20
     54   1    19    24
     55   1    21    20
     56   1    29    20
     57   1    21    22
     58   1    21    28
     59   1    25    22
     60   1    23    24
     61   1    23    50
     62   1    23    42
     63   1    23    34
     64   1    25    26
     65   1    25    30
     66   1    63    26
     67   1    39    26
     68   1    47    26
     69   1    27    28
     70   1    27    32
     71   1    29    28
     72   1    29    30
     73   1    31    32
     74   1    31    58
     75   1    31    34
     76   1    31    42
     77   1    33    34
     78   1    33    38
     79   1    33    46
     80   1    33    62
     81   1    35    36
     82   1    35    64
     83   1    35    56
     84   1    35    40
     85   1    61    36
     86   1    37    36
     87   1    45    36
     88   1    37    38
     89   1    37    60
     90   1    37    44
     91   1    41    38
     92   1    57    38
     93   1    39    40
     94   1    59    40
     95   1    51    40
     96   1    41    42
     97   1    41    46
     98   1    41    54
     99   1    43    44
    100   1    43    56
    101   1    43    64
    102   1    43    48
    103   1    53    44
    104   1    45    44
    105   1    45    46
    106   1    45    52
    107   1    49    46
    108   1    47    48
    109   1    51    48
    110   1    59    48
    111   1    49    50
    112   1    49    54
    113   1    49    62
    114   1    51    52
    115   1    51    56
    116   1    53    52
    117   1    61    52
    118   1    53    54
    119   1    53    60
    120   1    57    54
    121   1    55    56
    122   1    57    58
    123   1    57    62
    124   1    59    60
    125   1    59    64
    126   1    61    60
    127   1    61    62
    128   1    63    64
Your data file looks fine. The problem is in your input script: you need to define bond_style and angle_style there.
I added bond_style and angle_style to my input script, but I still get the same error: "Bonds defined but no bond types".
Here is the part where I define them:
pair_style      hybrid lj/cut/coul/long 0.15
bond_style      hybrid harmonic
angle_style     hybrid cosine
dihedral_style  harmonic
read_data       data.sic add merge
Thank you very much for your help.
Post your whole input file, please.
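(Editor's note: for reference, here is a minimal input-script sketch consistent with the style labels written into the data file above -- atom_style full, an lj/cut/coul/long pair style, and harmonic bond, angle, and dihedral styles. The cutoff and kspace accuracy values are illustrative placeholders, not values from this thread:)

```
units           real
atom_style      full

pair_style      lj/cut/coul/long 10.0    # placeholder cutoff; 0.15 in the thread looks too small
bond_style      harmonic
angle_style     harmonic                 # the data file labels its Angle Coeffs "harmonic"
dihedral_style  harmonic
kspace_style    pppm 1.0e-4              # a long-range solver is required by coul/long

read_data       data.sic                 # plain read, without "add merge"
```

Two things worth double-checking against the snippet posted earlier: hybrid with a single sub-style is legal but unnecessary, and the script uses angle_style cosine while the data file's Angle Coeffs are labeled harmonic with two parameters each.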
扫描下载送金币&p&谢邀。&/p&&p&在回答之前,先简单介绍一下,作为一个快速成长的AI初创企业,竹间智能已经拥有一百多位AI工程师与研发人员,深度学习工程师也是我们招聘的主要岗位之一。&/p&&p&为了回答这个问题,我们专门请教了&i&&b&竹间智能深度学习科学家赵宁远&/b&&/i&,请他分享一下面试“深度学习工程师”时的标准与经验,可能有主观的地方,但都是干货,希望对想进入AI领域的朋友有所帮助。&/p&&p&(打个硬广:文末有竹间智能正在招聘的一些岗位,有需求的朋友可以关注一下)&/p&&p&&i&---------------------------------------------------正式回答分割线---------------------------------------------------------&/i&&/p&&p&深度学习虽然并不是新鲜事物,尤其近几年发展迅速,但还远称不上是一个“成熟”领域。就算是“传统”机器学习,其中所包含的思想和方法也千差万别。因此,我们并不强求面试者必须要懂哪些tricks或者某类特定的方法。&b&坦白说没有面试官(在我们看来,就算是Hinton本人)能对机器学习/深度学习的每个领域了如指掌,所以我们会尽量避免用一些自己主观的理解去考别人。&/b&&/p&&p&因此,&b&我们的原则是希望面试者有比较好的机器学习基础,比较优秀的编程能力,以及分析和解决实际问题的能力(或者说,critical thinking)&/b&。当然,对深度学习的理解和实际经验会是一个加分项,但是crash course或者用于炫技的冷知识并不能取代扎实的基本功(比方说把Andrew Ng的机器学习课程吃透,将Deep Learning一本书学好)。&/p&&p&冒着听起来过分自大的风险,我想分享一下我们关心的一些问题:&/p&&p&1. &b&在使用一种方法(无论是深度学习或是“传统”方法)的时候,&/b&面试者&b&对它的优点和局限性是否都有所认识&/b&。在面对不同的问题的时候,我们希望面试者可以通过独立思考做出一个informed choice,而不是因为“上周看了一篇paper是这样做的”或者“BAT/FLAG就是这样做的”。&/p&&p&2. 面试者&b&是否有完整的机器学习项目经验。&/b&这意味着从理解需求开始,到收集数据、分析数据,确定学习目标,选择算法、实现、测试并且改进的完整流程。因为我们希望面试者对于机器学习在实际业务中所带来的影响有正确的判断能力。当然,如果是可以通过python/或是结合Java/Scala来完成所有这些事情就更好啦。&/p&&p&3. 面试者&b&是否具备基本的概率/统计/线性代数的知识&/b&——数学期望,CLT,Markov Chain,normal/student’s t distribution(只是一些例子),或是PCA/SVD这些很基础的东西。另外(最理想的),希望面试者&b&对于高维空间的一些特性有直觉上的认识&/b&。这部分并不是强行要求背公式,只要有理解就可以。毕竟这不是在面试数学系的教职——我们只是希望面试者可以较好地理解论文中的算法,并且正确地实现、最好可以做出改进;另外,在深度学习的调参过程中,比较好的数学sense会有助于理解不同的超参数对于结果的影响。&/p&&p&4. 面试者&b&是否有比较好的编程能力,代码习惯和对计算效率的分析能力&/b&(这个一般会按照最基本的算法工程师的要求来看,从略)&/p&&p&5. 面试者&b&在机器学习方面,对基本的概念是否有所了解&/b&(譬如说,线性回归对于数据的假设是怎样的),&b&以及对于常见的问题有一定的诊断能力&/b&(如果训练集的正确率一直上不去,可能会出现哪些问题——在这里,我们希望面试者能够就实际情况,做一些合理的假设,然后将主要的思考逻辑描述清楚)。我们会根据面试者所掌握的方法再比较深入地问一些问题,而且我们希望面试者不仅仅是背了一些公式/算法,或是在博客/知乎上看到了一些名词(比如VC维度,KKT条件,KL divergence),实际上却不理解背后的理论基础(有时候这些问题确实很难,&b&但“知道自己不知道”和“不知道自己不知道”是差别很大的&/b&)。打个比方,如果面试者提到核技巧,那么给到一个实际的线性不可分的数据(譬如XOR,或者Swiss Roll),面试者能清楚地设计,并通过实际计算证明某个kernel可以将此数据转化到一个高维并线性可分的空间吗?&/p&&p&6. 
在深度学习方面,我们希望面试者&b&具备神经网络的基础知识(BP),以及常见的目标函数,激活函数和优化算法&/b&。在此基础上,对于常见的CNN/RNN网络,我们当然希望面试者能够理解它们各自的参数代表什么,比较好的初始参数,BP的计算,以及常见超参数的调整策略——这些相信Ian Goodfellow的Deep Learning一书都有非常好的介绍——我们也希望面试者能够在具体领域有利用流行框架(我们主要用tensorflow——但是这并不是必须的)搭建实际应用的经验。当然,我们希望面试者读过本领域的paper,并且手动验证过他们的想法,并且可以对他们方法的优缺点进行分析。当然,如果面试者有更多兴趣,我们可以探讨更深入的一些问题,比如如何避免陷入鞍点,比如通过引入随机噪音来避免过拟合,比如CNN的参数压缩,比如RNN对于动力系统的建模,比如基于信息理论的模型解释,等等等等,在这些方面,我们是抱着与面试者互相切磋的心态的。&/p&&p&7. 通常上面我们说的都是监督学习,往往结果是回归或分类。当然,也许面试者还精通RL/transfer learning/unsupervised learning这些内容,那么我们可以逐一讨论。&/p&&p&此外,如果面试者应聘的是某一个特定领域的职位,那么当然地,我们会希望他同时具备很强的领域知识,这里就不展开说明了。&/p&&p&&b&在很短的时间内想要全面地了解一个人确实非常困难。调查显示,往往面试官自以为很准的“感觉”,其实是一个糟糕的performance predictor。我希望可以结合相对客观的基础问题,以及面试者自身的特长,来对面试者的理论和实战能力做一个判断。基础扎实,有实战经验并且有一技之长的面试者通常会是非常理想的候选人&/b&。&/p&&p&最后的一点小tip,我真诚地希望面试者对问题有自己的思考和理解、有自己的体系,argument都是能够自洽的。坚持自己的观点并与面试官争论,远远好过为了刷面试而去背诵所谓标准答案(或者来知乎上找面试tips)。&/p&&p&欢迎大家批评指正。&/p&&p&&i&本回答来自竹间智能深度学习科学家 赵宁远&/i&&/p&&br&&p&&i&------------------------------------------以下是竹间深度学习方向的&b&【在招岗位】&/b&--------------------------------&/i&&/p&&p&1. 深度学习工程师&/p&&p&2. 高级算法工程师&/p&&p&3. 自然语言处理工程师&/p&&p&4. 语音识别工程师&/p&&p&5. 机器学习工程师&/p&&p&6. Consulting Engineer&/p&&p&有意向的同学请私信哦~&/p&
谢邀。在回答之前,先简单介绍一下,作为一个快速成长的AI初创企业,竹间智能已经拥有一百多位AI工程师与研发人员,深度学习工程师也是我们招聘的主要岗位之一。为了回答这个问题,我们专门请教了竹间智能深度学习科学家赵宁远,请他分享一下面试“深度学习…
有几个问题时间原因还没来的及展开回答,最近会补上。&br&完整版移步&a href=&/p/& class=&internal&&知乎专栏&/a&&br&另外,求面试!&br&&br&&p&以下问题来自&a href=&///people/f5911fddc7fa5fd74a80d5ce2c12e1a2& data-hash=&f5911fddc7fa5fd74a80d5ce2c12e1a2& class=&member_mention& data-editable=&true& data-title=&@Naiyan Wang& data-hovercard=&p$b$f5911fddc7fa5fd74a80d5ce2c12e1a2&&@Naiyan Wang&/a&&/p&&ul&&li&&b&CNN最成功的应用是在CV,那为什么NLP和Speech的很多问题也可以用CNN解出来?为什么AlphaGo里也用了CNN?这几个不相关的问题的相似性在哪里?CNN通过什么手段抓住了这个共性?&/b&&/li&&ul&&li&&a href=&///?target=https%3A//www.researchgate.net/publication/_Deep_Learning& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Deep Learning -Yann LeCun, Yoshua Bengio & Geoffrey Hinton&i class=&icon-external&&&/i&&/a&&/li&&li&&b&&a href=&///?target=https%3A///presentation/d/1TVixw6ItiZ8igjp6U17tcgoFrLSaHWQmMOwjlgQY9co/pub%3Fslide%3Did.p& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Learn TensorFlow and deep learning, without a Ph.D.&i class=&icon-external&&&/i&&/a&&/b&&br&&/li&&li&&b&&a href=&///?target=http%3A///s/AoN5oNl5t04h& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&The Unreasonable Effectiveness of Deep Learning -LeCun 16 NIPS Keynote&i class=&icon-external&&&/i&&/a&&/b&&br&&/li&&li&以上几个不相关问题的相关性在于,都存在&b&局部与整体的关系&/b&,由低层次的特征经过组合,组成高层次的特征,并且得到不同特征之间的空间相关性。如下图:低层次的直线/曲线等特征,组合成为不同的形状,最后得到汽车的表示。&img src=&/v2-e31f6e3967fe0fab83b3_b.png& data-rawwidth=&2076& data-rawheight=&1332& class=&origin_image zh-lightbox-thumb& width=&2076& data-original=&/v2-e31f6e3967fe0fab83b3_r.png&&&/li&&li&&b&CNN抓住此共性的手段主要有四个:局部连接/权值共享/池化操作/多层次结构。&/b&&/li&&li&局部连接使网络可以提取数据的局部特征;权值共享大大降低了网络的训练难度,一个Filter只提取一个特征,在整个图片(或者语音/文本) 中进行卷积;池化操作与多层次结构一起,实现了数据的降维,将低层次的局部特征组合成为较高层次的特征,从而对整个图片进行表示。如下图:&img src=&/v2-d39d970fae7e40fd99edf3_b.png& data-rawwidth=&1875& data-rawheight=&428& class=&origin_image zh-lightbox-thumb& width=&1875& 
data-original=&/v2-d39d970fae7e40fd99edf3_r.png&&&/li&&li&上图中,&b&如果每一个点的处理使用相同的Filter,则为全卷积,如果使用不同的Filter,则为Local-Conv。&/b&&/li&&/ul&&li&&b&为什么很多做人脸的Paper会最后加入一个Local Connected Conv?&/b&&/li&&ul&&li&&b&&a href=&///?target=https%3A///wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf%3F& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&DeepFace: Closing the Gap to Human-Level Performance in Face Verification&i class=&icon-external&&&/i&&/a&&br&&/b&&/li&&li&以FaceBook DeepFace 为例:&img src=&/v2-e37ce5df4fdfedb7b0a6bf_b.png& data-rawwidth=&1738& data-rawheight=&524& class=&origin_image zh-lightbox-thumb& width=&1738& data-original=&/v2-e37ce5df4fdfedb7b0a6bf_r.png&&&/li&&li&DeepFace 先进行了两次全卷积+一次池化,提取了低层次的边缘/纹理等特征。&/li&&li&后接了3个Local-Conv层,这里是用Local-Conv的原因是,&b&人脸在不同的区域存在不同的特征(眼睛/鼻子/嘴的分布位置相对固定),当不存在全局的局部特征分布时,Local-Conv更适合特征的提取。&/b&&/li&&/ul&&/ul&&br&&p&以下问题来自&a href=&///people/dbc0d9a83f& data-hash=&dbc0d9a83f& class=&member_mention& data-editable=&true& data-title=&@抽象猴& data-hovercard=&p$b$dbc0d9a83f&&@抽象猴&/a&&/p&&ul&&li&&b&什麽样的资料集不适合用深度学习?&/b&&/li&&ul&&li&&b&数据集太小&/b&,数据样本不足时,深度学习相对其它机器学习算法,没有明显优势。&/li&&li&&b&数据集没有局部相关特性,&/b&目前深度学习表现比较好的领域主要是图像/语音/自然语言处理等领域,这些领域的一个共性是局部相关性。图像中像素组成物体,语音信号中音位组合成单词,文本数据中单词组合成句子,这些特征元素的组合一旦被打乱,表示的含义同时也被改变。对于没有这样的局部相关性的数据集,不适于使用深度学习算法进行处理。举个例子:预测一个人的健康状况,相关的参数会有年龄、职业、收入、家庭状况等各种元素,将这些元素打乱,并不会影响相关的结果。&/li&&/ul&&li&&b&对所有优化问题来说, 有没有可能找到比現在已知算法更好的算法?&/b&&/li&&ul&&li&&a href=&///?target=https%3A///share/link%3Fuk%3D%26shareid%3D& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&机器学习-周志华&i class=&icon-external&&&/i&&/a&&/li&&li&&b&没有免费的午餐定理:&img src=&/v2-eeab8ac299840_b.png& data-rawwidth=&1178& data-rawheight=&488& class=&origin_image zh-lightbox-thumb& width=&1178& data-original=&/v2-eeab8ac299840_r.png&&&/b&&/li&&li&对于训练样本(黑点),不同的算法A/B在不同的测试样本(白点)中有不同的表现,这表示:对于一个学习算法A,若它在某些问题上比学习算法 B更好,则必然存在一些问题,在那里B比A好。&br&&/li&&li&也就是说:对于所有问题,无论学习算法A多聪明,学习算法 
B多笨拙,它们的期望性能相同。&/li&&li&但是:没有免费午餐定力假设所有问题出现几率相同,实际应用中,不同的场景,会有不同的问题分布,所以,在&b&优化算法时,针对具体问题进行分析,是算法优化的核心所在。&/b&&/li&&/ul&&br&&li&&b&用贝叶斯机率说明Dropout的原理&/b&&/li&&ul&&li&&b&&a href=&///?target=http%3A//mlg.eng.cam.ac.uk/yarin/PDFs/Dropout_as_a_Bayesian_approximation.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Dropout as a Bayesian Approximation: Insights and Applications&i class=&icon-external&&&/i&&/a&&/b&&/li&&/ul&&/ul&&ul&&li&&b&何为共线性, 跟过拟合有啥关联?&/b&&br&&/li&&ul&&li&&b&&a href=&///?target=https%3A//en.wikipedia.org/wiki/Multicollinearity& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Multicollinearity-Wikipedia&i class=&icon-external&&&/i&&/a&&/b&&br&&/li&&li&共线性:多变量线性回归中,变量之间由于存在高度相关关系而使回归估计不准确。&/li&&li&共线性会造成冗余,导致过拟合。&/li&&li&解决方法:排除变量的相关性/加入权重正则。&/li&&/ul&&br&&li&&b&说明如何用支持向量机实现深度学习(列出相关数学公式)&/b&&/li&&ul&&li&这个不太会,最近问一下老师。&/li&&/ul&&li&&b&广义线性模型是怎被应用在深度学习中?&/b&&/li&&ul&&li&&b&&a href=&///?target=http%3A///2015/01/a-statistical-view-of-deep-learning-i-recursive-glms/& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&A Statistical View of Deep Learning (I): Recursive GLMs&i class=&icon-external&&&/i&&/a&&br&&/b&&/li&&li&深度学习从统计学角度,可以看做&b&递归的广义线性模型&/b&。&/li&&li&广义线性模型相对于经典的线性模型(y=wx+b),核心在于引入了连接函数g(.),形式变为:y=g-1(wx+b)。&/li&&li&深度学习时递归的广义线性模型,神经元的激活函数,即为广义线性模型的链接函数。逻辑回归(广义线性模型的一种)的Logistic函数即为神经元激活函数中的Sigmoid函数,很多类似的方法在统计学和神经网络中的名称不一样,容易引起初学者(这里主要指我)的困惑。下图是一个对照表:&img src=&/v2-29d9d4e71c2f3e_b.png& data-rawwidth=&952& data-rawheight=&1366& class=&origin_image zh-lightbox-thumb& width=&952& data-original=&/v2-29d9d4e71c2f3e_r.png&&&/li&&/ul&&/ul&&ul&&li&&b&什麽造成梯度消失问题? 
推导一下&/b&&/li&&ul&&li&&b&&a href=&///?target=https%3A///%40karpathy/yes-you-should-understand-backprop-e2f06eab496b%23.urj9svxwg& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Yes you should understand backdrop-Andrej Karpathy&i class=&icon-external&&&/i&&/a&&br&&/b&&/li&&li&&a href=&///?target=https%3A///How-does-the-ReLu-solve-the-vanishing-gradient-problem& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&How does the ReLu solve the vanishing gradient problem?&i class=&icon-external&&&/i&&/a&&br&&/li&&li&神经网络的训练中,通过改变神经元的权重,使网络的输出值尽可能逼近标签以降低误差值,训练普遍使用BP算法,核心思想是,计算出输出与标签间的损失函数值,然后计算其相对于每个神经元的梯度,进行权值的迭代。&/li&&li&梯度消失会造成权值更新缓慢,模型训练难度增加。造成梯度消失的一个原因是,许多激活函数将输出值挤压在很小的区间内,在激活函数两端较大范围的定义域内梯度为0。造成学习停止&img src=&/v2-dffaa8e273c13_b.png& data-rawwidth=&1446& data-rawheight=&540& class=&origin_image zh-lightbox-thumb& width=&1446& data-original=&/v2-dffaa8e273c13_r.png&&&/li&&/ul&&br&&/ul&&br&&p&以下问题来自匿名用户&/p&&ul&&li&&b&Weights Initialization. 不同的方式,造成的后果。为什么会造成这样的结果。&/b&&/li&&ul&&li&&strong&几种主要的权值初始化方法:
lecun_uniform /
&/strong&&strong&glorot_normal / &/strong&&strong&he_normal / batch_normal&/strong&&br&&/li&&li&&strong&lecun_uniform:&a href=&///?target=https%3A//www.researchgate.net/profile/Yann_Lecun/publication/2811922_Efficient_BackProp/links/0deec519dfa1dc2f.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Efficient BackProp&i class=&icon-external&&&/i&&/a&&/strong&&/li&&li&&b&glorot_normal:&/b&&b&&a href=&///?target=http%3A//jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Understanding the difficulty of training deep feedforward neural networks &i class=&icon-external&&&/i&&/a&&/b&&br&&/li&&li&&b&he_normal:&a href=&///?target=https%3A//arxiv.org/pdf/.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification&i class=&icon-external&&&/i&&/a&&/b&&br&&/li&&li&&b&batch_normal:&a href=&///?target=https%3A//arxiv.org/pdf/.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift&i class=&icon-external&&&/i&&/a&&/b&&/li&&/ul&&br&&li&&b&为什么网络够深(Neurons 足够多)的时候,总是可以避开较差Local Optima?&/b&&/li&&ul&&li&&a href=&/& class=&internal&&The Loss Surfaces of Multilayer Networks&/a&&br&&/li&&/ul&&li&&b&Loss. 
有哪些定义方式(基于什么?), 有哪些优化方式,怎么优化,各自的好处,以及解释。&/b&&/li&&ul&&li&&b&Cross-Entropy / MSE / K-L散度&/b&&/li&&/ul&&/ul&&ul&&li&&b&Dropout。 怎么做,有什么用处,解释。&/b&&/li&&ul&&li&&b&&a href=&///?target=https%3A///How-does-the-dropout-method-work-in-deep-learning& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&How does the dropout method work in deep learning?&i class=&icon-external&&&/i&&/a&&br&&/b&&/li&&li&&b&&a href=&///?target=https%3A//arxiv.org/pdf/.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Improving neural networks by preventing co-adaptation of feature detectors&i class=&icon-external&&&/i&&/a&&br&&/b&&/li&&li&&b&&a href=&///?target=https%3A//arxiv.org/pdf/.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&An empirical analysis of dropout in piecewise linear networks&i class=&icon-external&&&/i&&/a&&/b&&/li&&/ul&&li&&b&Activation Function. 选用什么,有什么好处,为什么会有这样的好处。&/b&&/li&&ul&&li&&b&几种主要的激活函数:Sigmond / ReLU /PReLU&/b&&/li&&li&&b&&a href=&///?target=http%3A//jmlr.org/proceedings/papers/v15/glorot11a/glorot11a.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Deep Sparse Rectifier Neural Networks&i class=&icon-external&&&/i&&/a&&br&&/b&&/li&&li&&a href=&///?target=https%3A//arxiv.org/pdf/.pdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification&i class=&icon-external&&&/i&&/a&&/li&&/ul&&/ul&
有几个问题时间原因还没来的及展开回答,最近会补上。 完整版移步 另外,求面试! 以下问题来自CNN最成功的应用是在CV,那为什么NLP和Speech的很多问题也可以用CNN解出来?为什么AlphaGo里也用了CNN?这几个不相关的问题的相似性在哪里…
&img src=&/v2-cdc6c6fbfc73df60d402c_b.jpg& data-rawwidth=&1600& data-rawheight=&1067& class=&origin_image zh-lightbox-thumb& width=&1600& data-original=&/v2-cdc6c6fbfc73df60d402c_r.jpg&&&h2&目录&/h2&&ul&&li&课程&/li&&li&论文&/li&&li&实验室&/li&&li&数据集&/li&&li&开源项目&/li&&/ul&&p&&br&&/p&&h2&课程&/h2&&ul&&li&[Udacity] &a href=&/?target=https%3A///course/self-driving-car-engineer-nanodegree--nd013& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Self-Driving Car Nanodegree Program&i class=&icon-external&&&/i&&/a& - teaches the skills and techniques used by self-driving car teams. Program syllabus can be found &a href=&/?target=https%3A///self-driving-cars/term-1-in-depth-on-udacitys-self-driving-car-curriculum-ffcf46af0c08%23.bfgw9uxd9& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&here&i class=&icon-external&&&/i&&/a&.&/li&&li&[University of Toronto] &a href=&/?target=http%3A//www.cs.toronto.edu/%7Eurtasun/courses/CSC2541/CSC2541_Winter16.html& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&CSC2541 Visual Perception for Autonomous Driving&i class=&icon-external&&&/i&&/a& - A graduate course in visual perception for autonomous driving. The class briefly covers topics in localization, ego-motion estimaton, free-space estimation, visual recognition (classification, detection, segmentation).&/li&&li&[INRIA] &a href=&/?target=https%3A//www.fun-mooc.fr/courses/inria/41005S02/session02/about%3Futm_source%3Dmooc-list& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&Mobile Robots and Autonomous Vehicles&i class=&icon-external&&&/i&&/a& - Introduces the key concepts required to program mobile robots and autonomous vehicles. 
The course presents both formal and algorithmic tools, and for its last week's topics (behavior modeling and learning), it will also provide realistic examples and programming exercises in Python.&/li&&li&[Universty of Glasgow] &a href=&/?target=http%3A//www.gla.ac.uk/coursecatalogue/course/%3Fcode%3DENG5017& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ENG5017 Autonomous Vehicle Guidance Systems&i class=&icon-external&&&/i&&/a& - Introduces the concepts behind autonomous vehicle guidance and coordination and enables students to design and implement guidance strategies for vehicles incorporating planning, optimising and reacting elements.&/li&&li&[David Silver - Udacity] &a href=&/?target=https%3A///self-driving-cars/how-to-land-an-autonomous-vehicle-job-coursework-e7acc2bfe740%23.j5b2kwbso& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&How to Land An Autonomous Vehicle Job: Coursework&i class=&icon-external&&&/i&&/a&David Silver, from Udacity, reviews his coursework for landing a job in self-driving cars coming from a Software Engineering background.&/li&&li&[Stanford] &a href=&/?target=http%3A//stanford.edu/%7Ecpiech/cs221/index.html& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&CS221 Artificial Intelligence: Principles and Techniques&i class=&icon-external&&&/i&&/a& - Contains a simple self-driving project and simulator.&/li&&li&[MIT] &a href=&/?target=http%3A//selfdrivingcars.mit.edu/& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&6.S094: Deep Learning for Self-Driving Cars&i class=&icon-external&&&/i&&/a& - an introduction to the practice of deep learning through the applied theme of building a self-driving car.&/li&&li&[MIT] &a href=&/?target=http%3A//duckietown.mit.edu/index.html& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&2.166 Duckietown&i class=&icon-external&&&/i&&/a& - Class about the science of autonomy at the graduate level. 
This is a hands-on, project-focused course focusing on self-driving vehicles and high-level autonomy. The problem: Design the Autonomous Robo-Taxis System for the City of Duckietown.&/li&&/ul&&h2&论文&/h2&&h2&综合&/h2&&ul&&li&[2016] &i&Combining Deep Reinforcement Learning and Safety Based Control for Autonomous Driving&/i&. [&a href=&/?target=https%3A//arxiv.org/abs/& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2015] &i&An Empirical Evaluation of Deep Learning on Highway Driving&/i&. [&a href=&/?target=https%3A//arxiv.org/abs/& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2015] &i&Self-Driving Vehicles: The Challenges and Opportunities Ahead&/i&. [&a href=&/?target=http%3A//dl.acm.org/citation.cfm%3Fid%3D2823464& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2014] &i&Making Bertha Drive - An Autonomous Journey on a Historic Route&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Making-Bertha-Drive-An-Autonomous-Journey-on-a-Ziegler-Bender/ec26d7b1cbd& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2014] &i&Towards Autonomous Vehicles&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Towards-Autonomous-Vehicles-Schwarz-Thomas/bcad21f00dab2b7fa8f& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2013] &i&Towards a viable autonomous driving research platform&/i&. 
[&a href=&/?target=https%3A//www.semanticscholar.org/paper/Towards-a-viable-autonomous-driving-research-Wei-Snider/da5cee7a6eb817bbbfbd8b7122359& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2013] &i&An ontology-based model to determine the automation level of an automated vehicle for co-driving&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/An-ontology-based-model-to-determine-the-Pollard-Morignot/259166dfe15adf229fcdf& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2013] &i&Autonomous Vehicle Navigation by Building 3d Map and by Detecting Human Trajectory Using Lidar&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Autonomous-Vehicle-Navigation-by-Building-3d-Map-Kagami-Thompson/81bd819d032b6ce0bc0be& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2012] &i&Autonomous Ground Vehicles - Concepts and a Path to the Future&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Autonomous-Ground-Vehicles-Concepts-and-a-Path-to-Luettel-Himmelsbach/5e8d51a1f6ba313a38a35af414a00bcfd3b5c0ae& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2011] &i&Experimental Evaluation of Autonomous Driving Based on Visual Memory and Image-Based Visual Servoing&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Experimental-Evaluation-of-Autonomous-Driving-Diosi-Segvic/2aeb9aa42e8e9ec9& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2011] &i&Learning to Drive: Perception for Autonomous Cars&/i&. 
[&a href=&/?target=https%3A//www.semanticscholar.org/paper/Learning-to-Drive-Perception-for-Autonomous-Cars-Stavens-Thrun/be25d7bff3b5928adf6c0a7ff80997& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2010] &i&Toward robotic cars&/i&. [&a href=&/?target=http%3A//dl.acm.org/citation.cfm%3Fid%3D1721679& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2009] &i&Autonomous Driving in Traffic: Boss and the Urban Challenge&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Autonomous-Driving-in-Traffic-Boss-and-the-Urban-Urmson-Baker/2bcb9dc5dbe& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2009] &i&Mapping, navigation, and learning for off-road traversal&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Mapping-navigation-and-learning-for-off-road-Konolige-Agrawal/57db386dfce4fced2160& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&Autonomous Driving in Urban Environments: Boss and the Urban Challenge&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Autonomous-Driving-in-Urban-Environments-Boss-and-Urmson-Anhalt/1c0fb6b1bbfde0f9babcce2bd3bc5bd& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&Caroline: An autonomously driving vehicle for urban environments&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Caroline-An-autonomously-driving-vehicle-for-urban-Rauskolb-Berger/08f4efc78bdc672b17edd5& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&Design of an Urban Driverless Ground Vehicle&/i&. 
[&a href=&/?target=https%3A//www.semanticscholar.org/paper/Design-of-an-Urban-Driverless-Ground-Vehicle-Benenson-Parent/852a672c3d4a2fca3ff7b215d9c096b0be54feb7& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&Little Ben: The Ben Franklin Racing Team's Entry in the 2007 DARPA Urban Challenge&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Little-Ben-The-Ben-Franklin-Racing-Team-s-Entry-in-Bohren-Foote/b6d5e01cdb7b0dda6c36f121c573f0& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&Odin: Team VictorTango's Entry in the DARPA Urban Challenge&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Odin-Team-VictorTango-s-Entry-in-the-DARPA-Urban-Reinholtz-Hong/aaeaa58bedf6fa9b4f55f48cf26209& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&Robosemantics: How Stanley the Volkswagen Represents the World&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Robosemantics-How-Stanley-the-Volkswagen-Parisien-Thagard/9fabfe3da37591ca& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&Team AnnieWAY's autonomous system for the 2007 DARPA Urban Challenge&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/Team-AnnieWAY-s-Autonomous-System-Stiller-Kammel/5d3cce7c77df3af94d57c9& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2008] &i&The MIT-Cornell collision and why it happened&/i&. [&a href=&/?target=https%3A//www.semanticscholar.org/paper/The-MIT-Cornell-collision-and-why-it-happened-Fletcher-Teller/0df4f3efac& class=& wrap external& target=&_blank& rel=&nofollow noreferrer&&ref&i class=&icon-external&&&/i&&/a&]&/li&&li&[2007] &i&Self-Driving Cars - An AI-Robotics Challenge&/i&. 
[ref](https://www.semanticscholar.org/paper/Self-Driving-Cars-An-AI-Robotics-Challenge-Thrun/31d17c77d2ea18f71dfd3030caa94)
- [2007] *2007 DARPA Urban Challenge: The Ben Franklin Racing Team, Team B156 Technical Paper*. [ref](https://www.semanticscholar.org/paper/2007-Darpa-Urban-Challenge-the-Ben-Franklin-Racing-Franklin-Lee/510b0fa02d6bddf197ba)
- [2007] *Team MIT Urban Challenge Technical Report*. [ref](https://www.semanticscholar.org/paper/Team-Mit-Urban-Challenge-Technical-Report-Leonard-Barrett/6ac15ed077dcc)
- [2007] *DARPA Urban Challenge Technical Report, Austin Robot Technology*. [ref](https://www.semanticscholar.org/paper/Darpa-Urban-Challenge-Technical-Report-Executive-Technology-Tuttle/37e78b1bd135df5c5a1fcbf2a8debd260d28a55c)
- [2007] *Spirit of Berlin: An Autonomous Car for the DARPA Urban Challenge, Hardware and Software Architecture*. [ref](https://www.semanticscholar.org/paper/Spirit-of-Berlin-an-Autonomous-Car-for-the-Darpa-Berlin-Rojo/8c96cbc752dfcdeca1fbbf)
- [2007] *Team Case and the 2007 DARPA Urban Challenge*. [ref](https://www.semanticscholar.org/paper/Team-Case-and-the-2007-Darpa-Urban-Challenge-Newman-Lead/e68c745b7807e77ccf67fea325aeeb)
- [2006] *A Personal Account of the Development of Stanley, the Robot That Won the DARPA Grand Challenge*. [ref](https://www.semanticscholar.org/paper/A-Personal-Account-of-the-Development-of-Stanley-Thrun/74a4de58be068d2dc38bb31cf54c3c49bdc0d4e4)
- [2006] *Stanley: The robot that won the DARPA Grand Challenge*. [ref](https://www.semanticscholar.org/paper/Stanley-The-robot-that-won-the-DARPA-Grand-Thrun-Montemerlo/b17fa2ebe7bde0a1b8ebc00ea07f)

## LiDAR and Point Clouds

- [2017] *PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation*.
[ref](https://arxiv.org/abs/) [github](https:///charlesq34/pointnet)
- [2017] *3D Fully Convolutional Network for Vehicle Detection in Point Cloud*. [ref](https://arxiv.org/abs/)
- [2017] *Fast LIDAR-based Road Detection Using Fully Convolutional Neural Networks*. [ref](https://arxiv.org/abs/)
- [2016] *Motion-based Detection and Tracking in 3D LiDAR Scans*. [ref](http://rmatik.uni-freiburg.de/publications/papers/dewan16icra.pdf) [youtube](https://youtu.be/cyufiAyTLE0)
- [2016] *Lidar-based Methods for Tracking and Identification*. [ref](http://publications.lib.chalmers.se/records/fulltext/972.pdf) [youtube](https://youtu.be/_Mhgm2BXdFI)
- [2015] *Efficient L-shape fitting of laser scanner data for vehicle pose estimation*. [ref](http://ieeexplore.ieee.org/abstract/document/7274568/)
- [2014] *Road Detection Using High Resolution LIDAR*. [ref](http://ieeexplore.ieee.org/abstract/document/7007125/)
- [2012] *LIDAR-based 3D Object Perception*. [ref](//www.cs.princeton.edu/courses/archive/spring11/cos598A/pdfs/Himmelsbach08.pdf)
- [2011] *Radar/Lidar sensor fusion for car-following on highways*. [ref](http://ieeexplore.ieee.org/abstract/document/6144918/)
- [2009] *Real-time road detection in 3D point clouds using four directions scan line gradient criterion*. [ref](http:///lidar/hdlpressroom/pdf/Articles/Real-time%20Road%20Detection%20in%2Point%20Clouds%20using%20Four%20Directions%20Scan%20Line%20Gradient%20Criterion.pdf)
- [2006] *Real-time Pedestrian Detection Using LIDAR and Convolutional Neural Networks*. [ref](http://ieeexplore.ieee.org/abstract/document/1689630/)

## Localization and Mapping

- [2016] *MultiCol-SLAM - A Modular Real-Time Multi-Camera SLAM System*. [ref](https://arxiv.org/abs/)
- [2016] *Image Based Camera Localization: an Overview*.
[ref](https://arxiv.org/abs/)
- [2016] *Ubiquitous real-time geo-spatial localization*. [ref](http://dl.acm.org/citation.cfm?id=3005426)
- [2016] *Robust multimodal sequence-based loop closure detection via structured sparsity*. [ref](http://www.roboticsproceedings.org/rss12/p43.pdf)
- [2016] *SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition*. [ref](http://ieeexplore.ieee.org/document/7839213/), [code](https:///hanfeiid/SRAL)
- [2015] *Precise Localization of an Autonomous Car Based on Probabilistic Noise Models of Road Surface Marker Features Using Multiple Cameras*. [ref](https://www.semanticscholar.org/paper/Precise-Localization-of-an-Autonomous-Car-Based-on-Jo-Jo/85f9ddf59c9ed0ce80)
- [2013] *Planar Segments Based Three-dimensional Robotic Mapping in Outdoor Environments*. [ref](https://www.semanticscholar.org/paper/Planar-Segments-Based-Three-dimensional-Robotic-Xiao/ebddeb22f3b5cfe51aaf847ad444e7)
- [2013] *Vehicle Localization along a Previously Driven Route Using Image Database*. [ref](https://www.semanticscholar.org/paper/Vehicle-Localization-along-a-Previously-Driven-Kume-Supp%C3%A9/e5a7ac37d281f1e2a571fc)
- [2012] *Can priors be trusted? Learning to anticipate roadworks*. [ref](https://www.semanticscholar.org/paper/Can-priors-be-trusted-Learning-to-anticipate-Mathibela-Osborne/0a7ecf9ee481a51fc12)
- [2009] *Laser Scanner Based SLAM in Real Road and Traffic Environment*. [ref](https://www.semanticscholar.org/paper/Laser-Scanner-Based-Slam-in-Real-Road-and-Traffic-Garcia-Favrot-Parent/2accb1d9f7ce3f08aa1cde735dcca)
- [2007] *Map-Based Precision Vehicle Localization in Urban Environments*. [ref](https://www.semanticscholar.org/paper/Map-Based-Precision-Vehicle-Localization-in-Urban-Levinson-Montemerlo/924ff97ad4e96f48ad774d982ef3)

## Perception

- [2016] *VisualBackProp: visualizing CNNs for autonomous driving*. [ref](https://arxiv.org/abs/)
- [2016] *Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?* [ref](https://arxiv.org/abs/)
- [2016] *Lost and Found: Detecting Small Road Hazards for Self-Driving Vehicles*.
[ref](https://arxiv.org/abs/)
- [2016] *Image segmentation of cross-country scenes captured in IR spectrum*. [ref](https://arxiv.org/abs/)
- [2016] *Traffic-Sign Detection and Classification in the Wild*. [ref](https://www.semanticscholar.org/paper/Traffic-Sign-Detection-and-Classification-in-the-Zhu-Liang/da82e3cad81db857aa75b)
- [2016] *Persistent self-supervised learning principle: from stereo to monocular vision for obstacle avoidance*. [ref](https://www.semanticscholar.org/paper/Persistent-self-supervised-learning-principle-from-Hecke-Croon/a48c4c6707fca20ae64b044b6e8f7ffc)
- [2016] *Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion*. [ref](https://www.semanticscholar.org/paper/Deep-Multispectral-Semantic-Scene-Understanding-of-Valada-Oliveira/8be99dd94bff76c6a7)
- [2016] *Joint Attention in Autonomous Driving (JAAD)*. [ref](https://www.semanticscholar.org/paper/Joint-Attention-in-Autonomous-Driving-JAAD--Kotseruba-Rasouli/1e6a26deea0ac2a6dadc317b50bdf8), [data](http://data.nvision2.eecs.yorku.ca/JAAD_dataset/)
- [2016] *Perception for driverless vehicles: design and implementation*. [ref](https://www.semanticscholar.org/paper/Perception-for-driverless-vehicles-design-and-Benenson-Suarez/bf1c728e3ef720b3f6)
- [2016] *Robust multimodal sequence-based loop closure detection via structured sparsity*. [ref](http://www.roboticsproceedings.org/rss12/p43.pdf)
- [2016] *SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition*. [ref](http://ieeexplore.ieee.org/document/7839213/), [code](https:///hanfeiid/SRAL)
- [2015] *Pixel-wise Segmentation of Street with Neural Networks*. [ref](https://arxiv.org/abs/)
- [2015] *Deep convolutional neural networks for pedestrian detection*. [ref](https://arxiv.org/abs/)
- [2015] *Fast Algorithms for Convolutional Neural Networks*.
[ref](https://arxiv.org/abs/)
- [2015] *Fusion of color images and LiDAR data for lane classification*. [ref](http://dl.acm.org/citation.cfm?id=2820859)
- [2015] *Environment Perception for Autonomous Vehicles in Challenging Conditions Using Stereo Vision*. [ref](https://www.semanticscholar.org/paper/Environment-Perception-for-Autonomous-Vehicles-in-Gal%C3%A1n-Hayet/8f56fd10fc441fdb5)
- [2015] *Intention-aware online POMDP planning for autonomous driving in a crowd*. [ref](https://www.semanticscholar.org/paper/Intention-aware-online-POMDP-planning-for-Bai-Cai/481aaa5bea7db755862cded42081)
- [2015] *Survey on Vanishing Point Detection Method for General Road Region Identification*. [ref](https://www.semanticscholar.org/paper/Survey-on-Vanishing-Point-Detection-Method-for-Patel-Mistry/39c6be1ebe2bbefdadbc9)
- [2015] *Visual road following using intrinsic images*. [ref](https://www.semanticscholar.org/paper/Visual-road-following-using-intrinsic-images-Krajn%C3%ADk-Blazicek/ccf78bfc80c505d100540f)
- [2014] *Rover – a Lego\* Self-driving Car*. [ref](https://www.semanticscholar.org/paper/Rover-a-Lego-Self-driving-Car-Tan-Wojtczyk-Wojtczyk/6e2ffbf8afee)
- [2014] *Classification and Tracking of Dynamic Objects with Multiple Sensors for Autonomous Driving in Urban Environments*. [ref](https://www.semanticscholar.org/paper/Classification-and-Tracking-of-Dynamic-Objects-Darms-Rybski/6c9ce40060fa3efea7d04a4a0eddf)
- [2014] *Generating Omni-directional View of Neighboring Objects for Ensuring Safe Urban Driving*. [ref](https://www.semanticscholar.org/paper/Generating-Omni-directional-View-of-Neighboring-Seo/29e53add392de54d439aaf6e9baadeb)
- [2014] *Autonomous Visual Navigation and Laser-Based Moving Obstacle Avoidance*. [ref](https://www.semanticscholar.org/paper/Autonomous-Visual-Navigation-and-Laser-Based-Cherubini-Spindler/089fa5a7babc906dc46a58f986c5ac8c46aa9017)
- [2014] *Extending the Stixel World with online self-supervised color modeling for road-versus-obstacle segmentation*. [ref](https://www.semanticscholar.org/paper/Extending-the-Stixel-World-with-online-self-Sanberg-Dubbelman/6dd60ef49abff4967)
- [2014] *Modeling Human Plan Recognition Using Bayesian Theory of Mind*.
[ref](https://www.semanticscholar.org/paper/Plan-Activity-and-Intent-Recognition-Baker-Tenenbaum/4cbb1ea46c09d11b0b986a7baaacf8)
- [2013] *Focused Trajectory Planning for autonomous on-road driving*. [ref](https://www.semanticscholar.org/paper/Focused-Trajectory-Planning-for-autonomous-on-road-Gu-Snider/03bf26d72d8cc0cf401c31e31c242e)
- [2013] *Avoiding moving obstacles during visual navigation*. [ref](https://www.semanticscholar.org/paper/Avoiding-moving-obstacles-during-visual-navigation-Cherubini-Grechanichenko/7c0e580c0fc918aef1df44)
- [2013] *Mobile robot navigation system in outdoor pedestrian environment using vision-based road recognition*. [ref](https://www.semanticscholar.org/paper/Mobile-robot-navigation-system-in-outdoor-Siagian-Chang/c3d87cd50d1bedb25696)
- [2013] *Obstacle detection and mapping in low-cost, low-power multi-robot systems using an Inverted Particle Filter*. [ref](https://www.semanticscholar.org/paper/Obstacle-detection-and-mapping-in-low-cost-low-Kleppe-Skavhaug/646cc0e592b77d553cc20fb79a8e)
- [2013] *Real-time estimation of drivable image area based on monocular vision*. [ref](https://www.semanticscholar.org/paper/Real-time-estimation-of-drivable-image-area-based-Neto-Victorino/c50a769ce6b6f8d389806)
- [2013] *Road model prediction based unstructured road detection*. [ref](https://www.semanticscholar.org/paper/Road-model-prediction-based-unstructured-road-Zuo-Yao/b8b2d3daed2988216dbb3ddb6081ed)
- [2013] *Selective Combination of Visual and Thermal Imaging for Resilient Localization in Adverse Conditions: Day and Night, Smoke and Fire*. [ref](https://www.semanticscholar.org/paper/Selective-Combination-of-Visual-and-Thermal-Brunner-Peynot/85b4b1af84904a1cfc3eeeb605c9bd)
- [2012] *Road Tracking Method Suitable for Both Unstructured and Structured Roads*. [ref](https://www.semanticscholar.org/paper/International-Journal-of-Advanced-Robotic-Systems-Proch%C3%A1zka/4819fda4bca4b30db46ec56aa45bc)
- [2012] *Autonomous Navigation and Sign Detector Learning*. [ref](https://www.semanticscholar.org/paper/Autonomous-Navigation-and-Sign-Detector-Learning-Ellis-Pugeault/0cffeecdcdaf0d11b33e12cf3c67213e)
- [2012] *Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle*.
[ref](https://www.semanticscholar.org/paper/Design-of-a-Multi-Sensor-Cooperation-Travel-Chen-Li/f5feb2a151c54eca66c193ddd3c8b)
- [2012] *Learning in Reality: a Case Study of Stanley, the Robot That Won the DARPA Challenge*. [ref](https://www.semanticscholar.org/paper/Learning-in-Reality-a-Case-Study-of-Stanley-the-Glaser-Hennig/01c1f49f5e7f4e7f5d6)
- [2012] *Portable and Scalable Vision-Based Vehicular Instrumentation for the Analysis of Driver Intentionality*. [ref](https://www.semanticscholar.org/paper/Portable-and-Scalable-Vision-Based-Vehicular-Beauchemin-Bauer/c76b5bc64ffd6e13a6ca803e5209d5)
- [2012] *What could move? Finding cars, pedestrians and bicyclists in 3D laser data*. [ref](https://www.semanticscholar.org/paper/What-could-move-Finding-cars-pedestrians-and-Wang-Posner/f56b01df806bc224d5babba08cb44)
- [2012] *The Stixel World*. [ref](https://www.semanticscholar.org/paper/The-Stixel-World-N-Im/ff2f18ca5812965dcfb90)
- [2011] *Stereo-based road boundary tracking for mobile robot navigation*. [ref](https://www.semanticscholar.org/paper/Stereo-based-road-boundary-tracking-for-mobile-Chiku-Miura/8bcbb1f13f2ab7f974ba30a0d68aeccf)
- [2009] *Autonomous Information Fusion for Robust Obstacle Localization on a Humanoid Robot*. [ref](https://www.semanticscholar.org/paper/Autonomous-Information-Fusion-for-Robust-Obstacle-Sridharan-Li/e5cb801ba421c35ea639)
- [2009] *Learning long-range vision for autonomous off-road driving*. [ref](https://www.semanticscholar.org/paper/Learning-long-range-vision-for-autonomous-off-road-Hadsell-Sermanet/2d8f527d1a96b0dae209daa6a241cfd)
- [2009] *On-line road boundary modeling with multiple sensory features, flexible road model, and particle filter*. [ref](https://www.semanticscholar.org/paper/On-line-road-boundary-modeling-with-multiple-Matsushita-Miura/0fcac22dceb7a7d49a8cc500a804d9)
- [2008] *The Area Processing Unit of Caroline - Finding the Way through DARPA's Urban Challenge*. [ref](https://www.semanticscholar.org/paper/The-Area-Processing-Unit-of-Caroline-Finding-the-Berger-Lipski/4b9db808cce6c7bcd684)
- [2008] *Vehicle detection and tracking for the Urban Challenge*.
[ref](https://www.semanticscholar.org/paper/Vehicle-detection-and-tracking-for-the-Urban-Darms-Baker/757fbaaa9962819fda64d51307e1)
- [2007] *Low cost sensing for autonomous car driving in highways*. [ref](https://www.semanticscholar.org/paper/Low-cost-sensing-for-autonomous-car-driving-in-Gon%C3%A7alves-Godinho/b7f302bc8eb3de03128)
- [2007] *Stereo and Colour Vision Techniques for Autonomous Vehicle Guidance*. [ref](https://www.semanticscholar.org/paper/Stereo-and-Colour-Vision-Techniques-for-Autonomous-Mark-Proefschrift/51df5ef614a01a55f3da818aae0e)
- [2000] *Real-time multiple vehicle detection and tracking from a moving vehicle*. [ref](https://www.semanticscholar.org/paper/Real-time-multiple-vehicle-detection-and-tracking-Betke-Haritaoglu/864aecbc4ef6c4da66e4c8bcc83fe560)

## Navigation and Path Planning

- [2017] *Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car*. [ref](https://arxiv.org/abs/)
- [2016] *A Self-Driving Robot Using Deep Convolutional Neural Networks on Neuromorphic Hardware*. [ref](https://arxiv.org/abs/)
- [2016] *End to End Learning for Self-Driving Cars*. [ref](https://arxiv.org/abs/)
- [2016] *A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles*. [ref](https://arxiv.org/abs/)
- [2016] *A Convex Optimization Approach to Smooth Trajectories for Motion Planning with Car-Like Robots*. [ref](https://www.semanticscholar.org/paper/A-Convex-Optimization-Approach-to-Smooth-Zhu-Schmerling/785b22bbdb04f2dddd798ed3194374f)
- [2016] *Routing Autonomous Vehicles in Congested Transportation Networks: Structural Properties and Coordination Algorithms*. [ref](https://arxiv.org/abs/)
- [2016] *Machine Learning for Visual Navigation of Unmanned Ground Vehicles*. [ref](https://www.semanticscholar.org/paper/Machine-Learning-for-Visual-Navigation-of-Unmanned-Lenskiy-Lee/9b2ed3cd54a7e3a3c7c25b311e1ced)
- [2016] *Real-time self-driving car navigation and obstacle avoidance using mobile 3D laser scanner and GNSS*. [ref](https://www.semanticscholar.org/paper/Real-time-self-driving-car-navigation-and-obstacle-Li-Bao/4e8b5a99ae628eea43d7e7410cdfa7f8a2e847d5)
- [2016] *Watch this: Scalable cost-function learning for path planning in urban environments*. [ref](https://www.semanticscholar.org/paper/Watch-this-Scalable-cost-function-learning-for-Wulfmeier-Wang/d1e51c7e374dcae98bfb2)
- [2015] *DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving*. [ref](https://www.semanticscholar.org/paper/DeepDriving-Learning-Affordance-for-Direct-Chen-Seff/3babddd87516c0fab)
