Method for establishing the UAV-rice vortex 3D model and extracting spatial parameters

Jiyu Li, Han Wu, Xiaodan Hu, Gangao Fan, Yifan Li, Bo Long, Xu Wei, Yubin Lan

Abstract

With the deepening of research on the rotor wind field of UAV operation, quantifying the operation effect and studying the distribution law of the rotor wind field via the spatial parameters of the UAV-rice interaction wind field vortex has become mainstream. At present, the point cloud segmentation algorithms used in most vortex spatial parameter extraction methods cannot adapt to the instantaneous changes and indistinct boundary of the vortex. As a result, the three-dimensional (3D) shape and boundary contour of the wind field vortex are inaccurate, and the vortex's spatial parameters carry large errors. To this end, this paper proposes an accurate method for establishing the UAV-rice interaction vortex 3D model and extracting the vortex's spatial parameters. Firstly, the original point cloud data of the wind field vortex were collected in the image acquisition area. Secondly, DDC-UL processed the original point cloud data to develop the 3D point cloud image of the wind field vortex. Thirdly, the 3D curved surface was reconstructed and the spatial parameters were extracted. Finally, the volume parameters and top surface area parameters of the UAV-rice interaction vortex were calculated and analyzed. The results show that the error rate of the 3D model of the UAV-rice interaction wind field vortex developed by the proposed method stays within 2%, at least 13 percentage points lower than that of algorithms such as PointNet. The average error rates of the volume parameters and the top surface area parameters extracted by the proposed method are 1.4% and 4.12%, respectively. Through the 3D vortex model and its spatial parameters, this method provides 3D data for studying the mechanism of the rotor wind field in the crop canopy.
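The abstract's final step, extracting volume and top surface area from the reconstructed vortex, can be illustrated with a minimal sketch. The DDC-UL network and the paper's surface reconstruction are not reproduced here; the function below is an assumption for illustration, using a SciPy convex hull as a crude stand-in for the reconstructed curved surface (a real vortex boundary is concave, so alpha shapes or Poisson reconstruction would be needed to approach the reported accuracy).

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def vortex_spatial_parameters(points):
    """Return (volume, surface_area) estimates for a segmented
    vortex point cloud given as an (N, 3) array.

    Hypothetical helper: a convex hull stands in for the paper's
    3D curved-surface reconstruction, which is not public.
    """
    hull = ConvexHull(points)
    return hull.volume, hull.area  # in 3D, .area is surface area

# Sanity check on a known shape: random points inside a unit cube
# plus its 8 corners, so the hull is exactly the cube.
rng = np.random.default_rng(0)
corners = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
pts = np.vstack([rng.random((500, 3)), corners])
vol, area = vortex_spatial_parameters(pts)
# vol ≈ 1.0 (cube volume), area ≈ 6.0 (cube surface area)
```

An error rate like those reported in the abstract would then follow as |estimate − reference| / reference for each parameter against ground-truth measurements.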

Keywords: UAV, rice canopy, wind field, binocular vision, point cloud segmentation, spatial parameters

DOI: 10.33440/j.ijpaa.20200302.84

 

Citation: Li J Y, Wu H, Hu X D, Fan G A, Li Y F, Long B, Wei X, Lan Y B. Method for establishing the UAV-rice vortex 3D model and extracting spatial parameters. Int J Precis Agric Aviat, 2020; 3(2): 56–64.


References


Lan Y B, Chen S D, Fritz B K. Current status and future trends of precision agricultural aviation technologies. International Journal of Agricultural & Biological Engineering, 2017; 10(3): 1–17. doi: 10.3965/j.ijabe.20171003.3088.

Zhou Z Y, Zang Y, Luo X W, et al. Technology innovation development strategy on agricultural aviation industry for plant protection in China. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2013; 29(24): 1–10. (in Chinese)

He X K, Bonds J, Herbst A, et al. Recent development of unmanned aerial vehicle for plant protection in East Asia. Int J Agric Biol Eng, 2017; 10: 18–30. doi: 10.19518/j.cnki.cn11-2531/s.2017.0130.

Mogili UR, Deepak BBVL. Review on application of drone systems in precision agriculture. Procedia Computer Science, 2018; 133: 502–509. doi: 10.1016/j.procs.2018.07.063.

Li J Y, Zhou Z Y, Lan Y B, et al. Distribution of canopy wind field produced by rotor unmanned aerial vehicle pollination operation. Transactions of the Chinese Society of Agricultural Engineering, 2015; 31(3): 77–86. (in Chinese)

Chen S D, Lan Y B, Bradley K F, et al. Effect of wind field below rotor on distribution of aerial spraying droplet deposition by using multi-rotor UAV. Transactions of the Chinese Society for Agricultural Machinery, 2017; 48(8): 105–113.

Liu J T, Wu W H, Li J, et al. Trajectory controller design for quadrotor UAVs on wind field disturbance. Flight Dynamics, 2016; 34(2): 47–50, 54. doi: 10.13645/j.cnki.f.d.20160110.015.

Jiang K, Wang Z H, Fan J R, et al. Droplet deposition rules of multi-rotor UAV flight loads impact. Journal of Agricultural Mechanization Research, 2020; 42(5): 25–32. doi: 10.13427/j.cnki.njyi.2020.05.004.

Zheng Y J, Yang S H, Liu X X, et al. The computational fluid dynamic modeling of downwash flow field for a six-rotor UAV. Frontiers of Agricultural Science and Engineering, 2018(02).

Li J Y, Zhou Z Y, Lan Y B, et al. Distribution of canopy wind field produced by rotor unmanned aerial vehicle pollination operation. Transactions of the Chinese Society of Agricultural Engineering, 2015; 31(3): 77–86. (in Chinese)

Li J Y, Shi Y Y, Lan Y B, et al. Vertical distribution and vortex structure of rotor wind field under the influence of rice canopy. Computers and Electronics in Agriculture, 2019; 159: 140–146. doi: 10.1016/j.compag.2019.02.027.

Qin N, Hu X, Dai H. Deep fusion of multi-view and multimodal representation of ALS point cloud for 3D terrain scene recognition. ISPRS Journal of Photogrammetry and Remote Sensing, 2018; 143: 205–212. doi: 10.1016/j.isprsjprs.2018.03.011.

Boulch A, Guerry J, Le Saux B, et al. SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks. Computers and Graphics, 2018; 71: 189–198. doi: 10.1016/j.cag.2017.11.010.

Zhang Z, Cui Z, Xu C, et al. Joint task-recursive learning for semantic segmentation and depth estimation. Proceedings of European Conference on Computer Vision, 2018. doi: 10.1007/978-3-030-01249-6_15.

Wu B, Wan A, Yue X, et al. SqueezeSeg: Convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D lidar point cloud. Proceedings of IEEE International Conference on Robotics and Automation, 2018: 1887–1893.

Maturana D, Scherer S. VoxNet: A 3D convolutional neural network for real-time object recognition. Proceedings of IEEE International Conference on Intelligent Robots and Systems, 2015. doi: 10.1109/IROS.2015.7353481.

Chang A X, Funkhouser T, Guibas L, et al. ShapeNet: An information-rich 3D model repository. arXiv:1512.03012, 2015.

Huang J, You S. Point cloud labeling using 3D convolutional neural network. Proceedings of International Conference on Pattern Recognition, 2016. doi: 10.1109/ICPR.2016.7900038.

Tchapmi L, Choy C, Armeni I, et al. SEGCloud: Semantic segmentation of 3D point clouds. Proceedings of International Conference on 3D Vision, 2017: 537–547.

Charles R Q, Hao S, Mo K, et al. PointNet: Deep learning on point sets for 3D classification and segmentation. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2017.

Qi C R, Yi L, Su H, et al. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 2017: 5099–5108.

Riegler G, Ulusoy A O, Geiger A. OctNet: Learning deep 3D representations at high resolutions. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2017.

Klokov R, Lempitsky V. Escape from cells: Deep KD-networks for the recognition of 3D point cloud models. Proceedings of IEEE International Conference on Computer Vision, 2017.

Su H, Jampani V, Sun D, et al. SPLATNet: Sparse lattice networks for point cloud processing. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2018.




This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.