Real-Time Fusion of Image and Inertial Sensors for Navigation

Author: J. Fletcher
Publisher:
ISBN:
Category:
Languages: en
Pages: 13

Book Description
As evidenced by many biological systems, the fusion of optical and inertial sensors represents an attractive method for passive navigation. In our previous work, a rigorous theory for optical and inertial fusion was developed for precision navigation applications. The theory was based on a statistical transformation of the feature space driven by inertial sensor measurements. The transformation effectively constrained the feature correspondence search to a given level of a priori statistical uncertainty. When integrated into a navigation system, the fused system demonstrated performance in indoor environments comparable to that of GPS-aided systems. To improve feature tracking performance, a robust feature transformation algorithm, Lowe's SIFT, was chosen. SIFT features are ideal for navigation applications in that they are invariant to scale, rotation, and illumination. Unfortunately, there is a correlation between feature complexity and processing time, which limits the effectiveness of robust feature extraction algorithms for real-time applications on traditional microprocessor architectures. While recent advances in computer technology have made image processing more commonplace, the amount of information that can be processed is still limited by the power and speed of the CPU. In this paper, a new theory is developed that exploits the highly parallel nature of general-purpose graphics processing units (GPGPUs) to support deeply integrated optical and inertial sensors for real-time navigation. Recent advances in GPGPU technology have made real-time, image-aided navigation a reality. Our approach leverages the existing OpenVIDIA core GPGPU library and commercially available computer hardware to solve the image and inertial fusion problem. The open-source libraries are extended to include the statistical feature...
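The core idea, using inertial measurements to statistically constrain the SIFT correspondence search, can be made concrete with a minimal Python sketch. This is an illustration under assumptions, not the paper's implementation: the predicted locations pts_pred and covariances covs are assumed to come from propagating the inertial state into the next image, and the 9.21 gate is the 99% chi-square threshold for two degrees of freedom.

import numpy as np

def gated_matches(desc_a, pts_pred, covs, desc_b, pts_b, gate=9.21):
    """Match each feature of image A only against features of image B
    that fall inside its inertially predicted uncertainty ellipse."""
    matches = []
    for i, (d, mu, P) in enumerate(zip(desc_a, pts_pred, covs)):
        diff = pts_b - mu                          # offsets from predicted location
        m2 = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(P), diff)  # Mahalanobis^2
        cand = np.where(m2 < gate)[0]              # statistically feasible candidates
        if cand.size:                              # nearest SIFT descriptor wins
            best = cand[np.argmin(np.linalg.norm(desc_b[cand] - d, axis=1))]
            matches.append((i, best))
    return matches

Restricting the search this way also maps well onto a GPU: each feature's gate test and descriptor comparison is independent of the others and can run in parallel.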

Fusion of Imaging and Inertial Sensors for Navigation

Author: Michael J. Veth
Publisher:
ISBN: 9780542834059
Category: Artificial satellites in navigation
Languages: en
Pages: 191

Book Description
The introduction of the Global Positioning System changed the way the United States Air Force fights by delivering worldwide precision navigation capability to even the smallest platforms. Unfortunately, the Global Positioning System signal is not available in all combat environments (e.g., under tree cover, indoors, or underground), so operations in these environments are limited to non-precision tactics. The motivation of this research is to address the limitations of current precision navigation methods by fusing imaging and inertial systems, an approach inspired by the navigation capabilities of animals. The research begins by rigorously describing the imaging and navigation problem and developing practical models of the sensors, then presents a transformation technique to detect features within an image. Given a set of features, a rigorous statistical feature projection technique is developed which utilizes inertial measurements to predict vectors in the feature space between images. This deep coupling of the imaging and inertial sensors is then used to aid the statistical feature matching function. The feature matches and inertial measurements are then used to estimate the navigation trajectory online using an extended Kalman filter. After proper calibration, the image-aided inertial navigation algorithm is tested using a combination of simulation and ground tests with both tactical- and consumer-grade inertial sensors. While limitations of the extended Kalman filter are identified, the experimental results demonstrate a navigation performance improvement of at least two orders of magnitude over the respective inertial-only solutions.
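The online trajectory estimation is an extended Kalman filter driven by the matched features. As a rough sketch of that step (a generic EKF measurement update, not necessarily Veth's exact formulation), where the predicted pixel h and its Jacobian H are assumed to come from the camera projection model:

import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update: prior state x and covariance P,
    pixel measurement z, predicted pixel h, Jacobian H, pixel noise R."""
    y = z - h                                  # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P       # Joseph form is safer numerically
    return x_new, P_new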

Fusion of Low-Cost Imaging and Inertial Sensors for Navigation

Author:
Publisher:
ISBN:
Category:
Languages: en
Pages: 12

Book Description
Aircraft navigation information (position, velocity, and attitude) can be determined using optical measurements from imaging sensors combined with an inertial navigation system. This can be accomplished by tracking the locations of optical features in multiple images and using the resulting geometry to estimate and remove inertial errors. A critical factor governing the performance of image-aided inertial navigation systems is the robustness of the feature tracking algorithm. Previous research has shown the strength of rigorously coupling the image and inertial sensors at the measurement level using a tactical-grade inertial sensor. While the tactical-grade inertial sensor is a reasonable choice for larger platforms, the greater physical size and cost of the sensor limit its use in smaller, low-cost platforms. In this paper, an image-aided inertial navigation algorithm is implemented using a multi-dimensional stochastic feature tracker. In contrast to previous research, the algorithms are specifically evaluated for operation using low-cost CMOS imagers and MEMS inertial sensors. The performance of the resulting image-aided inertial navigation system is evaluated using Monte Carlo simulation and experimental data and compared to the performance using more expensive inertial sensors.
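As a toy illustration of the Monte Carlo methodology used for such evaluations (with entirely made-up noise figures, not the paper's sensor models), the sketch below estimates inertial-only 1-D position drift for two accelerometer grades by double-integrating a random bias plus white noise:

import numpy as np

def rms_position_drift(bias_sigma, noise_density, dt=0.01, t_end=60.0, runs=200):
    """RMS final position error from double-integrated accelerometer error."""
    rng = np.random.default_rng(0)
    n = int(t_end / dt)
    finals = []
    for _ in range(runs):
        accel_err = rng.normal(0.0, bias_sigma) + rng.normal(0.0, noise_density / np.sqrt(dt), n)
        vel = np.cumsum(accel_err) * dt        # acceleration error -> velocity error
        pos = np.cumsum(vel) * dt              # velocity error -> position error
        finals.append(pos[-1])
    return float(np.sqrt(np.mean(np.square(finals))))

# illustrative MEMS-grade vs tactical-grade comparison (numbers are invented)
print(rms_position_drift(2e-2, 1e-3), rms_position_drift(5e-4, 5e-5))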

Continuous Models for Cameras and Inertial Sensors

Author: Hannes Ovrén
Publisher: Linköping University Electronic Press
ISBN: 917685244X
Category:
Languages: en
Pages: 67

Book Description
Using images to reconstruct the world in three dimensions is a classical computer vision task. Some examples of applications where this is useful are autonomous mapping and navigation, urban planning, and special effects in movies. One common approach to 3D reconstruction is "structure from motion", where a scene is imaged multiple times from different positions, e.g. by moving the camera. However, in a twist of irony, many structure from motion methods work best when the camera is stationary while the image is captured. This is because the motion of the camera can cause distortions in the image that lead to worse image measurements, and thus a worse reconstruction. One such distortion common to all cameras is motion blur; another is connected to the use of an electronic rolling shutter. Instead of capturing all pixels of the image at once, a camera with a rolling shutter captures the image row by row. If the camera is moving while the image is captured, the rolling shutter causes non-rigid distortions in the image that, unless handled, can severely impact the reconstruction quality.

This thesis studies methods to robustly perform 3D reconstruction with a moving camera. To do so, the proposed methods make use of an inertial measurement unit (IMU). The IMU measures the angular velocities and linear accelerations of the camera, and these can be used to estimate the trajectory of the camera over time. Knowledge of the camera motion can then be used to correct for the distortions caused by the rolling shutter. Another benefit of an IMU is that it can provide measurements even in situations where a camera cannot, e.g. because of excessive motion blur or an absence of scene structure.

To use a camera together with an IMU, the camera-IMU system must be jointly calibrated: the relationship between their respective coordinate frames needs to be established, and their timings need to be synchronized. This thesis shows how to perform this calibration and synchronization automatically, without requiring e.g. calibration objects or special motion patterns.

In standard structure from motion, the camera trajectory is modeled as discrete poses, with one pose per image. Switching instead to a formulation with a continuous-time camera trajectory provides a natural way to handle rolling shutter distortions, and also to incorporate inertial measurements. To model the continuous-time trajectory, many authors have used splines. The ability of a spline-based trajectory to model the real motion depends on the density of its spline knots; choosing too smooth a spline results in approximation errors. This thesis proposes a method to estimate the spline approximation error and to use it to better balance camera and IMU measurements in a sensor fusion framework. Also proposed is a way to automatically decide how dense the spline needs to be to achieve a good reconstruction.

Another approach to reconstructing a 3D scene is to use a camera that directly measures depth. Some depth cameras, like the well-known Microsoft Kinect, are susceptible to the same rolling shutter effects as normal cameras. This thesis quantifies the effect of the rolling shutter distortion on 3D reconstruction as a function of the amount of motion, and shows that a better 3D model is obtained if the depth images are corrected using inertial measurements.
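A minimal sketch of a gyro-based rolling shutter correction of the kind described above (a rotation-only model, far simpler than the thesis's continuous-time spline formulation): each image row gets its own timestamp, and the observation is rotated back to the frame-start pose. The intrinsics K and the rotation function R_since (e.g. integrated from IMU angular rates) are assumed given.

import numpy as np

def correct_rs_point(pt, K, frame_t0, num_rows, readout_time, R_since):
    """Map a rolling shutter pixel observation to the frame-start pose.
    pt: (col, row) pixel; R_since(t): camera rotation since frame_t0."""
    t_row = frame_t0 + (pt[1] / num_rows) * readout_time    # row exposure time
    ray = np.linalg.inv(K) @ np.array([pt[0], pt[1], 1.0])  # back-project pixel
    ray = R_since(t_row).T @ ray               # undo rotation accrued by t_row
    uvw = K @ ray                              # re-project into frame-start image
    return uvw[:2] / uvw[2]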

Intelligent Information Processing for Inertial-Based Navigation Systems

Author: Chong Shen
Publisher: Springer Nature
ISBN: 9813345160
Category: Technology & Engineering
Languages: en
Pages: 131

Book Description
This book introduces typical inertial devices and inertial-based integrated navigation systems, covering gyro noise suppression, modeling and compensation of gyro temperature drift, inertial-based integrated navigation under discontinuous observation conditions, and inertial-based brain integrated navigation systems. Integrated navigation is the result of the development of modern navigation theory and technology. Inertial navigation systems offer strong autonomy, high short-term accuracy, and all-day, all-weather operation, and are therefore used in most integrated navigation systems. Information processing is the core technology of inertial-based integrated navigation systems. Owing to the device mechanism and the working environment, the output of an inertial-based integrated navigation system contains errors, including gyroscope noise, temperature drift, and discontinuous observations, which seriously reduce the accuracy and robustness of the system; this book helps readers to solve these problems. The intelligent information processing techniques involved are accompanied by simulation verification, and the book can serve as a reference for undergraduate, graduate, and Ph.D. students, as well as for researchers and engineers working in navigation-related specialties.
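To make temperature drift compensation concrete, here is a minimal polynomial-model sketch in Python (the calibration numbers are invented, and the book's methods are more sophisticated): fit the static gyro bias against temperature once, then subtract the predicted bias online.

import numpy as np

# calibration table: static gyro bias (deg/s) recorded at several temperatures (deg C)
temps = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
biases = np.array([0.031, 0.024, 0.018, 0.015, 0.019, 0.027])
coeffs = np.polyfit(temps, biases, deg=2)      # quadratic bias-vs-temperature model

def compensate(gyro_raw, temp_now):
    """Remove the temperature-predicted bias from a raw gyro rate."""
    return gyro_raw - np.polyval(coeffs, temp_now)

print(compensate(0.215, 25.0))                 # rate with modeled drift removed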

Optimal Image-Aided Inertial Navigation

Author: Nilesh Sharma Gopaul
Publisher:
ISBN:
Category:
Languages: en
Pages: 0

Book Description
The utilization of cameras in integrated navigation systems is among the most recent directions in scientific research and high-tech industry development. The research is motivated by the need to calibrate off-the-shelf cameras and to fuse imaging and inertial sensors in poor GNSS environments. The three major contributions of this dissertation are as follows. First, the development of a structureless camera auto-calibration and system calibration algorithm for a GNSS, IMU, and stereo camera system. The auto-calibration bundle adjustment utilizes the scale restraint equation, which is free of object coordinates; the number of parameters to be estimated is significantly reduced in comparison with a self-calibrating bundle adjustment based on the collinearity equations, making the proposed method computationally more efficient. Second, the development of a loosely-coupled visual odometry aided inertial navigation algorithm. The fusion of the two sensors is usually performed using a Kalman filter, but the pose changes are pairwise time-correlated, i.e. the measurement noise vector at the current epoch is correlated only with the one from the previous epoch. Time-correlated errors are usually modelled by a shaping filter; the shaping filter developed in this dissertation uses, as coefficients, Cholesky factors derived from the variance-covariance matrices of the measurement noise vectors. Test results showed that the proposed algorithm performs better than existing ones and provides more realistic covariance estimates. Third, the development of a tightly-coupled stereo multi-frame aided inertial navigation algorithm for reducing position and orientation drifts. Image aiding based on visual odometry usually uses features tracked only across a pair of consecutive image frames; the proposed method integrates features tracked across multiple overlapping image frames. The measurement equation is derived from the SLAM measurement equation system, in which the landmark positions are eliminated algebraically by time-differencing. The derived measurements are, however, time-correlated; through a sequential de-correlation, the Kalman filter measurement update can be performed sequentially and optimally. The main advantages of the proposed algorithm are reduced computational requirements compared to SLAM and seamless integration into an existing GNSS-aided IMU system.
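The de-correlation idea behind the second and third contributions can be illustrated with a small sketch (generic Cholesky whitening, not the dissertation's exact shaping filter): factor the measurement noise covariance and transform the measurement so its components become uncorrelated, after which the Kalman update can be applied one scalar row at a time.

import numpy as np

def whiten_measurement(z, H, R):
    """Transform z = H x + v with Cov(v) = R into an equivalent measurement
    with identity noise covariance, suitable for sequential scalar updates."""
    L = np.linalg.cholesky(R)                  # R = L @ L.T
    z_w = np.linalg.solve(L, z)                # whitened measurement
    H_w = np.linalg.solve(L, H)                # transformed design matrix
    return z_w, H_w                            # new noise covariance is identity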

Towards Improved Inertial Navigation by Reducing Errors Using Deep Learning Methodology

Author: Hua Chen
Publisher:
ISBN:
Category:
Languages: en
Pages: 0

Book Description
Autonomous vehicles make use of an Inertial Navigation System (INS) as part of vehicular sensor fusion in many situations, including Global Navigation Satellite System (GNSS)-denied environments such as dense urban areas, multi-level parking structures, and areas with thick tree coverage. The INS incorporates an Inertial Measurement Unit (IMU) and processes its linear acceleration and angular velocity data to obtain orientation, position, and velocity information using mechanization equations. In this work, we developed a novel deep learning-based methodology using Convolutional Neural Networks (CNNs) to reduce errors from MEMS IMU sensors. The CNN learns the response of a particular inertial sensor subject to its inherent noise errors and provides a near real-time error correction. We implemented a time-division method that divides the IMU output data into small fixed-size windows, so that the IMU outputs fit the input format of the CNN. We optimized the CNN for higher performance and lower complexity, allowing its implementation on ultra-low-power hardware such as microcontrollers. We examined the performance of our CNN algorithm under various conditions: with IMUs of different performance grades, with IMUs of the same type but from different manufacturing batches, and on controlled, fixed, and uncontrolled vehicle motion paths.
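As a rough illustration of the time-division CNN idea (a hypothetical architecture, not the paper's network), the Python sketch below maps a fixed-size window of 6-axis IMU samples to a per-axis error estimate that is subtracted from the raw window:

import torch
import torch.nn as nn

class ImuErrorNet(nn.Module):
    """Small 1-D CNN: one window of raw IMU data in, per-axis error out."""
    def __init__(self, channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, channels),           # predicted error per axis
        )

    def forward(self, x):                      # x: (batch, 6, window)
        return self.net(x)

model = ImuErrorNet()
window = torch.randn(1, 6, 100)                # one time-division window
corrected = window - model(window).unsqueeze(-1)  # subtract estimated error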

Proceedings of 3rd 2023 International Conference on Autonomous Unmanned Systems (3rd ICAUS 2023)

Author: Yi Qu
Publisher: Springer Nature
ISBN: 981971107X
Category:
Languages: en
Pages: 679

Book Description


Technological Innovation for Collective Awareness Systems

Author: Luis M. Camarinha-Matos
Publisher: Springer
ISBN: 3642547346
Category: Computers
Languages: en
Pages: 614

Book Description
This book constitutes the refereed proceedings of the 5th IFIP WG 5.5/SOCOLNET Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2014, held in Costa de Caparica, Portugal, in April 2014. The 68 revised full papers were carefully reviewed and selected from numerous submissions. They cover a wide spectrum of topics ranging from collaborative enterprise networks to microelectronics. The papers are organized in the following topical sections: collaborative networks; computational systems; self-organizing manufacturing systems; monitoring and supervision systems; advances in manufacturing; human-computer interfaces; robotics and mechatronics; Petri nets; multi-energy systems; monitoring and control in energy; modelling and simulation in energy; optimization issues in energy; operation issues in energy; power conversion; telecommunications; electronics: design; electronics: RF applications; and electronics: devices.

Artificial Intelligence in Real-Time Control 1989

Author: Hua-Tian Li
Publisher: Elsevier
ISBN: 1483298337
Category: Technology & Engineering
Languages: en
Pages: 127

Book Description
Papers presented at the workshop are representative of the state-of-the art of artificial intelligence in real-time control. The issues covered included the use of AI methods in the design, implementation, testing, maintenance and operation of real-time control systems. While the focus was on the fundamental aspects of the methodologies and technologies, there were some applications papers which helped to put emerging theories into perspective. The four main subjects were architectural issues; knowledge - acquisition and learning; techniques; and scheduling, monitoring and management.