Fundamentals Of Navigation And Inertial Sensors Pdf


File Name: fundamentals of navigation and inertial sensors .zip
Size: 23854Kb
Published: 14.05.2021


An inertial navigation system (INS) is a navigation device that uses a computer, motion sensors (accelerometers) and rotation sensors (gyroscopes) to continuously calculate by dead reckoning the position, the orientation, and the velocity (direction and speed of movement) of a moving object without the need for external references. INSs are used on mobile robots [2] [3] and on vehicles such as ships, aircraft, submarines, guided missiles, and spacecraft. Older INS systems generally used an inertial platform as their mounting point to the vehicle, and the terms are sometimes considered synonymous. Inertial navigation is a self-contained navigation technique in which measurements provided by accelerometers and gyroscopes are used to track the position and orientation of an object relative to a known starting point, orientation and velocity.

Inertial navigation system

An indoor navigation system based on a stereo camera and inertial sensors with points and lines is proposed to further improve the accuracy and robustness of navigation in complex indoor environments. The point and line features, which are quickly extracted by the ORB method and the line segment detector (LSD) method, are both employed in this system to improve its ability to adapt to complex environments. In addition, two different representations of lines are adopted to improve the efficiency of the system.

Besides the stereo camera, an inertial measurement unit (IMU) is also used in the system to further improve its accuracy and robustness. An estimator is designed to integrate the camera and IMU measurements in a tightly coupled approach. The experimental results show that the proposed navigation system performs better than a point-only VINS and a vision-only navigation system with points and lines.

Indoor navigation has been widely applied in the field of mobile robots. One major challenge for an indoor navigation system is the unavailability of the global positioning system (GPS) signal in indoor environments. Therefore, many other sensors have been applied in such systems, such as sonar [3], odometry [4], light detection and ranging (LiDAR) [2], cameras [5], and inertial measurement units (IMUs) [6]. With the recent development of vision-based techniques, cameras used as sensors make vision-based navigation systems more and more attractive [7].

The direct method and the feature-based method are the two main approaches to indoor navigation. While the direct method estimates motion by minimizing the photometric error [8], this paper deals with the feature-based method.

In a conventional feature-based visual navigation system, feature extraction methods are applied to extract point features from the images collected by the camera.

After matching these point features between two frames, the pose of the system can be estimated through ego-motion estimation methods. Note that only point features are used in the conventional feature-based visual navigation system. However, in some low-textured scenarios, exemplified by man-made or indoor environments, it is hard to extract enough key points even with a suitable point feature extraction method.
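As an illustration of this pipeline, the sketch below extracts and matches ORB point features between two frames and recovers the relative pose from the essential matrix. It is a minimal sketch using OpenCV; the image file names and the intrinsic matrix are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Placeholder inputs: two consecutive grayscale frames and assumed intrinsics.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[458.0, 0.0, 367.0],
              [0.0, 457.0, 248.0],
              [0.0, 0.0, 1.0]])

# Detect ORB keypoints and compute binary descriptors in both frames.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors with Hamming distance and cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Ego-motion (rotation R, unit-scale translation t) via the essential matrix.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```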

As a result, the performance of the conventional feature-based method will decrease, which implies that other complementary features need to be explored for further performance improvement.

In recent years, line features have received more and more attention, since lines or line segments provide significantly more information than points in encoding the structure of the surrounding environment [11].

They are usually abundant in human-made scenarios, which are characterized by regular structures rich in edges and linear shapes [12]. In addition, recent advances in line segment detection methods have made it possible to extract line features quickly and accurately.

Therefore, in some low-textured environments, line features can be used in the feature-based visual navigation system to improve the navigation accuracy [11, 13]. In this paper, we choose a stereo camera to acquire images of indoor environments; thereafter, point and line features are both extracted from these images.
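For line features, a minimal LSD sketch with OpenCV might look as follows. The file name is a placeholder, and the OpenCV build must include the LSD implementation (it was absent from some releases for licensing reasons):

```python
import cv2

img = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

# LSD runs directly on the grayscale image with no parameter tuning.
lsd = cv2.createLineSegmentDetector()
lines, widths, precisions, nfa = lsd.detect(img)  # lines: Nx1x4 (x1, y1, x2, y2)

# Overlay the detected segments for visual inspection.
vis = lsd.drawSegments(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), lines)
cv2.imwrite("corridor_lines.png", vis)
```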

In addition, the scale information of the features can be estimated by the stereo camera, owing to the small scale of indoor environments.

Although the camera can be used efficiently in a navigation system, it is not robust to the motion blur induced by rapid motion [14], and vision-based navigation systems often fail in rapid-motion scenarios. In order to make the system more robust, a common solution is to fuse the camera with inertial sensors, which leads to the visual-inertial navigation system (VINS) [6, 15].

However, inertial navigation suffers from accumulated error and relatively large measurement uncertainty at slow motion [16]. Visual and inertial measurements offer complementary properties which make them particularly suitable for fusion [17]: inertial sensors have relatively low uncertainty at high velocity, whereas cameras can track features very accurately at low velocity but less accurately as velocity increases [16].

Consequently, a VINS works better than a vision-only or a pure inertial navigation system. VINS approaches can be classified into two categories: filtering-based methods and optimization-based methods. Although the optimization approach is more accurate than the filtering approach, it is computationally expensive. Considering the extra computational cost of line feature extraction, we adopt the filtering approach, which usually uses the extended Kalman filter (EKF) to estimate the pose of the system.
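The generic EKF recursion that the filtering approach builds on can be sketched as follows. This is the textbook predict/update pair, not the paper's specific filter; the function and variable names are ours.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate the state x through the process model f; F is the
    Jacobian of f at x and Q is the process noise covariance."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with measurement z; h is the measurement
    model, H its Jacobian at x, and R the measurement noise covariance."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```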

In this paper, we present a visual-inertial navigation system with point and line features for indoor environments. The stereo camera is combined with an inertial sensor in a tightly coupled filtering approach to overcome the defects of a single sensor.

The point and line features are both used in the proposed system, so that it can perform well in low-textured environments such as indoor scenes. In addition, these two schemes also improve the robustness of the navigation system.

The performance of the system is also tested in an experiment on a visual-inertial benchmark dataset. The rest of the paper is organized as follows. Related work is briefly reviewed in Section 2. In Section 3, we present the IMU model based on the inertial measurements, together with the representations of the point and line features.

The estimator and the implementation of the algorithm are described in Section 4. Experiments are conducted in Section 5 to demonstrate the performance of the proposed system.

Finally, conclusions are drawn in Section 6. For the feature-based method, pioneering work was carried out in [18]. However, the systems mentioned above suffer reduced performance in low-textured environments with insufficient point features. Therefore, massive efforts have been devoted to line segment detection methods and their application in visual navigation and SLAM systems.

A linear-time line segment detector with high detection accuracy has been proposed in [24, 25]. In [13], a line-based monocular 2D SLAM system has been presented, which incorporates vertical and horizontal lines to estimate the motion with an EKF. In addition, another line-based system has been proposed in [11] to improve the efficiency of the system by adopting two different representations of a line segment.

However, line features only lend themselves to structured environments, which implies that the system performance is likely to degrade in complex environments when the line feature is employed alone.

Taking into account the complementarity between point and line features, a combination of both features has beneficial effects on the robustness and accuracy of the navigation system. Recently, visual navigation systems with points and line segments have been proposed in many works [26–28], which reveal superior performance in a wide variety of real-time scenarios.

However, the visual-only system has its own limitations, and the VINS was developed to overcome them. The filtering-based VINS is particularly relevant to our work. A vision-aided inertial navigation system based on a multistate constraint Kalman filter has been proposed in [6, 15], presenting a novel system model that does not contain the feature positions in the state vector.

Therefore, the computational overhead of the method is low. However, this system relies on a large storage capacity, which makes it unsuitable for some applications. A follow-up work proposes a new method which improves the consistency of the system based on an analysis of the observability of the VINS.

An autonomous navigation system combining a stereo camera and an IMU has been proposed in [30], which improves the positioning accuracy by considering both near and far point features.

However, all the methods mentioned above use point features only. Line-based VINSs have been proposed in [31, 32], which use straight lines as features.

These systems exhibit superior reconstruction performance compared with point-based navigation systems in line-rich environments.

In our work, however, we mainly focus on a VINS combining both point and line features. In this section, we describe the IMU model and the camera model, which involve the point and line features. To begin with, we define the reference coordinate frames used in the rest of the paper.

The pose information of the system is characterized with respect to the navigation coordinate frame. The navigation coordinate frame is fixed to the first frame of the system, such that the effect of the earth's rotation can be ignored. The body coordinate frame and the camera coordinate frame have their origins at the center of the IMU and at the optical center of the camera, respectively. The z-axis of the camera coordinate frame aligns with the optical axis. There are two camera coordinate frames, denoted the left camera coordinate frame and the right camera coordinate frame.

The angular velocity and the specific force of the system in the body coordinate frame can be measured by the IMU. These measurements include the angular velocity and the acceleration information with noise [34]:

$$\tilde{\boldsymbol{\omega}} = \boldsymbol{\omega} + \mathbf{b}_g + \mathbf{n}_g, \qquad \tilde{\mathbf{a}} = \mathbf{C}_n^b (\mathbf{a} - \mathbf{g}) + \mathbf{b}_a + \mathbf{n}_a$$

where $\boldsymbol{\omega}$ denotes the true angular velocity in the body coordinate frame, $\mathbf{a}$ denotes the true acceleration in the navigation coordinate frame, $\mathbf{b}_g$ and $\mathbf{b}_a$ are the constant drifts of the gyroscopes and the accelerometers in the body coordinate frame, respectively, $\mathbf{n}_g$ and $\mathbf{n}_a$ are the random noises of the gyroscopes and the accelerometers in the body coordinate frame, which can be modeled as uncorrelated zero-mean white Gaussian noise, $\mathbf{C}_n^b$ denotes the rotation matrix from the navigation coordinate frame to the body coordinate frame, and $\mathbf{g}$ is the gravity vector.
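A small simulation of this measurement model is sketched below. The noise levels, gravity convention, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([0.0, 0.0, -9.81])  # assumed navigation-frame gravity vector

def imu_measurement(omega_true, a_true_nav, C_nb, b_g, b_a,
                    sigma_g=1e-3, sigma_a=1e-2):
    """Corrupt the true body-frame angular rate and navigation-frame
    acceleration with constant drifts and zero-mean white Gaussian noise."""
    omega_m = omega_true + b_g + sigma_g * rng.standard_normal(3)
    # The accelerometer senses specific force: (a - g) rotated into the body frame.
    a_m = C_nb @ (a_true_nav - g) + b_a + sigma_a * rng.standard_normal(3)
    return omega_m, a_m
```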

The estimates of the attitude, described in the form of a unit quaternion $\hat{\mathbf{q}}$, the velocity $\hat{\mathbf{v}}$, and the position $\hat{\mathbf{p}}$ can be updated by the IMU measurements as follows [34]:

$$\dot{\hat{\mathbf{q}}} = \frac{1}{2}\, \hat{\mathbf{q}} \otimes \begin{bmatrix} 0 \\ \tilde{\boldsymbol{\omega}} - \hat{\mathbf{b}}_g \end{bmatrix}, \qquad \dot{\hat{\mathbf{v}}} = \hat{\mathbf{C}}_b^n (\tilde{\mathbf{a}} - \hat{\mathbf{b}}_a) + \mathbf{g}, \qquad \dot{\hat{\mathbf{p}}} = \hat{\mathbf{v}} \tag{3}$$

where $\hat{\mathbf{C}}_b^n$ denotes the estimated rotation matrix from the body coordinate frame to the navigation coordinate frame, $\otimes$ denotes quaternion multiplication, and $\hat{\mathbf{b}}_g$ and $\hat{\mathbf{b}}_a$ are the estimates of the constant drifts of the gyroscopes and the accelerometers, which can be obtained by the calibration method [35].

In addition, the unit quaternion $\hat{\mathbf{q}}$, the attitude angles, and the rotation matrices $\hat{\mathbf{C}}_b^n$ and $\hat{\mathbf{C}}_n^b$ are interchangeable [34]. In addition, $\mathbf{b}_g$ and $\mathbf{b}_a$ can be modeled as follows:

$$\dot{\mathbf{b}}_g = \mathbf{n}_{wg}, \qquad \dot{\mathbf{b}}_a = \mathbf{n}_{wa}.$$

According to the analysis above, the IMU state vector can be described as follows:

$$\mathbf{x}_{\mathrm{IMU}} = \begin{bmatrix} \hat{\mathbf{q}}^T & \hat{\mathbf{b}}_g^T & \hat{\mathbf{v}}^T & \hat{\mathbf{b}}_a^T & \hat{\mathbf{p}}^T \end{bmatrix}^T.$$

In practice, the angular velocity and the specific force are sampled by the IMU at discrete times, so (3) should be calculated in a discrete manner. Therefore, we assume that $\tilde{\boldsymbol{\omega}}$ and $\tilde{\mathbf{a}}$ are constant between two sampling instants; thereafter, (3) can be discretized in a straightforward way.
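A discrete propagation step consistent with this assumption is sketched below, using SciPy's rotation utilities. The scalar-last quaternion convention and the simple first-order integration are our choices, not the paper's exact discretization.

```python
import numpy as np
from scipy.spatial.transform import Rotation

g = np.array([0.0, 0.0, -9.81])  # assumed navigation-frame gravity vector

def propagate(q, v, p, omega_m, a_m, b_g_hat, b_a_hat, dt):
    """One discrete step of the mechanization in (3): the measurements
    are treated as constant over the sampling interval dt. q is scalar-last."""
    w = omega_m - b_g_hat                   # bias-corrected angular rate
    a_b = a_m - b_a_hat                     # bias-corrected specific force
    C_bn = Rotation.from_quat(q)            # body-to-navigation rotation
    a_n = C_bn.apply(a_b) + g               # acceleration in navigation frame
    q_new = (C_bn * Rotation.from_rotvec(w * dt)).as_quat()
    v_new = v + a_n * dt
    p_new = p + v * dt + 0.5 * a_n * dt**2
    return q_new, v_new, p_new
```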

The point and line features are extracted from the images collected by the stereo camera. In this subsection, we describe the representations of the point and line features. Owing to the small scale of indoor environments, the depth of most points in the images can be estimated by the stereo camera through the baseline between the left and right cameras, so a point feature can be coded in Cartesian coordinates as

$$\mathbf{p}^b = \begin{bmatrix} x & y & z \end{bmatrix}^T$$

where $\mathbf{p}^b$ denotes the position of the point feature in the body coordinate frame. It can be estimated from the pixel coordinates of the point feature as

$$\mathbf{p}^b = \frac{b}{d} \begin{bmatrix} u - u_0 \\ v - v_0 \\ f \end{bmatrix}$$

where $b$ is the baseline between the left camera and the right camera, $d$ is the disparity, $(u, v)$ are the pixel coordinates of the point feature in the left camera image, $(u_0, v_0)$ are the pixel coordinates of the optical center in the left camera image, and $f$ is the focal length.

In addition, $f$, $u_0$, and $v_0$ are the intrinsic parameters of the camera, which can be obtained by the camera calibration method [36]. The position of a point feature in the navigation coordinate frame is also important in the proposed system, and its estimate can be computed as

$$\hat{\mathbf{p}}^n = \hat{\mathbf{C}}_b^n\, \mathbf{p}^b + \hat{\mathbf{p}}$$

where $\hat{\mathbf{p}}$ is the estimated position of the system. This process is only executed when the point feature is detected for the first time.
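The stereo back-projection and the transfer into the navigation frame can be sketched as below; the function names are ours, and the math follows the formulas above.

```python
import numpy as np

def backproject(u, v, d, f, u0, v0, b):
    """3D position of a point feature in the body/camera frame from its
    left-image pixel coordinates (u, v) and disparity d."""
    z = f * b / d
    x = (u - u0) * b / d
    y = (v - v0) * b / d
    return np.array([x, y, z])

def point_to_navigation(p_b, C_bn_hat, p_hat):
    """Transfer the feature into the navigation frame using the current
    attitude estimate C_bn_hat and position estimate p_hat."""
    return C_bn_hat @ p_b + p_hat
```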

In this section, for the sake of simplicity, the rotation matrix between the IMU coordinate frame and the camera coordinate frame and the relative position between these two coordinate frames are ignored, but in practice they should be considered.

According to the analysis above, the point feature state vector can be described as follows:

$$\mathbf{x}_P = \begin{bmatrix} (\mathbf{p}_1^n)^T & (\mathbf{p}_2^n)^T & \cdots & (\mathbf{p}_N^n)^T \end{bmatrix}^T$$

where $\mathbf{p}_i^n$ denotes the position of the $i$th point feature in the navigation coordinate frame and $N$ is the number of point features; this number is variable because of the different images collected by the camera. Different from points, which are represented by three parameters in three-dimensional space, lines are parameterized with 4 DOFs. A 3D line can be described by the vector pair $(\mathbf{n}, \mathbf{v})$, where $\mathbf{n}$ is the normal vector of the plane determined by the line and the origin and $\mathbf{v}$ is the direction vector of the line; it can be found that only $\mathbf{n}$ is used in the projection process.

If this six-parameter representation is used in the estimator, the superfluous elements cost extra computation. In addition, the orthogonal constraint between $\mathbf{n}$ and $\mathbf{v}$, that is, $\mathbf{n}^T \mathbf{v} = 0$, will decrease the numerical stability, and thus we need another representation in the estimator. We define the matrix $[\mathbf{n}\ \ \mathbf{v}]$ and decompose it by QR decomposition as

$$[\mathbf{n}\ \ \mathbf{v}] = \mathbf{U} \begin{bmatrix} \|\mathbf{n}\| & 0 \\ 0 & \|\mathbf{v}\| \\ 0 & 0 \end{bmatrix}$$

and set

$$\mathbf{W} = \frac{1}{\sqrt{\|\mathbf{n}\|^2 + \|\mathbf{v}\|^2}} \begin{bmatrix} \|\mathbf{n}\| & -\|\mathbf{v}\| \\ \|\mathbf{v}\| & \|\mathbf{n}\| \end{bmatrix}.$$

The 3D line can then be represented by $(\mathbf{U}, \mathbf{W})$, where $\mathbf{U} \in SO(3)$ and $\mathbf{W} \in SO(2)$. This means that $\mathbf{U}$ and $\mathbf{W}$ are three- and two-dimensional rotation matrices, respectively.

Thus, $\mathbf{U}$ can be updated by a vector containing 3 parameters (such as Euler angles) and $\mathbf{W}$ can be updated by a scalar parameter as follows:

$$\mathbf{U} \leftarrow \mathbf{U}\, \mathbf{R}(\delta\boldsymbol{\psi}), \qquad \mathbf{W} \leftarrow \mathbf{W}\, \mathbf{R}(\delta\theta)$$

where $\mathbf{R}(\delta\boldsymbol{\psi})$ and $\mathbf{R}(\delta\theta)$ denote the corresponding three- and two-dimensional rotation matrices. The detailed discussion will be provided in Section 4.
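The conversion from the six-parameter pair to the orthonormal representation $(\mathbf{U}, \mathbf{W})$ can be sketched as follows. Since $\mathbf{n}$ is orthogonal to $\mathbf{v}$, the columns can be orthonormalized directly instead of calling a QR routine; that shortcut and the function name are choices of this sketch, not the paper's.

```python
import numpy as np

def orthonormal_from_line(n, v):
    """Minimal (U, W) representation of a 3D line given its plane normal
    n and direction v, with n orthogonal to v."""
    nn, nv = np.linalg.norm(n), np.linalg.norm(v)
    u1, u2 = n / nn, v / nv
    u3 = np.cross(u1, u2)                  # unit vector, since u1 is orthogonal to u2
    U = np.column_stack([u1, u2, u3])      # in SO(3)
    s = np.hypot(nn, nv)
    W = np.array([[nn / s, -nv / s],
                  [nv / s,  nn / s]])      # in SO(2)
    return U, W
```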

In this section, an EKF estimator is presented. The IMU measurements, along with the point and line features described in Section 3, are combined by this estimator in a tightly coupled approach. In addition, we elaborate the feature detection methods, the feature initialization, and some other processing steps. In our work, the EKF estimator uses the error model defined as

$$\delta\mathbf{x} = \begin{bmatrix} \delta\mathbf{x}_{\mathrm{IMU}}^T & \delta\mathbf{x}_P^T & \delta\mathbf{x}_L^T \end{bmatrix}^T$$

where $\delta\mathbf{x}_{\mathrm{IMU}}$, $\delta\mathbf{x}_P$, and $\delta\mathbf{x}_L$ are the error-state vectors of the IMU, the point features, and the line features, respectively.

The IMU error-state vector can be defined as

$$\delta\mathbf{x}_{\mathrm{IMU}} = \begin{bmatrix} \delta\boldsymbol{\theta}^T & \delta\mathbf{b}_g^T & \delta\mathbf{v}^T & \delta\mathbf{b}_a^T & \delta\mathbf{p}^T \end{bmatrix}^T$$

where $\delta\boldsymbol{\theta}$ is the attitude error and $\delta\mathbf{v}$, $\delta\mathbf{p}$, $\delta\mathbf{b}_g$, and $\delta\mathbf{b}_a$ are the velocity error, the position error, the error of the gyroscope constant drift, and the error of the accelerometer constant drift, respectively.
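Bookkeeping for the stacked error state might look like the sketch below. The block sizes follow the reconstruction above (15 IMU error states, 3 per point feature, 4 per line feature), and the helper names are ours.

```python
D_IMU = 15  # attitude(3) + gyro drift(3) + velocity(3) + accel drift(3) + position(3)

def error_state_dim(n_points, n_lines):
    """Dimension of [dx_IMU, dx_P, dx_L]: 3 error states per point
    feature and 4 per line feature (minimal line parameterization)."""
    return D_IMU + 3 * n_points + 4 * n_lines

def point_block(i):
    """Index slice of the i-th point feature inside the error state."""
    start = D_IMU + 3 * i
    return slice(start, start + 3)
```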

AERO4701: Space Engineering

NOTE: Short courses are available only upon request, at our location or at yours. To request a registration form or a course cost estimate, call.

Power Seminar in Navigation (1 day): This overview series of lectures introduces the art and science of navigation, spanning the classical techniques of celestial astronomy to future applications of advanced Intelligent Transportation Systems. Introductory lectures on the role of navigation in military systems, the history of navigation, and the future of navigation are provided. The status and fundamentals of radio, satellite, and inertial navigation are addressed. The history, advantages, and disadvantages of inertial navigation are addressed. A description and demonstration of systems presently being utilized or in development is provided.

On the one hand, the term INS is used as a blanket description for a wide variety of navigation sensors and systems of different design; on the other hand, it is also used to describe a specific version of these sensors and systems. The term has also changed over the years as the technology has improved. What can be said with confidence is that all these systems work on a similar principle and for the same purpose. Below is a list of commonly used terms that are used colloquially and interchangeably by pilots (if not by all designers, manufacturers and engineers) to mean very much the same thing, with differences in some of the detail. Where necessary, two or more definitions are provided. For the purposes of this article, the definition of INS (what it is and what it does) that is likely to be most commonly used in the aviation community is as follows:

Inertial sensors technologies for navigation applications: state of the art and future trends

Inertial navigation represents a unique method of navigation, in which there is no dependency on external sources of information. As opposed to other position-fixing navigation techniques, inertial navigation performs the navigation in a relative sense with respect to the initial navigation state of the moving platform.

Fundamentals of Inertial Navigation, Satellite-based Positioning and their Integration is an introduction to the field of integrated navigation systems. It serves as an excellent reference for working engineers as well as a textbook for beginners and students new to the area. The book is easy to read and understand with minimal background knowledge, and the authors explain the derivations in great detail.

Inertial navigation system

Author Contributions: The work presented in this paper was carried out in collaboration between all authors. Youssef Tawk conceived and designed the TCAPLL architecture, carried out the simulations and the field vehicle test measurements, and wrote the paper. The use of global navigation satellite system receivers for navigation still presents many challenges in urban canyon and indoor environments, where satellite availability is typically reduced and received signals are attenuated. To improve the navigation performance in such environments, several enhancement methods can be implemented.

This tutorial by Groves provides an introduction to navigation using inertial sensors, explaining the underlying principles.
