Low-Cost Centimeter-Accurate Mobile Positioning
Ken Pesyna*†, Todd Humphreys*†, and Robert Heath*
*The University of Texas at Austin  †Radiosense, LLC
Motivation
“I predict that by the GPS World dinner in 2020, carrier-phase differential GNSS will be cheap and pervasive. We’ll have it on our cell phones and our tablets. There will be app families devoted to decimeter- and centimeter-level accuracy… This will be the commoditization of centimeter-level GNSS.” –Todd Humphreys, GPS World Dinner 2012
Strategy
We believe the two most critical factors for mainstream cm-accurate GNSS users will be time to fix and cost. This calls for network RTK or PPP-RTK with a dense reference network:
1. Compared to traditional PPP, network RTK and PPP-RTK have faster convergence times.
2. As the number of users grows, it makes sense to shift costs from the user devices to the network: if a 15-km-spaced reference network saves even $1 per user device, it makes economic sense.
The Primary Challenge: Awful Antennas
Antenna             | Axial Ratio    | Polarization | Loss in Gain vs. Survey-grade
--------------------|----------------|--------------|------------------------------
Survey-grade        | 1 dB @ 45°     | RHCP         | 0 dB
High-quality patch  | 2 dB @ 45°     | RHCP         | 0 – 0.5 dB
Low-quality patch   | 3 dB (average) | RHCP         | 0.6 dB
Smartphone-grade    | 10+ dB         | Linear       | 11 dB
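The table's axial-ratio column drives part of the gain loss through polarization mismatch. The sketch below evaluates the standard polarization-efficiency formula from antenna theory for two same-handed elliptically polarized antennas/waves; the function name and interface are my own, and this captures only the mismatch term, not the additional antenna gain/efficiency losses that also contribute to the 11 dB smartphone figure.

```python
import math

def pol_mismatch_loss_db(ar_rx_db, ar_tx_db=0.0, tau_deg=0.0):
    """Polarization mismatch loss (dB) between two same-handed elliptically
    polarized antennas/waves, via the standard polarization-efficiency formula.
    ar_*_db: axial ratio in dB (0 dB = circular, math.inf = linear);
    tau_deg: angle between the two polarization-ellipse major axes."""
    r1 = 10.0 ** (-ar_rx_db / 20.0)   # circularity ratio in [0, 1]
    r2 = 10.0 ** (-ar_tx_db / 20.0)
    plf = 0.5 + (4.0 * r1 * r2
                 + (1 - r1**2) * (1 - r2**2) * math.cos(2 * math.radians(tau_deg))
                 ) / (2.0 * (1 + r1**2) * (1 + r2**2))
    return -10.0 * math.log10(plf)

# Sanity checks against the table's trend:
# ideal RHCP vs. ideal RHCP -> 0 dB mismatch loss
# linear antenna vs. circular wave -> ~3 dB mismatch loss
```

A linear (smartphone-grade) antenna loses ~3 dB of the incoming RHCP signal to polarization mismatch alone; the rest of the table's 11 dB comes from the antenna's poor gain.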
Test Platform
Signal chain: Antenna → Filter → LNA → Front-end (smartphone GNSS chipset with reference clock) → Data storage → GRID SDR → RTK engine
GRID SDR outputs: phase/pseudorange measurements; complex (I,Q) accumulations
RTK filter outputs: cm-accurate position; phase residuals; theoretical integer resolution success bounds; empirical integer resolution success rates
[Figure: Gain compared to a geodetic-grade antenna (dB)]
December 2014: Successful RTK positioning solution with a smartphone
Handheld RTK result with some signals passing through the user’s body
GNSS “light painting” with a smartphone
Residuals Comparison
Phase residual standard deviations for the compared cases: 3.4 mm, 4.6 mm, 5.5 mm, 11.4 mm, and 8.6 mm
Time to ambiguity resolution for static antennas
Overcoming multipath with more signals
A Mitigation Suited for Smartphones: Multipath suppression via receiver motion (1 of 2)
[Figures: phase residuals and residual autocorrelation, with and without antenna motion]
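The whitening effect shown in the autocorrelation panels can be checked with a simple normalized autocorrelation of the phase-residual time series. This is a generic sketch, not the original analysis code; the function name and estimator form are my own.

```python
import numpy as np

def residual_autocorr(res, max_lag):
    """Normalized autocorrelation of a phase-residual time series.
    Static multipath appears as slowly decaying correlation over lag;
    wavelength-scale antenna motion should whiten the residuals, making
    the autocorrelation drop quickly toward zero after lag 0."""
    res = np.asarray(res, dtype=float)
    res = res - res.mean()
    var = (res ** 2).mean()
    return np.array([(res[:res.size - k] * res[k:]).mean() / var
                     for k in range(max_lag + 1)])
```

Applied to the residuals of a static antenna, this produces a slowly decaying curve; applied to residuals collected under antenna motion, it should fall off near lag 0.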
A Mitigation Suited for Smartphones: Multipath suppression via receiver motion (2 of 2)
Impact of Antenna Motion on Ambiguity Resolution
[Figures: ambiguity resolution without antenna motion vs. with antenna motion]
Impact of Motion Trajectory Knowledge on Ambiguity Resolution
[Figures: ambiguity resolution with antenna motion vs. antenna motion + trajectory aiding]
Summary so far: for low-cost antennas, time to ambiguity resolution (TAR) is reduced by
1. More satellites (more DD phase measurements)
2. Multipath decorrelation via wavelength-scale antenna motion
How can TAR be further reduced?
VISRTK: GNSS-Enabled Globally-Referenced Structure from Motion
Scene Reconstruction
Sparse Reconstruction
Dense Reconstruction
But without control points, the reconstruction has a scale, rotation, and translation ambiguity
Rotational, Translational, and Scale Ambiguity
Vision Reference Frame
We must resolve this ambiguity before our camera poses and point feature positions are globally referenced
Vision Reference Frame
Global Reference Frame
Resolving the Ambiguity: Method 1
Method 1: Horn Transformation (Similarity Transform)
Goal: compute the similarity transformation (scale, rotation, translation) of the vision frame that brings it into the global frame, minimizing the squared distance between each known control point in the global frame (red circle) and the associated vision-produced camera position.
UPSIDE: computationally efficient.
DOWNSIDE: no way to fix relative position/pose errors of the vision-only solution; vision-based relative errors persist.
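The Horn fit has a well-known closed form via SVD (the Umeyama formulation). Below is a minimal numpy sketch under the assumption of noise-free correspondences; the function and variable names are my own, not from the original work.

```python
import numpy as np

def horn_similarity(P_vis, P_glob):
    """Closed-form least-squares similarity transform (scale s, rotation R,
    translation t) mapping vision-frame points P_vis (N x 3) onto
    global-frame points P_glob, so that P_glob ~= s * R @ p + t."""
    mu_v = P_vis.mean(axis=0)
    mu_g = P_glob.mean(axis=0)
    Xv = P_vis - mu_v
    Xg = P_glob - mu_g
    # Cross-covariance between the two centered point sets
    Sigma = Xg.T @ Xv / len(P_vis)
    U, D, Vt = np.linalg.svd(Sigma)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0          # guard against a reflection solution
    R = U @ S @ Vt
    var_v = (Xv ** 2).sum() / len(P_vis)
    s = np.trace(np.diag(D) @ S) / var_v
    t = mu_g - s * R @ mu_v
    return s, R, t
```

In practice the control points here would be the CDGNSS-derived camera positions, and the recovered (s, R, t) is applied to every camera pose and point feature in the vision solution.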
Resolving the Ambiguity: Method 2
Method 2: Loosely-Coupled GNSS Position + Vision Measurements
Jointly fuse GNSS antenna position and vision measurements into the same cost function over the point feature positions, camera positions, and camera orientations. The Horn transform initializes the estimator, which then iterates to the optimal ML solution.
UPSIDE: achieves the optimal ML solution based on vision and GNSS position measurements.
DOWNSIDE: no way to recover from an incorrect CDGNSS carrier-phase ambiguity poisoning the position measurements.
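The loosely-coupled cost function can be sketched as follows; the notation here is my own, not the original slides':

```latex
J = \sum_{i,j} \big\| \mathbf{z}_{ij} - \pi\big(\mathbf{R}_i(\mathbf{x}_j - \mathbf{p}_i)\big) \big\|^2_{\Sigma_v^{-1}}
  + \sum_i \big\| \mathbf{z}_{p,i} - \big(\mathbf{p}_i + \mathbf{R}_i^{\top}\mathbf{b}\big) \big\|^2_{\Sigma_p^{-1}}
```

where the x_j are point feature positions, p_i and R_i are camera positions and orientations, π(·) is the camera projection (the vision measurement model), z_ij are the vision (feature) measurements of point j in image i, z_p,i are the CDGNSS position measurements, and b is an assumed camera-to-antenna lever arm.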
Resolving the Ambiguity: Method 3
Method 3: Tightly-Coupled GNSS Phase + Vision Measurements
Jointly fuse DD GNSS carrier-phase and vision measurements in the same nonlinear estimator over the point feature positions, camera positions, camera orientations, and DD integer ambiguities. The Horn transform initializes the estimator, which then iterates to the optimal ML solution.
UPSIDE: achieves the optimal ML solution based on vision and GNSS carrier-phase measurements; vision measurements can also aid CDGNSS ambiguity resolution and cycle-slip detection.
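For reference, the DD carrier-phase measurement model being fused takes the standard short-baseline form below. The notation is mine, and sign conventions for the line-of-sight vectors vary across texts:

```latex
\lambda\,\nabla\!\Delta\phi^{kl}_{ur}
  = \nabla\!\Delta\rho^{kl}(\mathbf{p}_u) + \lambda\,N^{kl}_{ur} + \nabla\!\Delta\epsilon^{kl}_{ur},
\qquad
\nabla\!\Delta\rho^{kl}(\mathbf{p}_u)
  \approx -\big(\mathbf{e}^k - \mathbf{e}^l\big)^{\!\top}\big(\mathbf{p}_u - \mathbf{p}_r\big)
```

where φ is the DD phase between satellites k, l and receivers u (rover) and r (reference), λ the carrier wavelength, e^k the unit line-of-sight vector to satellite k, N the DD integer ambiguity, and ε the DD phase noise. The tightly-coupled estimator carries the N^kl as explicit integer states alongside the camera and feature states.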
How do we perform?
Tightly-coupled estimator-based point position:
Antenna surveyed position: (-24.2766, -3.7213, 7.4477)
Difference (in meters): E: -0.0038508, N: -0.0009541, U: 0.0087948
Reverse the process: Can we use a pre-existing map to “jumpstart” our CDGNSS ambiguity resolution?
CDGNSS Jumpstart:
1. Take a photo of a pre-mapped area
2. Compute the camera’s position and orientation to cm- and sub-degree accuracy
3. Compute the GNSS antenna position from the camera position/orientation
4. Instantly resolve the CDGNSS ambiguities
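Step 3 above is a lever-arm transfer from the vision-derived camera pose to the antenna phase center. A minimal sketch, assuming a known camera-frame camera-to-antenna lever arm (the function name and frame conventions are my assumptions):

```python
import numpy as np

def antenna_from_camera(p_cam, R_cam_to_global, lever_arm_cam):
    """Place the GNSS antenna phase center from the vision-derived camera
    position p_cam and camera-to-global rotation R_cam_to_global, using the
    known camera-frame camera-to-antenna lever arm (e.g., a few cm on a
    smartphone). Errors in the lever arm map directly into the jumpstarted
    antenna position, so it must be known to well under a cycle."""
    return p_cam + R_cam_to_global @ lever_arm_cam
```

The resulting antenna position, if accurate to a small fraction of the ~19 cm L1 wavelength, constrains the DD ambiguity search enough to fix the integers immediately.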
radionavlab.ae.utexas.edu