Repository Summary
Description | Localize the car in a static map with a particle filter. |
Checkout URI | https://github.com/mit-rss/localization.git |
VCS Type | git |
VCS Version | master |
Last Updated | 2025-04-02 |
Dev Status | UNKNOWN |
CI status | No Continuous Integration |
Released | UNRELEASED |
Tags | No category tags. |
Contributing | Help Wanted (0), Good First Issues (0), Pull Requests to Review (0) |
Packages
Name | Version |
---|---|
localization | 0.0.0 |
README
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "kzoIJTE67uqm"
},
"source": [
"# Lab 5 - Localization\n",
"\n",
"(*Open this notebook in Chrome or Firefox to avoid incorrect equation rendering in Safari*)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "lfQw3IPw7uqx"
},
"source": [
"## Part A - Writing Assignment (INDIVIDUAL)\n",
"\n",
"<a id='question1'></a>\n",
"\n",
"**Question 1**. (**Motion Model**) Consider a deterministic motion model (no added noise) based on odometry information. The motion model, $f$, takes as arguments the old particle pose, $\\mathbf{x}_{k-1}$, as well as the current odometry data, $\\Delta\\mathbf{x}$, and returns a new pose, $\\mathbf{x}_k$ with the odometry “applied” to the old poses:\n",
"\n",
"$$\n",
"\\mathbf{x}_k = \n",
" \\begin{bmatrix}\n",
" x_k \\\\\n",
" y_k \\\\\n",
" \\theta_k \\\\\n",
" \\end{bmatrix}\n",
"= f(\\mathbf{x}_{k-1}, \\Delta x)\n",
"$$\n",
"\n",
"Note that the poses $\\mathbf{x}$ are expressed in the world frame relative to some arbitrary origin (typically the map frame). The odometry $\\Delta \\mathbf{x}$ is necessarily a local quantity, and typically represents the measured change of pose between time $k-1$ and time $k$. Note that some systems provide the odometry as an absolute pose at time $k$ (this is the result of motion integration, which we know is diverging over time), in which case you can use the \"absolute\" odometry at time $k-1$ and time $k$ to get $\\Delta\\mathbf{x}$.\n",
"\n",
"**i.** Suppose you are given $\\mathbf{x}_{k-1} = \\left[0, 0, \\frac{\\pi}{6}\\right]^T$ and $\\mathbf{x}_{k} = \\left[0.2, 0.1, \\frac{11\\pi}{60}\\right]^T$. Compute the odometry data $\\Delta \\mathbf{x}$ (expressed in the body frame at time $k-1$) that results in this transformation. *(Hint: review the geometry lectures on pose composition and inverse)*\n",
"\n",
"**ii.** Now suppose you received the odometry data $\\Delta\\mathbf{x}$ from part **i**, but your car was previously at position $\\mathbf{x}_{k-1} = \\left[3, 4, \\frac{\\pi}{3}\\right]^T$. Compute the current pose $\\mathbf{x}_k$.\n",
"\n",
"If you were to use this deterministic motion model in your particle filter all of your particles would end up in the same place - which defeats the purpose of having so many. When you build your actual motion model, you will be injecting noise into the function $f$ which will make your particles spread out as the car moves. This accounts for any uncertainty that exists in the odometry information. The sensor model will collapse this spreading of particles back into a small region."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "dZcN6Ssz7uqx"
},
"source": [
"<a id='question2'></a>\n",
"**Question 2**. (**Sensor Model**) The sensor model, $p(z_{k}| x_{k}, m)$, defines how likely it is to record a given sensor reading $z_{k}$ from a hypothesis position $x_{k}$ in a known, static map $m$ at time $k$. We will use this likelihood to \"prune\" the particles as the motion model tries to spread them out. Particles will be kept if the sensor reading is consistent with the map from the point of view of that particle. The definition of this likelihood is strongly dependent on the type of sensor used - a laser scanner in our case. \n",
"\n",
"As explained during lecture, the likelihood of a scan is computed as the product of the likelihoods of each of $n$ range measurements in the scan, where $p(z_{k}^{(i)} | x_{k}, m)$ is the likelihood of range measurement $i$:\n",
"\n",
"$$\n",
"p(z_{k}| x_{k}, m) = p(z_{k}^{(1)}, ... , z_{k}^{(n)} | x_{k}, m) = \\prod_{i=1}^{n}{p(z_{k}^{(i)} | x_{k}, m)}\n",
"$$\n",
"\n",
"For a range measurement, there are typically a few cases to be modeled in determining $p(z_{k}^{(i)}| x_{k}, m)$:\n",
"\n",
"1. Probability of detecting a known obstacle in the map\n",
"2. Probability of a short measurement. Maybe due to internal lidar reflections (scratches or oils on the surface), hitting parts of the vehicle itself, or other unknown obstacles (people, cats, etc).\n",
"3. Probability of a very large (aka missed) measurement. Usually due to lidar beams that hit an object with strange reflective properties and did not bounce back to the sensor\n",
"4. Probability of a completely random measurement. Just in case of an asteroid strike.\n",
" \n",
"We typically represent (1) with a Gaussian distribution centered around the ground truth distance between the hypothesis pose and the nearest map obstacle. Thus, if the measured range exactly matches the expected range, the probability is maximum. If the measured range is $z_{k}^{(i)}$ and the ground truth range is determined (via ray casting on the map $m$ from pose $x_k$) to be $d$ then we have that:\n",
"\n",
"$$\n",
"\tp_{hit}(z_{k}^{(i)}| x_{k}, m) = \\begin{cases}\n",
"\t\\eta \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp\\left(-\\frac{(z_k^{(i)} - d)^2}{2\\sigma^2}\\right) & \\text{if} \\quad 0 \\leq z_{k} \\leq z_{max}\\\\\n",
" 0 & \\text{otherwise}\n",
" \\end{cases}\n",
"$$\n",
"where $\\eta$ is a normalization constant such that the Gaussian integrates to $1$ on the interval $[0, z_{max}]$. For this problem, set $\\eta = 1$.\n",
"\n",
"Case (2) is represented as a downward sloping line as the ray gets further from the robot.\n",
"This is because if the unknown obstacles (people cats, etc.) are distributed uniformly in the environment, the lidar is more likely to hit ones that are closer (think about how area scales...). This likelihood can be modeled as:\n",
"\n",
"$$\n",
"\tp_{short}\\left(z_{k}^{(i)}| x_{k}, m\\right) = \\frac{2}{d} \\begin{cases}\n",
" 1 - \\frac{z_{k}^{(i)}}{d} & \\text{if} \\quad 0 \\leq z_{k}^{(i)} \\leq d \\text{ and } d \\neq 0\\\\\n",
" 0 & \\text{otherwise}\n",
"\\end{cases}\n",
"$$\n",
"\n",
"Case (3) is represented by a large spike in probability at the maximal range value, so that reflected measurements do not significantly discount particle weights. While this is really a delta function $\\delta(z_k^{(i)} - z_{max})$ it can be approximated by a uniform distribution close to $z_{max}$, for small $\\epsilon$. For this problem, choose $\\epsilon = 0.1$.\n",
"\n",
"$$\n",
"\tp_{max}(z_{k}^{(i)} | x_{k}, m) =\\begin{cases}\n",
"\t\\frac{1}{\\epsilon} & \\text{if} \\quad z_{max} - \\epsilon \\leq z_k^{(i)} \\leq z_{max}\\\\\n",
"\t0 & \\text{otherwise} \n",
"\t\\end{cases}\n",
"$$\n",
"\n",
"Case (4) is represented by a small uniform value, to account for unforeseen effects.\n",
"\n",
"$$\n",
"\tp_{rand}(z_{k}^{(i)} | x_{k}, m) = \\begin{cases}\n",
"\t\\frac{1}{z_{max}} & \\text{if} \\quad 0\\leq z_{k}^{(i)} \\leq z_{max}\\\\\n",
"\t0 & \\text{otherwise} \n",
"\t\\end{cases}\n",
"$$\n",
"\t\n",
"These four different distributions are now mixed by a weighted average, defined by the parameters $\\alpha_{hit}, \\alpha_{short},\\alpha_{max},\\alpha_{rand}$:\n",
"\n",
"$$\n",
"\t p(z_{k}^{(i)}| x_{k}, m) = \\alpha_{hit} \\cdot p_{hit}(z_{k}^{(i)}| x_{k}, m) + \\alpha_{short} \\cdot p_{short}(z_{k}^{(i)}| x_{k}, m) + \\alpha_{max} \\cdot p_{max}(z_{k}^{(i)}| x_{k}, m) + \\alpha_{rand} \\cdot p_{rand}(z_{k}^{(i)}| x_{k}, m) \n",
"$$\n",
"\n",
"Note that in order for $p(z_{k}^{(i)}| x_{k}, m)$ to be a probability distrubution we must have that:\n",
"\n",
"$$\n",
"\\alpha_{hit}+\\alpha_{short}+\\alpha_{max}+\\alpha_{rand}=1\n",
"$$\n",
"\n",
"All together this should look like this:\n",
"\n",
"<img src=\"figures/sensor_model_slice2.png\" width=\"600\">\n",
"\t \n",
"Find the values of $p(z_{k}^{(i)}| x_{k}, m)$ for the values of $z_k^{(i)}$ below and the following parameters:\n",
"$\\alpha_{hit} = 0.74$,\n",
"$\\alpha_{short}=0.07$,\n",
"$\\alpha_{max}=0.07$,\n",
"$\\alpha_{rand}=0.12$,\n",
"$\\sigma=0.5\\text{ m}$,\n",
"$z_{max}=10\\text{ m}$, and\n",
"$d = 7\\text{ m}$. \n",
"*Note: exp(x) is the same as $e^{x}$*\n",
"\n",
"**i.** $z_{k}^{(i)} = 0\\text{ m}$\n",
"\n",
"**ii.** $z_{k}^{(i)} = 3\\text{ m}$\n",
"\n",
"**iii.** $z_{k}^{(i)} = 5\\text{ m}$\n",
"\n",
"**iv.** $z_{k}^{(i)} = 8\\text{ m}$\n",
"\n",
"**v.** $z_{k}^{(i)} = 10\\text{ m}$"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4NYdsbw57uqy"
},
"source": [
"## Part B - Programming Assignment and Gradescope Autograder\n",
"\n",
"### 1. Getting Started\n",
"\n",
"Grab the skeleton code from this repository and make a repo in your team's organization - clone it in your
```racecar_ws/src
``` folder.\n",
"\n",
"#### 1.1 Installing Dependencies\n",
"\n",
"We are helping you out a little bit by taking care of ray tracing for you. This step needs to be repeated many many times and it can be quite slow without the help of C++. Plus it happens to be the same exact code used to generate the simulated scan in the racecar simulator!\n",
"\n",
"As mentioned in the README, a TA will help you verify that this component is installed correctly. Please find a TA to help you with this step.\n",
"\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "AQLqs5Vm7uq0"
},
"source": [
"### 2. Implementation\n",
"\n",
"#### 2.1 Motion Model\n",
"\n",
"Implement your motion model in the
```motion_model.py
``` file.\n",
"\n",
"While you could use the IMU in the motion model, we recommend using the wheel odometry coming from dead-reckoning integration of motor and steering commands. It will probably be less noisy than the IMU except in extreme operating conditions.\n",
"\n",
"The `/odom` topic in the simulator and the `/vesc/odom` topic on the car already give you odometry values, expressed both as an instantaneous velocity and angular velocity, as well as a pose in the global frame that has been accumulated over time. See the [Odometry message](http://docs.ros.org/melodic/api/nav_msgs/html/msg/Odometry.html) for more information.\n",
"\n",
"The precise definition of your motion model is up to you. The main constraints are:\n",
"\n",
"- You must add random noise! Otherwise, as mentioned earlier, the particles won't spread out.\n",
"- As the noise approaches zero, the motion model should become exactly equal to the deterministic model you worked with in the writing section.\n",
"\n",
"You might empirically determine your noise coefficients based on what works, or could try to gather data from the car which allows you to directly determine your measurement uncertainty. You might also consider how your steering commands correspond to changes in state and use this information in fusion with your odometry data for a more accurate model.\n",
"\n",
"We recommend you start with the formulas you derived in [Question 1 of part A](#question1) and think of ways to add noise to these equations in a way that makes sense geometrically. The algorithm
```sample_motion_model_odometry
``` in the [Probabilistic Robotics textbook](https://docs.ufpr.br/~danielsantos/ProbabilisticRobotics.pdf) may also help although it is not necessarily accurate to the car's geometry. This book is a fantastic resource on many parts of this lab."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-W4TI3Rd7uq1"
},
"source": [
"#### 2.2 Sensor Model\n",
"\n",
"Implement your sensor model in the
```sensor_model.py
``` file.\n",
"\n",
"Once you have updated particle positions via your motion model, you use the sensor model to assign likelihood weights to each particle. This allows good hypotheses to have better likelihood of being sampled on the next iteration, and visa versa for bad hypotheses.\n",
"\n",
"In the starter code, we've provided function signatures for `evaluate` and `precompute_sensor_model` please make sure you implement these without changing the function signature.\n",
"\n",
"\n",
"#### Precomputing the model\n",
"\n",
"You may have noticed while doing the writing section that the sensor model is quite complicated and requires a lot of multiplications and non-trivial functions. In order to perform these operations more efficiently, we will resort to discretization of the range values. Indeed you can think about the sensor model as a function that takes as input the actual distance $d$ and the measured distance $z_k^{(i)}$ and returns a probability. So rather than evaluating this function millions of times at runtime, you can evaluate it on a discretized grid of inputs $z_k^{(i)}$ and $d$ and store these values in a lookup table.\n",
"\n",
"When we discretize in this way, for calculating $p_{max}$ we can set $\\epsilon$ to $1$ \"pixel\" or grid square of the lookup table. Thus the expression for $p_{max}$ with discretization simplifies to:\n",
"\n",
"$$\n",
"\tp_{max}(z_{k}^{(i)} | x_{k}, m) =\\begin{cases}\n",
"\t1 & \\text{if} \\quad z_k^{(i)} = z_{max}\\\\\n",
"\t0 & \\text{otherwise} \n",
"\t\\end{cases}\n",
"$$\n",
"\n",
"This strategy is described in section 3.4.1 of [2]. The motivation is twofold:\n",
"\n",
"1. Reading off a probability value from a table based on your actual sensor measurement and raycast ‘ground truth’ values is **MUCH** faster than doing all the math every time;\n",
"2. This provides a good opportunity to numerically normalize the $p(z_{k}^{(i)} |x_k, m)$ function (make sure your probabilities sum to $1$).\n",
"\n",
"Hints for normalization (and passing the unit tests):\n",
"- before normalizing your whole precomputed table, you should make sure you are normalizing your $p_{hit}$ values across columns ($d$ values); while the $\\frac{1}{\\sqrt{2\\pi\\sigma^2}}$ term is enough to normalize in the continuous case, once we discretize the $p_{hit}$ values will no longer sum to $1$ without an additional normalization\n",
"- make sure you normalize all of the columns of your precomputed table (corresponding to $d$ values) to sum to $1$\n",
"\n",
"If you plot your surface (using $\\alpha_{hit} = 0.74$, $\\alpha_{short} = 0.07$, $\\alpha_{max} = 0.07$, $\\alpha_{rand} = 0.12$, and $\\sigma_{hit} = 8.0$) you should get something like this:\n",
"\n",
"<img src=\"figures/sensor_model2.png\" width=\"600\">\n",
"\n",
"\n",
"#### Ray casting\n",
"\n",
"As we mentioned, we are doing ray casting for you. If you pass an array of particles (and $n\\times 3$ matrix) into our ray tracer you will get an $n\\times m$ matrix where $m$ is the number of lidar beams. This is essentially a stack of $n$ lidar messages which you are probably very familiar with at this point. There is one lidar message for each particle, which is the scan that would be seen from that point.\n",
"\n",
"For example if you wanted to call it on a bunch of particles that are all $(0, 0, 0)$ you could do:\n",
"\n",
" poses = np.zeros((n, 3))\n",
" scans = self.scan_sim.scan(poses)\n",
"\n",
"Also note that when you are downsampling your lidar, you should make sure the `num_beams_per_particle` parameter in `params.yaml` matches the number of beams you are downsampling to - this parameter will determine the number of columns $m$ in the matrix returned by the `self.scan_sim.scan` call. For more notes on downsampling see the Tips and Tricks section.\n",
"\n",
"#### Converting from meters to pixels\n",
"\n",
"In your sensor model `evaluate` function, make sure you are converting the distances in your lidar observations as well as in the result of ray casting from meters to pixels!\n",
"\n",
"In your `params.yaml`, make sure the `lidar_scale_to_map_scale` parameter is set to $1.0$ for the 2D racecar simulator or the real car (it might be convenient to make separate launch files for running localization in the different environments that require different parameter settings). \n",
"\n",
"Then, to scale your lidar observations and ray casting scans from meters to pixels, divide them by `self.map_resolution`*`lidar_scale_to_map_scale`. \n",
"\n",
"Finally, clip all lidar or ray casting distances that are $> z_{max}$ to $z_{max}$, and all distances that are $< 0$ to $0$.\n",
"\n",
"After these steps, your lidar observation and ray casting distances should be ready to use in your precomputed sensor model table!"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "U0Wo5yXy7uq2"
},
"source": [
"#### 2.3 Putting it all together: The particle filter\n",
"\n",
"Once you have a motion model and a sensor model you can use them to construct the complete MCL algorithm. Do this in the
```particle_filter.py
``` file.\n",
"\n",
"From a very high level this will work as follows:\n",
"\n",
"- Whenever you get odometry data use the motion model to update the particle positions\n",
"- Whenever you get sensor data use the sensor model to compute the particle probabilities. Then resample the particles based on these probabilities\n",
"- Anytime the particles are update (either via the motion or sensor model), determine the \"average\" (term used loosely) particle pose and publish that transform.\n",
"\n",
"You will also consider how you want to initialize your particles. We recommend that you should be able to use some of the interactive topics in rviz to set an initialize \"guess\" of the robot's location with a random spread of particles around a clicked point or pose. Localization without this type of initialization (aka the global localization or the [kidnapped robot problem](https://en.wikipedia.org/wiki/Kidnapped_robot_problem)) is very hard.\n",
"\n",
"As for determining the \"average pose\" of all of your particles, be careful with taking the average of $\\theta$ values. See this page: [mean of circular quantities](https://en.wikipedia.org/wiki/Mean_of_circular_quantities). Also consider the case where your distribution is multi modal - an average could pick a very unlikely pose between two modes. What better notions of \"average\" could you use?\n",
"\n",
"Finally, publish this \"average pose\" as a dynamic transformation between `/map` and `/base_link`. This will allow you to visualize the location of the car on the map in RViz when testing on the real car! Note that publishing this transform isn't necessary when testing in the simulator; in fact, the simulator already publishes a ground-truth `/map` to `/base_link` transformation. To avoid conflicting with the simulator's `/map` to `/base_link` transformation, you should publish the \"average pose\" transform with `/base_link_pf` instead of `/base_link` when testing in simulation.\n",
"\n",
"You may find the [numpy.random.choice](https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html) function useful for resampling!\n",
"\n",
"Check out the Tips and Tricks section later in this document for more helpful info! ( ̄ー ̄)b\n",
"\n",
"#### Debugging\n",
"\n",
"The best way to debug the particle filter is by isolation. First, turn off your sensor model (don't ever resample). If your particles start out in a small location, they will begin to move together and slowly spread out as the travel distance increases. Remember in the absense of noise the motion model should exactly mimic the motion of the car.\n",
"\n",
"Make some test cases for the sensor model. For example scans that are identical should have a very high probability. Scans that are completely different should have a very low probability. What other useful cases could you write tests for? Visualizing is super helpful. Use the [PoseArray message](http://docs.ros.org/lunar/api/geometry_msgs/html/msg/PoseArray.html)!!!!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PBU6IaKk7uq2"
},
"source": [
"### 3. Test your implementation in simulation\n",
"\n",
"First, launch your localization node. Then, launch the racecar simulator. The order is important as your node will wait for the simulator to start up.\n",
"\n",
"
```bash\n",
"ros2 launch localization localize.launch.xml\n",
"ros2 launch racecar_simulator simulate.launch.xml\n",
"
```\n",
"\n",
"on the racecar, make sure you use `localization_simulate.launch.xml` rather than `simulate.launch.xml`, as we don't want the real and simulation scans to be conflicting. \n",
"\n",
"The simulator publishes ground truth position so it is a great way to test your particle filter before testing it on the real car. Augment the odometry data with various types of noise and measure how your particle filter does compared to ground truth in a number of of experiments.\n",
"\n",
"Record several rosbags of random driving in the simulator. You can play these back while your particle filter is running for repeated tests. \n",
"\n",
"First try adding gaussian noise to the velocity and angular velocity given by the simulated odometry (`/odom`). Plot the error of the pose estimated by your particle filter versus the ground truth position for varying levels of noise. You can get the ground truth position by listening to the transformation between `/map` and `/base_link`.\n",
"\n",
"Consider making a node that adds noise to the odometry rather than hard coding it into your particle filter. What other types of noise could you add to better replicate the noise you see on the actual car? How can you improve the performance of your particle filter in the presence of this noise?\n",
"\n",
"In all of these tests make sure that your position is being published in \"realtime\". This means **at least 20hz**. This may seem quite fast, but when you get your car to race speeds (~10 meters per second) the poses you publish could be as much as half a meter apart which will be tough for your controller to handle. Your publish rate is upper bounded by the speed of the odometry which is 50hz on the car."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "hc-7x3Gf7uq3"
},
"source": [
"### 4. Tips and Tricks\n",
"\n",
"Since the algorithm must run in realtime with a large number of particles, an efficient implementation is a requirement for success. There are a few tricks you can use, primarily:\n",
"\n",
"- Downsample your laser scan: you lidar has > 1000 beams but many of them are redundant. Downsample to ~100 for good performance (you could even try lower if you want). This will make the probability distribution over your state space less \"peaked\" and increase the number of particles you can maintain in real time (less ray casting).\n",
"\n",
"- Also to make your probability distribution less peaked you could also try to \"squash\" your sensor model output probability by raising it to a power of less than one (1/3 for example) to make your distribution even less peaked. If you are confused by this paragraph, look at [4,5]. \n",
"\n",
"- Don't go crazy with particles: start with ~200. You can probably get your code running with thousands of particles but it will take some well crafted code to run in real time.\n",
"\n",
"- Remember that your sensor model and motion model don't need to run at the same rate! The motion model is probably much faster and over short periods of time it will accurately track the motion of the car. The sensor model can correct the dift of the motion model at a slower rate if necessary.\n",
"\n",
"- Use
```ros2 topic hz
``` to check the rate at which you are publishing the expected transformation from the map to the car's position. It should be greater than 20hz for realtime performance.\n",
"\n",
"- Use numpy arrays for absolutely everything - python lists → slow\n",
" - Use numpy functions on numpy arrays to do any computations.\n",
" - avoid Python for loops like the plague\n",
" - [Slice indexing is your (best) friend.](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html)\n",
" \n",
"- Use the smallest number of operations required to perform your arithmetic\n",
"\n",
"- Avoid unnecessary memory allocations\n",
" - Cache and reuse important numpy arrays by setting them to the right size during initialization of your particle filter as “self” variables\n",
" \n",
"- Identify your critical code paths, and keep them clean\n",
" - Conversely, don’t worry too much about code that is called infrequently\n",
" \n",
"- Push code to Cython/C++ if necessary\n",
" - But don't worry about this too much - we already did this for the ray tracing part.\n",
" \n",
"- Avoid excessive function calls - function call overhead in Python → slow\n",
"\n",
"- Don’t publish visualization messages unless someone is subscribed to those topics, this can cause your system to be slower\n",
"\n",
"- Use a profiler to identify good candidates for optimization, but also, try a teammate's computer, some computers are just slower\n",
"\n",
"- On the real car, make sure your Jetson is running in Max-N mode for best performance\n",
"\n",
"- If you want an even faster (albeit more complicated to interface with) ray tracer check out [range_libc](https://github.com/kctess5/range_libc). This was written by RSS TA Corey Walsh and it is heavily optimized."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "JUJkcmQkAyUi"
},
"source": [
"## Part C - Localization in ROBOT\n",
"\n",
"For this part you need to refactor and tune your implementation from part B to work in your robot as well as the 2D simulation environment.\n",
"To get started, you will need to adapt your solution to use the correct topics from the car:\n",
"If you are using `/launch/localize.launch`, these topics and other parameters can be set in `/params.yaml`. You may also decide to make your own separate launch files for running your implementation in 2D and the real world, and set the parameters directly from the launch files.\n",
"\n",
"Don't forget about **rosbags** as a useful tool for both debugging and experimental analysis! Refer back to the Lab 3 handout for a reminder on how to use rosbag recording functionality.\n",
"\n",
"Finally, don't forget that the `lidar_scale_to_map_scale` parameter in `params.yaml` $1.0$ for the 2D racecar simulation environment or the real car."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NUl6iDa57uq4"
},
"source": [
"## Part D - (OPTIONAL Extra Credit) Bayes Filter\n",
"The Bayes Filter presented in class in Lecture 10 consists of 2 parts:\n",
"1. The prediction step: $$\\mathbb{P}(x_k | u_{1:k}, z_{1:k-1}) = \\int \\underbrace{\\mathbb{P}(x_k | x_{k-1}, u_k)}_{\\text{Motion Model}} \\mathbb{P}(x_{k-1} | u_{1:k-1}, z_{1:k-1}) dx_{k-1}$$\n",
"\n",
"2. The update step: $$ \\mathbb{P}(x_k | u_{1:k}, z_{1:k}) = \\alpha \\underbrace{\\mathbb{P}(z_k|x_k)}_{\\text{Measurement Model}}\\mathbb{P}(x_k | u_{1:k}, z_{1:k-1})$$\n",
"\n",
"In the above expressions, $x_k, u_k$, and $z_k$ represent the belief state, control input (odometry) and the sensor measurement at time $k$, respectively.\n",
"\n",
"**Given** the previous belief state $x_{k-1}$, the motion model $\\mathbb{P}(x_k | x_{k-1}, u_k)$, and the measurement model $\\mathbb{P}(z_k|x_k)$, **derive the prediction and update formulas stated above**.\n",
"To do this, you can rely on 3 basic probability principles:\n",
"1. Bayes Rule: $$\\mathbb{P}(A|B) = \\frac{\\mathbb{P}(A \\cap B)}{\\mathbb{P}(B)}$$ which states that the probability of Event A given Event B is equal to the probability of Event A and Event B over the probability of Event B.\n",
"\n",
"2. The Law of Total Probability: $$\\mathbb{P}(A) = \\sum_{n} \\mathbb{P}(A | B_{n}) \\mathbb{P}(B_n)$$ which states that the probability of Event A is sum over the possible outcomes of Event B of the probability of Event A given a specific outcome of event B times the probability of that outcome.\n",
"\n",
"3. The Markov Property: $$\\mathbb{P}(x_{t} | x_{t-1}, u_{t}, z_{t-1}, ... u_1, z_0) = \\mathbb{P}(x_t | x_{t-1}, u_{t}) $$ which states that the current belief state is only dependent on the previous belief state and previous action.\n",
"\n",
"Note: We define $u_k$ to be the action that moves us from state $x_{k-1}$ to $x_k$, so the sequence of inputs starts at $u_1$."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "p6RNvklA7uq5"
},
"source": [
"## Part E - (OPTIONAL Extra Credit) Simultaneous Localization and Mapping \n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1pqD8sAF7uq6"
},
"source": [
"## References\n",
"\n",
"1. <a name=\"ThrunRobust\"></a>[S. Thrun, D. Fox, W. Burgard and F. Dellaert. “Robust Monte Carlo Localization for Mobile Robots.” Artificial Intelligence Journal. 2001](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.6016&rep=rep1&type=pdf).\n",
"2. <a name=\"FoxMarkov\"></a>[D. Fox, W. Burgard, and S. Thrun. “Markov localization for mobile robots in dynamic environments,” Journal of Artificial Intelligence Research, vol. 11, pp. 391427, 1999](https://arxiv.org/abs/1106.0222).\n",
"3. <a name=\"FoxAdvances\"></a>[D. Fox. “KLD-sampling: Adaptive particle filters,” Advances in Neural Information Processing Systems 14 (NIPS), Cambridge, MA, 2002. MIT Press](https://papers.nips.cc/paper/1998-kld-sampling-adaptive-particle-filters.pdf).\n",
"4. <a name=\"BagnellPracticle\"></a> [D. Bagnell “Particle Filters: The Good, The Bad, The Ugly”\n",
"](http://www.cs.cmu.edu/~16831-f12/notes/F14/16831_lecture05_gseyfarth_zbatts.pdf)\n",
"5. <a name=\"BootsImportance\"></a>[B. Boots “Importance Sampling, Particle Filters”](https://web.archive.org/web/20170209092754/http://www.cc.gatech.edu/~bboots3/STR-Spring2017/Lectures/Lecture4/Lecture4_notes.pdf)\n",
"6. <a name=\"WalshCDDT\"></a>[C. Walsh and S. Karaman “CDDT: Fast Approximate 2D Ray Casting for Accelerated Localization”](https://arxiv.org/abs/1705.01167)"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "loc lab.ipynb",
"provenance": []
},
"hide_input": false,
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"latex_envs": {
"LaTeX_envs_menu_present": true,
"autoclose": false,
"autocomplete": false,
"bibliofile": "biblio.bib",
"cite_by": "apalike",
"current_citInitial": 1,
"eqLabelWithNumbers": true,
"eqNumInitial": 1,
"hotkeys": {
"equation": "Ctrl-E",
"itemize": "Ctrl-I"
},
"labels_anchors": false,
"latex_user_defs": false,
"report_style_numbering": false,
"user_envs_cfg": false
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 0
}
CONTRIBUTING
No CONTRIBUTING.md found.