CMPUT 503 Exercise 2

This is the second lab assignment of the course.

Part 1

ROS Subscriber and Publisher

For this exercise we were tasked to implement a basic ROS subscriber and publisher that we could use later in the exercise. The ROS framework uses a publish-subscribe architecture to send and receive data (or more specifically, messages) between different nodes. Messages are transported over topics: named channels that publishers write messages to and that subscribers listen on. Messages are strongly typed, meaning their schema must be defined at build time, otherwise things will not work. Moreover, a topic carries a single message type, so every publisher and subscriber on that topic must use the same type. ROS ships with a collection of standard message types, and you can also define your own.
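To illustrate the pattern (this is not the exercise code itself, just a minimal rospy example using the standard String message and a made-up /chatter topic):

```python
#!/usr/bin/env python3
# Minimal publish-subscribe illustration with rospy; the /chatter topic and the
# message contents are made up purely for this example.
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("heard: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("pubsub_demo")
    pub = rospy.Publisher("/chatter", String, queue_size=10)  # publishing side
    rospy.Subscriber("/chatter", String, callback)            # subscribing side
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello duckie"))
        rate.sleep()
```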

While the Duckiebot software stack is based on ROS, the Duckietown developers provide a custom template repository that one can fork to implement their own ROS package(s) and deploy them to the Duckiebots. This template repository contains a Dockerfile that is used to pull the base images, build the container, and run the Duckiebot-specific software, and the end user uses it to bootstrap their project.

To make our first custom publisher and subscriber, we were tasked with grabbing the video feed from the built-in camera and publishing it to a custom topic. Following the Duckietown developer guide, I created a new Python file that acts as a subscriber to the camera's image topic and a publisher to my custom topic. I also added the new file to the launch file so that ROS knows to deploy it as a new node. Finally, I added the launch file to the launch script, which calls roslaunch on it to start my node when the container is running on the robot.

Image Subscriber

To get the image data from the camera I first needed to find the topic that carries it. Running rostopic list gave me the list of all topics available on the Duckiebot. One stood out: based on its name, the /csc22935/camera_node/image/compressed topic seemed most likely to contain the data I needed. Subscribing to it in rqt_image_view confirmed my suspicion, as I was able to see the images generated by the camera. Running rostopic info on the topic showed that it uses the CompressedImage message type, and running rosmsg info on that type gave me its schema. I was only interested in the image itself, and thankfully there is an attribute called data that is an array of unsigned bytes; the rest is just metadata. So I created a rospy.Subscriber to listen on /csc22935/camera_node/image/compressed, and printing the result gave me a bunch of seemingly random values. Thankfully, this meant I was receiving data from the camera, and the garbage I was reading was the JPEG-encoded image data. Now I just needed to publish this to my own custom topic.

Image Publisher

Republishing the image data was pretty straightforward. First I created a new CompressedImage message and made a deep copy of the received data, since the original is consumed by the subscriber callback. The new CompressedImage also has its header set, so it knows when it was created, and it is then published to a custom topic. The publisher was created using rospy.Publisher on a custom topic that I called /csc22935/raw_image/compressed, using the same message type as the subscriber.
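Conceptually the node boils down to something like the sketch below. This is a stripped-down illustration: my actual node is built on the Duckietown node template, so the class structure differs, but the topic names and message type match what is described above.

```python
#!/usr/bin/env python3
# Minimal sketch of the subscribe-and-republish idea described above.
import rospy
from sensor_msgs.msg import CompressedImage

class ImageRepublisher:
    def __init__(self):
        # Publisher to the custom topic, same message type as the camera topic.
        self.pub = rospy.Publisher("/csc22935/raw_image/compressed",
                                   CompressedImage, queue_size=1)
        # Subscriber to the camera's compressed image topic.
        self.sub = rospy.Subscriber("/csc22935/camera_node/image/compressed",
                                    CompressedImage, self.callback, queue_size=1)

    def callback(self, msg):
        out = CompressedImage()
        out.header.stamp = rospy.Time.now()  # record when the copy was made
        out.format = msg.format              # e.g. "jpeg"
        out.data = msg.data[:]               # copy of the JPEG-encoded bytes
        self.pub.publish(out)

if __name__ == "__main__":
    rospy.init_node("image_republisher")
    ImageRepublisher()
    rospy.spin()
```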

Screenshot of the source code of the image subscriber and publisher

Below is a screenshot of the source code of my image subscriber and publisher

image sub code

Screenshot of the new image subscriber

Below is a screenshot of rqt_image_view showing the image being published to the custom topic

image sub

Odometry

In order to know where our robot is within its workspace based on movement alone, we first need the robot's odometry. The motors have rotary encoders that count the number of ticks of rotation each wheel has turned. Counting the number of ticks and using a simple equation (see below; a short code sketch follows the symbol list) allows us to determine how far each wheel has traveled.

$$ \Delta X = \frac{2\cdot\pi\cdot R \cdot N_{ticks}}{N_{total}} $$

Where:

  • $R$ is the wheel radius,
  • $N_{ticks}$ is the number of ticks measured,
  • $N_{total}$ is the total number of ticks per full wheel rotation, which in our case is 135.
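As a quick sketch of this calculation in Python (the wheel radius value is an assumed placeholder; only the 135 ticks per revolution comes from the exercise):

```python
import math

TICKS_PER_REV = 135      # encoder ticks per full wheel rotation (from the exercise)
WHEEL_RADIUS = 0.0318    # wheel radius R in meters (assumed value for illustration)

def ticks_to_distance(n_ticks, radius=WHEEL_RADIUS, ticks_per_rev=TICKS_PER_REV):
    """Distance traveled by one wheel given a change in encoder ticks."""
    return 2.0 * math.pi * radius * n_ticks / ticks_per_rev

# e.g. a quarter revolution of the wheel:
# ticks_to_distance(135 / 4) ≈ 0.05 m
```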

Now that we can calculate the distance each wheel has traveled, we can use transformation matrices to convert the resulting pose estimates between the robot frame and the world frame.

  1. What is the relation between your initial robot frame and world frame? How do you transform between them?

The robot's initial pose in the world frame is $x = 0.32$, $y = 0.32$, with a heading of $\theta = \frac{\pi}{2}$, while in its own robot frame the initial pose is $(0, 0, 0)$ for the $x$, $y$, and $\theta$ components respectively. We can convert between the two frames using forward and inverse kinematic transformations. To convert from the world frame to the robot frame we use this equation, where $\theta$ is the robot's heading in the world frame:

$$ \begin{bmatrix} x_R \\ y_R \\ \theta_R \end{bmatrix} = \begin{bmatrix} \cos(\theta) & \sin(\theta) & 0 \\ -\sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \left( \begin{bmatrix} x_I \\ y_I \\ \theta_I \end{bmatrix} - \begin{bmatrix} 0.32 \\ 0.32 \\ \frac{\pi}{2} \end{bmatrix} \right) $$

The last term subtracts the offset between the two origins, accounting for the different origins of the robot frame and the world frame, before rotating into the robot frame.

  2. How do you convert the location and theta at the initial robot frame to the world frame?

To convert from the robot frame to the world frame we use the inverse of that transformation:

$$ \begin{bmatrix} x_I \\ y_I \\ \theta_I \end{bmatrix} = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_R \\ y_R \\ \theta_R \end{bmatrix} + \begin{bmatrix} 0.32 \\ 0.32 \\ \frac{\pi}{2} \end{bmatrix} $$

As in the previous answer, the last term is the offset that accounts for the different origins of the robot frame and the world frame.
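A small NumPy sketch of these two transformations, using the initial heading $\theta = \frac{\pi}{2}$ from above (purely illustrative, not the node's actual code):

```python
import numpy as np

# Initial world-frame pose of the robot frame's origin (from the answer above).
OFFSET = np.array([0.32, 0.32, np.pi / 2])

def rotation(theta):
    """3x3 planar rotation acting on (x, y, theta) vectors."""
    return np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])

def robot_to_world(pose_r, theta=np.pi / 2):
    """Map a pose in the initial robot frame to the world frame."""
    return rotation(theta) @ pose_r + OFFSET

def world_to_robot(pose_w, theta=np.pi / 2):
    """Inverse mapping: world frame back to the initial robot frame."""
    return rotation(theta).T @ (pose_w - OFFSET)

# Sanity check: the robot-frame origin maps to the initial world pose.
# robot_to_world(np.zeros(3)) -> [0.32, 0.32, 1.5708...]
```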

  3. Can you explain why there is a difference between actual and desired location?

There could be many factors that could cause an error between the true location and the desired location. Some of these factors could include:

  • Wheel slip.
  • Loose tolerances within the encoders.
  • Non-consistent driving surface.
  • No feedback mechanism to check if the motors moved the desired amount.
  • Overshooting and undershooting of the desired target distance.

  4. Which topic(s) did you use to make the robot move? How did you figure out the topic that could make the motor move?

We used the /hostname/wheels_driver_node/wheels_cmd topic and published WheelsCmdStamped messages to drive the left and right motors at the desired velocities. We figured this topic would move the robot by looking at the list of all available topics with rostopic list and guessing, based on its descriptive name, that it drives the wheels.
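In code, driving the wheels comes down to publishing to that topic. Below is a minimal sketch; the velocities and the two-second drive time are placeholder values, and the WheelsCmdStamped message type comes from duckietown_msgs:

```python
#!/usr/bin/env python3
# Minimal sketch of driving the wheels by publishing WheelsCmdStamped messages.
import rospy
from duckietown_msgs.msg import WheelsCmdStamped

if __name__ == "__main__":
    rospy.init_node("wheel_cmd_demo")
    pub = rospy.Publisher("/csc22935/wheels_driver_node/wheels_cmd",
                          WheelsCmdStamped, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect

    cmd = WheelsCmdStamped()
    cmd.header.stamp = rospy.Time.now()
    cmd.vel_left = 0.6   # left wheel velocity
    cmd.vel_right = 0.6  # right wheel velocity
    pub.publish(cmd)

    rospy.sleep(2.0)     # drive forward for a couple of seconds

    cmd.header.stamp = rospy.Time.now()
    cmd.vel_left = 0.0   # stop both wheels
    cmd.vel_right = 0.0
    pub.publish(cmd)
```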

  5. Which speed are you using? What happens if you increase/decrease the speed?

We used a value of 0.6 for forward movement and 0.6 for rotational movement. Increasing the speed makes the robot move faster but risks overshooting the desired distance, while decreasing it too much can prevent the robot from moving at all, because static friction becomes greater than the torque the motors can produce.

  6. How did you keep track of the angle rotated?

By using the following kinematic equation below:

$$\begin{bmatrix}\dot{x}_R \\\dot{y}_R \\\dot{\theta}_R \\ \end{bmatrix} = \begin{bmatrix} \frac{r\dot{\varphi}_r}{2} + \frac{r\dot{\varphi}_l}{2} \\ 0 \\ \frac{r\dot{\varphi}_r}{2\cdot l} - \frac{r\dot{\varphi}_l}{2\cdot l} \\ \end{bmatrix}$$

We can find the change of the robot’s directional pose based on the left and right wheels’ linear distance change.
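In discrete form this becomes a per-step integration of the wheel displacements. The sketch below is illustrative; the baseline value (the wheel separation $2l$) is an assumed placeholder:

```python
import math

BASELINE = 0.1  # wheel separation 2*l in meters (assumed placeholder value)

def update_pose(x, y, theta, d_left, d_right):
    """Integrate one odometry step from the left/right wheel travel distances (meters)."""
    d_center = (d_right + d_left) / 2.0      # forward motion in the robot frame
    d_theta = (d_right - d_left) / BASELINE  # change in heading
    # Project the robot-frame motion into the world frame using the current heading.
    x += d_center * math.cos(theta)
    y += d_center * math.sin(theta)
    theta += d_theta
    return x, y, theta
```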

  7. Which topic(s) did you use to make the robot rotate?

We used the same topic to rotate the robot as to move the robot forward.

  8. How did you estimate/track the angles your duckieBot has traveled?

Using the equation from the answer to question 6, we added all of the changes in the robot's angle to the robot's initial angle throughout the entire execution of the robot's movement.

While we could implement these equations in the motor control node to estimate the robot's pose and correct for drift, we would much rather use Duckietown's implementation [1] for the Duckiebot, as it is more likely to be correct and robust than our own. For example, their implementation uses an ApproximateTime synchronizer to align the timestamps of the two encoder messages, so that the message from one encoder is close in time to the message from the other. The code reference is linked below.

Part 2

In this part we are tasked with creating a multi-state program where our robot moves about the lab's Duckietown environment, following a pre-programmed route and setting the LED light pattern to indicate the robot's current state.

Architecture

Below is the ROS computation graph for this exercise. It is composed of two nodes:

  • The state_control_node is responsible for maintaining the current state of the robot, handling state transitions, and setting the LED patterns. It also gives commands to the motor_control_node to move the robot to a specified location in the world frame.
  • The motor_control_node is responsible for moving the robot to a specific location based on commands received from the state_control_node. It handles all odometry calculations, dead reckoning, error corrections, and motor control for the robot. Once the current command is successfully completed, an acknowledgement is sent back to the state_control_node indicating that the robot finished the current task.

ROS node setup

The State Control Node

The state control node, as stated before, is responsible for maintaining the current state, handling state transitions, and setting the LEDs. In our implementation, states are executed sequentially. Once a command is published to the motor_control_node, the state_control_node blocks until it receives confirmation from the motor_control_node that the command completed successfully. Only then does it move on to publishing the next command. The commands themselves are fairly simple, matching the simple tasks our robot needed to do: each is a formatted string published to the motor_control_node as a String message. For example, "forward:2.3" means move forward 2.3 meters from the current position in the world frame, and "right:80" means rotate right (clockwise) 80 degrees. Parsing is done in the motor_control_node.
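A simplified sketch of this publish-then-wait pattern is shown below. The command and acknowledgement topic names here are illustrative assumptions, not necessarily the names used in our actual nodes:

```python
#!/usr/bin/env python3
# Simplified sketch of the state node's publish-then-wait pattern.
import rospy
from std_msgs.msg import String

class StateControlSketch:
    def __init__(self):
        self.cmd_pub = rospy.Publisher("/csc22935/motor_control_node/command",
                                       String, queue_size=1)
        self.done = False
        rospy.Subscriber("/csc22935/motor_control_node/ack", String, self.on_ack)

    def on_ack(self, msg):
        self.done = True  # motor node reports the command finished

    def run_command(self, command):
        """Publish one command (e.g. "forward:2.3" or "right:80") and block until acknowledged."""
        self.done = False
        self.cmd_pub.publish(String(data=command))
        while not self.done and not rospy.is_shutdown():
            rospy.sleep(0.1)

if __name__ == "__main__":
    rospy.init_node("state_control_sketch")
    node = StateControlSketch()
    rospy.sleep(1.0)
    for cmd in ["forward:1.25", "right:90", "forward:1.25"]:
        node.run_command(cmd)
```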

LED light pattern

For setting the light patterns for different stages of the task, we first run this command to launch the led_emitter_node:

dts duckiebot demo --demo_name led_emitter_node --duckiebot_name $BOT --package_name led_emitter --image duckietown/dt-core:daffy-arm64v8

We then use the service <VEHICLE_NAME>/led_emitter_node/set_custom_pattern to set the different LED patterns to their corresponding state. Since we need to call this service multiple times, we keep the connection persistent.
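In code, the persistent service call looks roughly like the sketch below. The service and message types come from duckietown_msgs, but the exact request fields are written from memory and should be checked against the installed message definitions:

```python
#!/usr/bin/env python3
# Sketch of calling the LED service over a persistent connection.
# Field names on LEDPattern are from memory and may differ slightly.
import rospy
from duckietown_msgs.srv import SetCustomLEDPattern
from duckietown_msgs.msg import LEDPattern

if __name__ == "__main__":
    rospy.init_node("led_demo")
    srv_name = "/csc22935/led_emitter_node/set_custom_pattern"
    rospy.wait_for_service(srv_name)
    # persistent=True keeps the connection open across the multiple state changes.
    set_pattern = rospy.ServiceProxy(srv_name, SetCustomLEDPattern, persistent=True)

    pattern = LEDPattern()
    pattern.color_list = ["red"] * 5      # one colour per LED
    pattern.color_mask = [1] * 5          # apply to all LEDs
    pattern.frequency = 0.0               # solid colour, no blinking
    pattern.frequency_mask = [0] * 5
    set_pattern(pattern)
```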

The colour pattern for each state is defined below:

  1. Red
  2. Blue
  3. Green
  4. Purple

The Motor Control Node

The motor control node, as stated before, is responsible for all movement command execution, odometry calculations, dead reckoning, error corrections, and motor control for the robot. The motor_control_node acts as a listener to the state_control_node: it does nothing until it receives a command from the controller, the state_control_node, and then executes that command to the best of its ability. Three functions implement the movements required for the lab exercise: one moves the robot forward by a specified distance in meters, one rotates the robot by a specified angle in degrees, and one moves the robot in an arcing motion (which is admittedly a bit hacky).

Received commands are appended to a list that is used to update the robot's target pose and correct its estimated pose; this list also serves as a useful debugging tool for spotting errors in the robot's odometry. For forward movement, a vector connects the robot's pre-movement position to its target position, and the robot follows that vector to the target. For rotational movement, the robot rotates in place until its theta odometry is close to the target heading; rotation in place is done by spinning one motor in one direction and the other motor in the opposite direction at the same speed, so there is no translational movement during the rotation. For the arc movement, one motor spins faster than the other, producing both translational and rotational movement at once. A sketch of the in-place rotation logic is shown below.
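This is an illustrative sketch only; the speed, tolerance, and the get_theta/set_wheels helpers are placeholders rather than our node's actual interface:

```python
import math
import time

def rotate_in_place(target_theta, get_theta, set_wheels,
                    speed=0.6, tol=math.radians(3)):
    """Spin the wheels in opposite directions until the odometry heading is near the target.

    get_theta: callable returning the current estimated heading (radians).
    set_wheels: callable taking (vel_left, vel_right).
    """
    while abs(target_theta - get_theta()) > tol:
        direction = 1.0 if target_theta > get_theta() else -1.0
        # Equal and opposite wheel velocities rotate the robot without translating it.
        set_wheels(-direction * speed, direction * speed)
        time.sleep(0.02)  # let the odometry estimate catch up
    set_wheels(0.0, 0.0)  # stop once within tolerance
```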

Corrections

Due to manufacturing defects, loose tolerances, the unpredictable nature of reality, and the fact that we can't assume a spherical duck, there will be some error or drift between the target pose and the actual pose. In this exercise we were not required to implement closed-loop control, but I found that adding some kind of feedback control loop made it much easier to complete all the required tasks without driving off Duckietown. There are many process variables (PV) we could have used to make our robot drive in a straight line:

  • The difference in distances each wheel has traveled
  • The drift between the target track and the robot’s position to that track

But the one that worked best for us was the angle between the robot vector and the target vector. The diagram below shows a visual representation of the two vectors.

robot vector diagram

  • $\vec{R}$ is the robot vector which describes where the robot is heading. This is derived from the robot’s odometry.
  • $\vec{T}$ is the target vector which describes the heading to the target position from the robot’s position.

Minimizing the angle between the robot vector and the target vector while driving forward should get our robot to the desired location. To get the angle between the two vectors, we divide their dot product by the product of their magnitudes to get the cosine of the angle, and taking the inverse cosine gives us the angle itself.

$$ \frac{\vec{R} \cdot \vec{T}}{||\vec{R} || \cdot || \vec{T}||} = \cos(\theta) \rightarrow \arccos(\cos(\theta)) = \theta $$

However, this does not give us the direction of the target vector relative to the robot vector. For that we use the cross product of the two vectors to find the sine, whose sign gives the direction: if the value is less than 0 the target is to the right of the robot, and if it is greater than 0 the target is to the left.

$$ \frac{(\vec{R}\times\vec{T})\cdot\hat{u}}{||\vec{R}|| \cdot ||\vec{T}||} = \sin(\theta) $$

where $\hat{u}$ is the unit vector normal to the driving plane.
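In code, both the magnitude and the sign of this error can be obtained in one step with atan2 of the 2D cross and dot products, which is equivalent to the arccos/arcsin route above (a small illustrative helper):

```python
import math

def heading_error(robot_vec, target_vec):
    """Signed angle (radians) from the robot vector R to the target vector T.

    Positive: target is to the left of the robot's heading.
    Negative: target is to the right.
    """
    rx, ry = robot_vec
    tx, ty = target_vec
    dot = rx * tx + ry * ty        # proportional to cos(theta)
    cross = rx * ty - ry * tx      # z-component of R x T, proportional to sin(theta)
    return math.atan2(cross, dot)  # magnitude and sign in a single call

# e.g. heading_error((0.0, 1.0), (1.0, 1.0)) ≈ -0.785 rad: the target is to the right.
```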

Now that we have both the magnitude and the direction of the error, we can feed it into our closed-loop feedback system, which in this case is a PID controller.

PID Control

A PID (proportional-integral-derivative) controller is a commonly used feedback control-loop mechanism that applies corrections to a process (or plant) so that its output, the process value, matches the desired set-point [2]. It uses three tunable parameters that account for the present error, past errors, and an estimate of future errors, providing a correction that minimizes over-corrective oscillation and unnecessary delay. The equation below is the overall control function (a discretized sketch in code follows the symbol list):

$$ u(t) = K_pe(t) + K_i \int_0^te(\tau) d\tau + K_d\frac{de(t)}{dt} $$

Where:

  • $K_p$ is the proportional gain, a tuning parameter,
  • $K_i$ is the integral gain, a tuning parameter,
  • $K_d$ is the derivative gain, a tuning parameter,
  • $e(t)$ is the error between the set-point or target point and process variable at time $t$ ,
  • $t$ is the time,
  • $\tau$ is the variable of integration (takes on values from time 0 to the present $t$ ).
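For reference, a discretized sketch of this control law is shown below; it is illustrative only, not the controller we actually ran on the robot:

```python
class PID:
    """Minimal discrete form of the control law above: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt  # accumulated (integral) term
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. pid = PID(0.4, 0.075, 0.0) with the gains listed below; each control step
# computes a correction from the heading error and biases the wheel speeds by it.
```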

Now, tuning these parameters could be a course all in itself, and I had a limited amount of time, so I guessed and checked by tuning each parameter separately and settled on the values listed below:

  • $K_p = 0.4$
  • $K_i = 0.075$
  • $K_d = 0.0$

We used someone else's PID controller [3] for implementing PID control.

  1. What is the final location of your robot as shown in your odometry reading?

The final location of the robot is 0.39, 0.53, and ~86.7 degrees for x, y, and theta respectively.

  2. Is it close to your robot’s actual physical location in the mat world frame?

Using the Euclidean distance, the difference between the odometry estimate and the robot's actual physical location was 22.14 centimeters.

Video

The video below shows the robot performing some basic pre-planned maneuvers in Duckietown:

The video below shows the robot’s odometry over time while performing the same basic pre-planned maneuvers from the video above:

ROS Bag

Bag file

Repo Link

Exercise 2 repository link

References

This is a list of references that I used to do this exercise.

  1. Deadreckoning: https://github.com/duckietown/dt-core/blob/daffy/packages/deadreckoning/src/deadreckoning_node.py
  2. PID Controller: https://en.wikipedia.org/wiki/PID_controller
  3. PID controller code: https://github.com/jellevos/simple-ros-pid/blob/master/simple_pid/PID.py