Demo Applications

Demo details:

Bringup: This launches all the components and sensors connected to the robot from a single configuration file

Tele-op: This node controls the robot using either the Game Pad Controller or the keyboard of the host PC

SLAM: This demo uses the 2D LiDAR sensor to generate a map and localize the robot within it

Navigation: This demo uses the generated map to navigate the robot autonomously within the map, avoiding obstacles using the laser data

Simulator: This demo simulates the real behavior of the robot in a virtual world using the Gazebo simulator.

Tele-op

This step will guide you through controlling the RIA E100 manually. The RIA E100 can be controlled either with the joystick that comes with the robot or with the keyboard of the host PC.

To control RIA with the joystick, follow these steps:

  • During the configuration of the RIA E100, the joystick should be enabled in the bashrc file
  • Open a terminal on the host PC and connect to the robot via SSH; if prompted for a password, type “gaitechedu”
ssh ria@RIA_IP_ADDRESS
  • In the same terminal, type the following command
roslaunch e100_bringup minimal.launch
  • Controls:
  • Linear Slow Motion: hold the slow mode switch + push the throttle stick forward/backward
  • Angular Slow Motion: hold the slow mode switch + push the throttle stick sideways
  • Linear Fast Motion: hold the fast mode button + push the throttle stick forward/backward
  • Angular Fast Motion: hold the fast mode button + push the throttle stick sideways

Fig 1. Game Pad – Logitech F710
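
If the robot does not respond to the joystick, you can check whether the gamepad is being read; assuming the standard ROS joy_node setup, its raw messages appear on the /joy topic:

rostopic echo /joy

Moving the sticks should print changing axes values; if nothing appears, re-check the bashrc configuration and the wireless USB dongle.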

To control RIA with the keyboard, follow these steps:

  • Open a terminal on the host PC and connect to the robot via SSH; if prompted for a password, type “gaitechedu”
ssh ria@RIA_IP_ADDRESS
  • In the same terminal, type the following command
roslaunch e100_bringup minimal.launch
  • In another terminal, connect to the robot via SSH and run the following command in that terminal
roslaunch e100_teleop keyboard.launch

Now you are able to control the robot using the keyboard; the controls will be displayed in the same terminal.
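
If you only want to verify that the base responds to velocity commands (without the teleop node), you can publish a Twist message directly; this assumes the robot base subscribes to the /cmd_vel topic:

rostopic pub -r 10 /cmd_vel geometry_msgs/Twist '{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'

Stop it with Ctrl+C; the robot should stop shortly afterwards.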

SLAM

  • This step will guide you through creating a map for navigating RIA by exploring the environment.
  • Note that the working environment is INDOOR.
  • In separate terminals, launch the following files from the host PC.

Launch the bringup file

  • Open a terminal on the host PC and connect to the robot via SSH; if prompted for a password, type “gaitechedu”
ssh ria@RIA_IP_ADDRESS
  • In the same terminal, type the following command
roslaunch e100_bringup minimal.launch
  • To create the map, open another terminal and type the following
ssh ria@RIA_IP_ADDRESS
  • In the same terminal, type the following command
roslaunch e100_navigation slam_gmapping.launch

Note: You can use either gmapping or Hector SLAM.
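
To confirm that a map is actually being built, you can check the map topic from another SSH terminal; gmapping publishes the occupancy grid on /map by default:

rostopic hz /map

A message rate should appear after the first map update (the update interval is several seconds by default).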

  • To view the robot and the mapping progress in RViz, open a terminal on the host PC, source the workspace, and launch the view file.
source ~/catkin_ws/devel/setup.bash
roslaunch e100_description view.launch

Drive the robot all around using either the joystick or the keyboard. Once you finish building the map, save it on the robot’s PC under the folder e100_navigation/maps.
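
A common way to save the map is the map_saver tool from the map_server package; for example (the exact destination path inside e100_navigation/maps depends on where the workspace lives on the robot’s PC):

rosrun map_server map_saver -f ~/catkin_ws/src/e100_navigation/maps/my_map

This writes my_map.yaml and my_map.pgm, which the navigation demo can load later.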

Ball follower

In this demo, we will make RIA follow any tennis ball that appears in the camera video. This demo gives an example of how we can integrate OpenCV programming with ROS robots to build applications that make our robot interact with its environment.

Code Installation

This application is included in the gaitech_edu repository on GitHub

Note

If you have installed this repository before, there is no need to go through this step

To install and compile the gaitech_edu package, open a terminal and type

cd ~/catkin_ws/src
git clone https://github.com/aniskoubaa/gaitech_edu.git
cd ../
catkin_make
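
After the build finishes, source the workspace in the same terminal so that ROS can find the newly built package:

source ~/catkin_ws/devel/setup.bash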

Then run the bringup launch file

roslaunch e100_bringup minimal.launch

In another terminal, run the ball follower node

rosrun gaitech_edu ball_follower.py

Now everything is ready; put a tennis ball in front of the robot camera and enjoy.

Code Explanation

Now that we know how to run the code, let’s see how it works

import cv2
import numpy as np
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

from cv_bridge import CvBridge, CvBridgeError
bridge = CvBridge()

This part of the code is responsible for importing the packages that we will need in this tutorial, so you should not touch or change it
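
For context, these imports are typically wired together as a node that subscribes to the camera image and publishes velocity commands. The sketch below shows only the general structure, building on the imports above; the node, topic, and callback names are illustrative and may differ from the actual ball_follower.py.

# Structural sketch only: names below are illustrative, not taken from ball_follower.py
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)   # velocity commands to the base
velocity_message = Twist()

rospy.init_node('ball_follower', anonymous=True)
rospy.Subscriber('/camera/rgb/image_raw', Image, image_callback)  # image_callback is explained below
rospy.spin()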

yellowLower = (31, 90, 7)
yellowUpper = (64, 255, 255)

integrated_angular_speed = 0
integrated_angular_factor = 0.007
linear_speed_factor = 200
angular_speed_factor = -0.005

These parameters are used in our demo, and they affect the performance of the robot’s following behavior. The first two are the bounds of the ball color: the robot will look for any color between these bounds and follow it. These bounds can be changed depending on the ball color, the brightness of the environment, and so on.

The integrated_angular_speed parameter accumulates the angular error over time (an integral term); basically, it is used to make the robot rotate faster when the ball is escaping faster. However, if integrated_angular_factor is increased beyond the needed value, the motion becomes less smooth and the robot may lose the ball.

The last three factors control the relation between the ball’s size and location and the speed of the robot. These variables can be tuned to adapt to the environment; they will change depending on whether the robot runs indoors or outdoors, the size of the ball itself, and so on.
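
If detection fails under your lighting, a quick way to pick new bounds is to sample a BGR pixel from the ball in your own image and convert it to HSV; the sample value below is hypothetical:

import cv2
import numpy as np

sample_bgr = np.uint8([[[30, 200, 180]]])            # hypothetical BGR value sampled from the ball
print(cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV))   # prints [[[H S V]]]
# choose yellowLower / yellowUpper around the printed H value with generous S and V ranges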

Now let’s see the image callback function

global integrated_angular_speed,integrated_angular_factor
frame = bridge.imgmsg_to_cv2(message, "bgr8")

cv2.namedWindow("Frame", cv2.WINDOW_NORMAL)
cv2.imshow("Frame", frame)


if cv2.waitKey(1) & 0xFF == ord('q'):
    cv2.destroyAllWindows()


hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, yellowLower, yellowUpper)
mask = cv2.erode(mask, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)
cv2.imshow("Mask", mask)

_, contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

objects = np.zeros([frame.shape[0], frame.shape[1], 3], 'uint8')

This part of the code is responsible for extracting the ball from the image. In fact, it extracts anything with a color between the two boundaries yellowLower and yellowUpper and puts it in the contours array.

max_c = None
max_c_area = 0
x = 0
y = 0
for c in contours:
    area = cv2.contourArea(c)
    if area > 30:
        if area > max_c_area:
            max_c = c
            max_c_area = area
        perimeter = cv2.arcLength(c, True)
        # now we want to draw the centroid, use image moment

        # get centers on x and y
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        x = int(x)
        y = int(y)
        radius = int(radius)
        cv2.circle(objects, (x, y), radius, (0, 0, 255), 10)

Since we are interested in the ball only (not in anything else that has its color), we take the largest object that matches our color and get its dimensions, which we then use to move the robot.

if max_c_area > 40:
    velocity_message.linear.x = linear_speed_factor / max_c_area
    Az = (x - frame.shape[1] / 2) * angular_speed_factor
    integrated_angular_speed += Az
    if abs(Az) > 0.1:
        velocity_message.angular.z = Az + integrated_angular_factor * integrated_angular_speed
    else:
        velocity_message.angular.z = 0
    pub.publish(velocity_message)
else:
    velocity_message.linear.x = 0
    velocity_message.angular.z = 0
    pub.publish(velocity_message)

cv2.imshow("Countors", objects)

Now that we have the ball’s dimensions (x, y, and area), we make the robot move accordingly: faster if the ball is far and slower if the ball is near. We use the area of the ball itself to approximate the speed that the robot should move with. Since the relation between the ball’s size and the speed is an inverse relationship, we use division (/) instead of multiplication (*).
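
As a quick numeric illustration of this inverse relation (the contour areas are hypothetical), with linear_speed_factor = 200:

linear_speed_factor = 200
print(linear_speed_factor / 2000.0)   # near ball, large contour area -> 0.1
print(linear_speed_factor / 500.0)    # far ball, small contour area  -> 0.4

So the farther the ball (the smaller its contour), the faster the robot drives toward it.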

This alone would only make the robot move forward and stop; we also need it to rotate to follow the ball. In this demo, we take the position of the ball and use it to approximate the angular speed the robot should move with. We compute the difference between the center of the frame and the center of the ball, which tells us whether the ball is to the right or to the left of the robot and how far off-center it is. This value is assigned to the angular speed variable (Az); it can be positive or negative, which determines the direction of rotation. One last point: the ball will never be exactly in the center, it will always be slightly to the left or right, and reacting to that would make the robot oscillate left and right constantly. So we discard any difference smaller than a threshold value, in our case 0.1, and set the angular speed to zero instead.
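
A short numeric example of the angular part (the frame width and ball positions are hypothetical), with angular_speed_factor = -0.005 and a 640-pixel-wide frame:

angular_speed_factor = -0.005
frame_width = 640                                     # hypothetical camera resolution
Az = (480 - frame_width / 2) * angular_speed_factor   # ball well to the right of center
print(Az)                                             # -0.8: above the 0.1 threshold, so the robot turns
Az = (330 - frame_width / 2) * angular_speed_factor   # ball almost centered
print(Az)                                             # -0.05: below the threshold, angular speed stays 0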

Follower App

In this application, we will integrate deep learning with our robot to make it intelligent.

Note

Prerequisites: This demo requires a GPU machine and a fast router to make sure the performance is acceptable

  • GPU Machine Installation

This app uses TensorFlow and a Faster R-CNN network to detect human legs and follow them, so you have to install them on the GPU machine; this tutorial shows how to install them.

Note

It is NOT needed to follow the tutorial to the end; any part related to training is not needed. The .weight files already exist, so we just need to prepare the environment.

After finishing the environment installation, copy this pb file to the /object_detection/inference_graph folder

Then copy this pbtxt file to the training folder

Finally, copy this Python script to the object_detection folder

  • Configuration and Running

1- In the Python script, go to line 130 and change the IP address to the IP address of the robot (make sure that the robot and the GPU machine are on the same network).

2- On the robot, open a terminal and run this command

roslaunch e100_bringup minimal.launch

3- In a new terminal, run this command

rosrun roslink-ba roslink_tensorflow_beidge.py

Then the robot will follow any human legs that appear in front of it within a range of 0.35–1.5 m.

This video shows a small demo.