Demo Applications

Demo details:

Bringup: This launches all components and sensors that are connected to the robot through a single launch file

roslaunch e100_bringup minimal.launch

This launches the following nodes:

  • e100_base: the core driver node. It subscribes to velocity commands on the topic /cmd_vel and publishes odometry to /odom, voltage readings to /e100_voltage, battery percentage to e100_battery, and other motor controller states on separate ROS topics (a quick sanity-check sketch follows after this list).
  • robot_localization: to improve localization accuracy, this node fuses the wheel odometry from /odom with the IMU readings and publishes filtered odometry, taking the linear components from the wheel odometry and the angular components from the IMU. This compensates for skid or slip at the robot's base, which would otherwise introduce localization error.
  • IMU (mavros): this node brings up the IMU controller and publishes IMU values to the /e100_imu topic
  • ros_rplidar: this node brings up the 2D lidar sensor and publishes laser scans to the /scan topic
  • astra_rgbd: this node brings up the RGBD sensor and publishes camera information and images on several topics, categorized into color image, IR image, and depth image
  • Tele-op: this launches the joystick node, which brings up the joystick device and publishes velocity commands on the topic /cmd_vel
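
As a quick sanity check after bringup, you can confirm that the base driver is publishing odometry. The following is only a minimal sketch (it is not part of the robot packages); the /odom topic and the nav_msgs/Odometry type come from the description above.

import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    # print the planar pose reported by the base driver
    position = msg.pose.pose.position
    rospy.loginfo("x=%.2f  y=%.2f", position.x, position.y)

rospy.init_node('odom_check')
rospy.Subscriber('/odom', Odometry, odom_callback)
rospy.spin()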

Tele-op: This demo controls the robot either with the Game Pad controller or with a keyboard from the host PC.

SLAM: This demo uses the 2D lidar sensor to generate a map and localize the robot within it.

Navigation: This demo uses the generated map to navigate the robot autonomously within it, avoiding obstacles using the laser data.

Simulator: This demo simulates the real behavior of the robot in a virtual world using the Gazebo simulator.

Tele-op

This step will guide you through controlling RIA E100 manually. RIA E100 can be controlled either with the joystick that comes with the robot or with a keyboard from the host PC.

To control RIA with the joystick, follow these steps

  • During the configuration of RIA E100 it should be mentioned in the bashrc file as
  • Open a terminal on the host PC and connect to the robot via ssh; if prompted for a password, type “gaitechedu”
ssh ria@RIA_IP_ADDRESS
  • In the same terminal type the following command
roslaunch e100_bringup minimal.launch

You should see something like this:

[Figures: e100_bringup1.png, e100_bringup2.png]
  • Controls:
  • Linear slow motion: hold the slow mode switch + push the throttle forward/backward
  • Angular slow motion: hold the slow mode switch + push the throttle sideways
  • Linear fast motion: hold the fast mode button + push the throttle forward/backward
  • Angular fast motion: hold the fast mode button + push the throttle sideways
[Figure: ria_r100_quick_start_fig01.png]

Fig 1. Game Pad – Logitech f710

To control RIA with the keyboard, follow these steps

  • Open a terminal on the host PC and connect to the robot via ssh; if prompted for a password, type “gaitechedu”
ssh ria@RIA_IP_ADDRESS
  • In the same terminal type the following command
roslaunch e100_bringup minimal.launch
  • In another terminal, connect to the robot via ssh and run the following command
roslaunch e100_teleop keyboard.launch

Now you can control the robot with the keyboard; the key bindings are displayed in the same terminal.
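
The keyboard node ultimately just publishes geometry_msgs/Twist messages on /cmd_vel. If you want to send velocity commands from your own script instead, a minimal sketch looks like the following (the topic name comes from the bringup description; the speeds are arbitrary example values).

import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_drive')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.1   # m/s, slow forward motion
cmd.angular.z = 0.0  # rad/s, no rotation

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()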

SLAM

  • This step will guide you through creating a map for navigating RIA by exploring the environment.
  • Note that the working environment must be INDOOR.
  • Launch the following files, each in a separate terminal on the host PC.

Launch the bringup file

  • Open a terminal on the host PC and connect to the robot via ssh; if prompted for a password, type “gaitechedu”
ssh ria@RIA_IP_ADDRESS
  • In the same terminal type the following command
roslaunch e100_bringup minimal.launch
  • To create the map, open another terminal and type the following
ssh ria@RIA_IP_ADDRESS
  • In the same terminal type the following command
roslaunch e100_navigation slam_gmapping.launch

Note: You can use either gmapping or hector_slam.

  • To view the robot and the mapping in rviz, open a terminal on the host PC, source the workspace, and launch the view file.
source ~/catkin_ws/devel/setup.bash
roslaunch e100_description view.launch

Drive the robot around using either the joystick or the keyboard. Once you have finished building the map, save it on the robot's PC under the folder e100_navigation/maps (the standard map_server map_saver tool can be used for this).

Ball follower

In this demo, we will make RIA follow any tennis ball that appears in the camera video. This demo shows how OpenCV programming can be integrated with ROS robots to build applications that make the robot interact with its environment.

Code Installation

This application is included in the gaitech_edu repository on GitHub.

Note

If you have installed this repository before, there is no need to go through this step.

To install and compile the gaitech_edu package, open a terminal and type

cd ~/catkin_ws/src
git clone https://github.com/gaitech-robotics/gaitech_edu.git
cd ../
catkin_make

Note

All these commands should be executed on the robot itself.

Then run the bringup launch file

roslaunch e100_bringup minimal.launch

In another terminal run the ball follower node

rosrun gaitech_edu ball_follower.py

Now everything is ready. Remove any wired devices connected to the robot, then put a tennis ball in front of the robot camera and enjoy.

Code Explanation

Now that we know how to run the code, let's see how it works.

import cv2
import numpy as np
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

from cv_bridge import CvBridge, CvBridgeError
bridge = CvBridge()

This part of the code imports the packages needed in this tutorial; it should be left unchanged.

yellowLower =(31, 90, 7)
yellowUpper = (64, 255, 255)

integrated_angular_speed=0
integrated_angular_factor = 0.007;
linear_speed_factor = 200
angular_speed_factor = -0.005

These parameters affect the robot's following behavior. The first two are the lower and upper bounds of the ball color in HSV: the robot will look for any color between these bounds and follow it. The bounds may need to be changed depending on the ball color, the brightness of the environment, and so on.
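
If the ball is not being detected, a simple way to re-tune the bounds (a small sketch, not part of the original script; the sample pixel value below is made up) is to take a BGR pixel sampled from the ball in the camera image, convert it to HSV, and compare it with yellowLower and yellowUpper:

import cv2
import numpy as np

# hypothetical BGR value sampled from the ball in the camera image
sample_bgr = np.uint8([[[40, 200, 180]]])
sample_hsv = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV)
print(sample_hsv)  # compare H, S, V with (31, 90, 7) and (64, 255, 255)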

The integrated_angular_speed variable accumulates the angular error over time (an integral term); it makes the robot rotate faster when the ball keeps escaping in the same direction. However, if its contribution is increased beyond what is needed, the motion becomes less smooth and the robot may lose the ball.

The last three factors control the relation between the ball's size and position and the robot's speed. These values may need to be adapted to the environment: they change depending on whether the robot runs indoors or outdoors, on the size of the ball itself, and so on.
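
As a worked example of how linear_speed_factor maps the ball's apparent size to forward speed (the contour areas below are just illustrative numbers):

# inverse mapping: the larger (closer) the ball appears, the slower the robot drives
linear_speed_factor = 200
for area in (1000, 2000, 4000):                      # contour area in pixels
    print(area, linear_speed_factor / float(area))   # -> 0.2, 0.1, 0.05 (m/s)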

Now let's look at the image callback function.

global integrated_angular_speed,integrated_angular_factor
frame = bridge.imgmsg_to_cv2(message, "bgr8")

cv2.namedWindow("Frame", cv2.WINDOW_NORMAL)
cv2.imshow("Frame", frame)


if cv2.waitKey(1) & 0xFF == ord('q'):
    cv2.destroyAllWindows()


hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, yellowLower, yellowUpper)
mask = cv2.erode(mask, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)
cv2.imshow("Mask", mask)

_, contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

objects = np.zeros([frame.shape[0], frame.shape[1], 3], 'uint8')

This part of the code extracts the ball from the image: it keeps everything whose color falls between the two boundaries yellowLower and yellowUpper and puts the resulting shapes in the contours list.

max_c= None
max_c_area= 0
x=0;
y=0;
for c in contours:
    area = cv2.contourArea(c)
    if area > 30:
        if area>max_c_area:
            max_c = c
            max_c_area = area
        perimeter = cv2.arcLength(c, True)
        # now we want to draw the centroid, use image moment

        # get centers on x and y
        ((x, y), radius) = cv2.minEnclosingCircle(c)
        x = int(x)
        y = int (y)
        radius = int(radius)
        cv2.circle(objects, (x,y), radius, (0,0,255), 10)

Since we are interested in the ball only (not anything else that happens to share its color), we take the largest contour that matches the color and use its position and size to drive the robot.

if max_c_area > 40:
    velocity_message.linear.x = linear_speed_factor / max_c_area
    Az = (x - frame.shape[1] / 2) * angular_speed_factor
    integrated_angular_speed += Az
    if abs(Az) > 0.1:
        velocity_message.angular.z = Az + integrated_angular_factor * integrated_angular_speed
    else:
        velocity_message.angular.z = 0
    pub.publish(velocity_message)
else:
    velocity_message.linear.x = 0
    velocity_message.angular.z = 0
    pub.publish(velocity_message)

cv2.imshow("Countors", objects)

Now that we have the ball's position and size (x, y, and area), we make the robot move accordingly: faster when the ball is far away and slower when it is near. We use the contour area of the ball to approximate the speed the robot should move with; since the relation between the ball's apparent size and the desired speed is inverse, we divide by the area instead of multiplying.

Forward motion alone would just make the robot drive and stop; it also needs to rotate to follow the ball. In this demo we take the position of the ball and use it to approximate the angular speed the robot should turn with. We compute the difference between the center of the frame and the center of the ball, which tells us whether the ball is to the left or to the right of the robot and by how much. This value is assigned to the angular speed variable Az; it can be positive or negative, which determines the direction of rotation. One last point: the ball is never exactly in the center, it is always slightly to the left or right, and reacting to every tiny difference would make the robot wobble left and right constantly. We therefore discard any difference smaller than a threshold (0.1 in our case) and set the angular speed to zero instead.
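
For completeness, the snippets above live inside an image callback. A rough sketch of how such a callback is typically wired into a ROS node is shown below; the camera topic name is an assumption and may differ on your robot, so check the actual image topic while the bringup is running.

import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

def image_callback(message):
    # ... the image processing and velocity logic shown above goes here ...
    pass

rospy.init_node('ball_follower')
velocity_message = Twist()
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
# assumed camera topic; verify the real one published by the RGBD node
rospy.Subscriber('/camera/rgb/image_raw', Image, image_callback)
rospy.spin()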

Deep Learning and AI apps

The following apps integrate robotics with deep learning and AI technology. This integration enables many more applications for the robot and may open up endless ideas and projects, both academic and real-life.

On the RIA-E100 we use the camera and lidar to gather data from the environment, and one of the most prominent AI fields is image processing. When a robot can see and recognize things ("this is a car and that is a person", etc.), it gains the ability to perform many applications that were very difficult before this technology.

There are many platforms used for deep learning; one of the most popular is TensorFlow. It is easy to integrate with other systems and straightforward to install, so let's install it first.

Note

Prerequisites: TensorFlow and deep learning applications require a GPU machine and a fast router to make sure the performance is acceptable.

  • TensorFlow Installation

In the deep learning applications we use TensorFlow and a Faster R-CNN network; the easiest way to install them is to follow this tutorial.
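
Once the installation finishes, a quick way to confirm that TensorFlow is importable (a simple check, not part of the original tutorial) is:

import tensorflow as tf
print(tf.__version__)  # should print the installed TensorFlow version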

Follower App

This app makes the robot see and recognize human legs in front of it within a range of 0.65 m to 2.5 m and follow them. If there is more than one person, it follows the nearest one. By the end, you will be able to reproduce the behavior shown in the video at the end of this part.

  • GPU machine Installation

After finishing the environment installation (TensorFlow), do the following steps:

  • Copy this pb file to the /object_detection/inference_graph folder

1- In the Python script, go to line 130 and change the IP address to the IP address of the robot (make sure the robot and the GPU machine are on the same network).

2- On the robot, open a terminal and run this command

roslaunch e100_bringup minimal.launch

3- In a new terminal run this command

rosrun roslink-ba roslink_tensorflow_beidge_follower_app.py

The robot will then follow any human legs that appear in front of it within the 0.65–2.5 m range.

This video shows a small demo

Find and Search App

This app makes the robot see and recognize a pre-defined object in front of it within a range of 0.65 m to 6.5 m and follow it. There are 90 different objects the robot can recognize and search for. If the goal object does not appear in the robot's camera, the robot moves randomly, avoiding obstacles, and searches for it; when it finds the object it stops. If the object moves, the robot follows it.

  • GPU machine Installation

After finishing the environment installation (TensorFlow), do the following steps:

  • Copy this pb file to the /object_detection/inference_graph folder

1- In the Python script, go to line 130 and change the IP address to the IP address of the robot (make sure the robot and the GPU machine are on the same network).

goal_id = 62  # this is the id of chair; you can find the ids of the other 90 objects in the labelmap.pbtxt file

2- In the Python script, go to line 38 and change the goal_id variable to the id of the object you want the robot to follow; there are 90 objects and they are listed in the .pbtxt file above (see the sketch at the end of this section for a quick way to list them).

3- On the robot, open a terminal and run this command

roslaunch e100_bringup minimal.launch

4- In a new terminal run this command

rosrun roslink-ba roslink_tensorflow_beidge_find_and_search.py

The robot will then start searching for the pre-defined object, and when it finds it, it will follow it within the 0.65–6.5 m range.
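
As a convenience for step 2, the following sketch prints the id and name of each entry in labelmap.pbtxt. It assumes the standard TensorFlow object-detection label map format and that the file is in the current directory, so adjust the path as needed.

import re

with open('labelmap.pbtxt') as f:
    text = f.read()

# each entry looks like: item { ... id: 62 ... display_name: "chair" ... }
for item_id, name in re.findall(r'id:\s*(\d+).*?display_name:\s*"([^"]+)"', text, re.DOTALL):
    print(item_id, name)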

Arduino and rosserial

In this tutorial we will learn how to use an Arduino board and sensors and integrate them with RIA-E100 and ROS.

First of all, you need an Arduino board and some Arduino sensors; you can easily find them on the internet (they are not provided with the RIA-E100 box). Once you have them, let's start programming.

Install Arduino IDE

  • First, let's install the IDE on the RIA-E100 robot from this link; choose Linux 64bit for RIA-E100
  • Then unzip the file (preferably in the home directory)

Open a terminal, navigate to the unzipped folder, and type the following

cd path/to/the/folder
./arduino

The Arduino IDE should start.

Install rosserial

  • Close the IDE and type these commands in the terminal to install the needed ROS packages
sudo apt-get update
sudo apt-get install ros-kinetic-rosserial
sudo apt-get install ros-kinetic-rosserial-arduino
roscore
  • In a new terminal type these commands to install the ROS libraries for Arduino
cd path/to/Arduino/folder
cd libraries
rm -rf ros_lib
rosrun rosserial_arduino make_libraries.py .

Now you can run Arduino code as a rosnode.

Running the “beep” application

This application works like a parking sensor; to build it you need an ultrasonic sensor.

Note

If you have installed this repository before, there is no need to go through this step.

  • Let's install and compile the gaitech_edu package and get the ROS code.
Open a terminal and type
cd ~/catkin_ws/src
git clone https://github.com/gaitech-robotics/gaitech_edu.git
cd ../
catkin_make
  • Open the Arduino IDE and copy and paste this code into the program
#include <ros.h>
#include <ros/time.h>
#include <sensor_msgs/Range.h>

ros::NodeHandle  nh;

sensor_msgs::Range range_msg;
ros::Publisher pub_range( "/ultrasound_range", &range_msg);
char frameid[] = "/ultrasound";

// this constant won't change. It's the pin number of the sensor's output:
const int pingPin = 7;
const boolean CENTIMETERS = true;
const boolean INCHES = false;

void setup() {
  // initialize serial communication:
  nh.initNode();
  nh.advertise(pub_range);

  range_msg.radiation_type = sensor_msgs::Range::ULTRASOUND;
  range_msg.header.frame_id =  frameid;
  range_msg.field_of_view = 0.1;  // fake
  range_msg.min_range = 0.02;  // 2 cm (Range values are in meters)
  range_msg.max_range = 1.50;  // 150 cm


  //Serial.begin(9600);
}

long getRange(int pinNumber, boolean in_centimeters){

    // establish variables for duration of the ping, and the distance result
      // in inches and centimeters:
      long duration, distance, inches, cm;

      // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
      // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
      pinMode(pingPin, OUTPUT);
      digitalWrite(pingPin, LOW);
      delayMicroseconds(2);
      digitalWrite(pingPin, HIGH);
      delayMicroseconds(5);
      digitalWrite(pingPin, LOW);

      // The same pin is used to read the signal from the PING))): a HIGH pulse
      // whose duration is the time (in microseconds) from the sending of the ping
      // to the reception of its echo off of an object.
      pinMode(pingPin, INPUT);
      duration = pulseIn(pingPin, HIGH);

      // convert the time into a distance
      inches = microsecondsToInches(duration);
      cm = microsecondsToCentimeters(duration);

      if (in_centimeters)
         distance = cm;
      else
        distance = inches;
      //Serial.print(inches);
      //Serial.print("in, ");
      // debug prints stay disabled: rosserial uses the same serial port
      //Serial.print(cm);
      //Serial.print("cm");
      //Serial.println();

      return distance;


}

void loop() {
  range_msg.range = getRange(pingPin, CENTIMETERS) / 100.0;  // convert cm to meters
  range_msg.header.stamp = nh.now();
  pub_range.publish(&range_msg);
  nh.spinOnce();
  delay(500);
}

long microsecondsToInches(long microseconds) {
  // According to Parallax's datasheet for the PING))), there are 73.746
  // microseconds per inch (i.e. sound travels at 1130 feet per second).
  // This gives the distance travelled by the ping, outbound and return,
  // so we divide by 2 to get the distance of the obstacle.
  // See: http://www.parallax.com/dl/docs/prod/acc/28015-PING-v1.3.pdf
  return microseconds / 74 / 2;
}

long microsecondsToCentimeters(long microseconds) {
  // The speed of sound is 340 m/s or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance of the object we
  // take half of the distance travelled.
  return microseconds / 29 / 2;
}
  • Make sure that the Arduino is connected correctly to the robot and upload the code to the Arduino
  • In a new terminal type this command
rosrun rosserial_python serial_node.py /dev/ttyACM0 # make sure you are using the same port for the Arduino
  • Now, to make the robot able to move, run the bringup file in a new terminal
roslaunch e100_bringup minimal.launch
  • Now everything is ready. Make sure your robot is in free space, then in a new terminal type this command to run the beep node (a sketch of such a node is shown after this step).
rosrun beep beep.py
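
The beep node itself ships with the downloaded code; the following is only a minimal sketch of the kind of logic such a parking-sensor node could implement. The /ultrasound_range topic and the sensor_msgs/Range type come from the Arduino sketch above, the /cmd_vel topic comes from the bringup section, and the distance threshold is an illustrative value.

import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Range

STOP_DISTANCE = 0.3  # meters; illustrative threshold

def range_callback(msg):
    cmd = Twist()
    if msg.range > STOP_DISTANCE:
        cmd.linear.x = 0.1   # drive slowly forward while the way is clear
    else:
        cmd.linear.x = 0.0   # stop (and "beep") when an obstacle is close
        rospy.logwarn("Obstacle at %.2f m - stopping", msg.range)
    pub.publish(cmd)

rospy.init_node('beep')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rospy.Subscriber('/ultrasound_range', Range, range_callback)
rospy.spin()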