Table of Contents
Chapter 1. Introduction
1.1 Mobile Robot Systems
1.2 Autonomous Mobile Robot Control Systems
1.2.1 Deliberative Navigation Control
1.2.2 Reactive Navigation Control
Chapter 2. Behavior Based Control
2.1 Fuzzy Logic
2.2 Behaviors
2.2.1 Wall-Following
2.2.2 Corridor Following
2.3 Defuzzification
2.4 Simulations
Chapter 3. Design and Development of Mechanical Structure
3.1 Arena
3.2 Mechanical Structure
3.2.1 A Brief Overview of the Design and Fabrication Process
3.2.2 Overview of the Complete Mechanical Structure
Chapter 4. Design and Development of Related Electronics
4.1 Master Card
4.2 Slave Cards
4.3 Motor Drivers
4.4 Sensors
4.4.1 Short range and long range IR sensors
4.4.2 Color sensors
Chapter 5. Design and Development of the Control System
5.1 The DC Motor Control
5.1.1 PWM Generation
5.1.2 Position Control
5.1.3 Speed Control
Chapter 6. Future Work
Appendix
References
Table of Figures
Figure 1.1: Minerva - the second generation museum tour-guide robot.[5]
Figure 1.2: NavChair Prototype - Smart Wheelchair for Disabled People[9]
Figure 1.3: 110 FirstLook - iRobot Corporation[12]
Figure 2.1: Structure of a typical fuzzy logic model
Figure 2.2: Membership functions for ds
Figure 2.3: Membership functions for dis
Figure 2.4: Membership functions for left-wheel velocity
Figure 2.5: Membership functions for right-wheel velocity
Figure 2.6: Membership functions for diff
Figure 2.7: Ultrasonic range finders in MobotSim
Figure 2.8: Sensor arrangement for wall-following and corridor-following
Figure 2.9: (a) Left-Wall Following Simulation (b) Right Wall Following Simulation
Figure 2.10: Corridor Following Simulation
Figure 3.1: Developed environment for navigation
Figure 3.2: Overview of the process of casting
Figure 3.3: The complete mechanical structure of the robot
Figure 3.4: The parametric diagram of the chassis of the robot
Figure 3.5: The parametric diagram of the ball potting mechanism of the robot
Figure 3.6: The dissected diagram of the chassis of the robot
Figure 3.7: The dissected diagram of the ball potting mechanism of the robot
Figure 4.1: Overview of the related electronics
Figure 4.2: PCB layout for the master card
Figure 4.3: Fabricated master card
Figure 4.4: PCB Layout for the slave card
Figure 4.5: Fabricated Slave Card
Figure 4.6: Motor Driver Card
Figure 4.7: Long range IR sensors GP2Y0A02YK0F
Figure 4.8: Short range IR sensors GP2D120XJ00F
Figure 4.9: Color sensor evaluation board for ADJD-S311-CR999
Figure 5.1: Motor Position Control System where analog signal is the desired position
Figure A.1: CAD Drawing for Base Plate (Aluminum)
Figure A.2: CAD Drawing for Gear Holder (Aluminum)
Figure A.3: CAD Drawing for Top Plate (Perspex)
Figure A.4: CAD Drawing for 1:1 Spur Gears (Mild Steel)
Figure A.5: CAD Drawing for robot wheel (Aluminum)
Figure A.6: CAD Drawing for complete assembly
Abstract
Mobile robots have emerged in recent years as solutions to many real-life problems that arise in unknown environments. In such cases, the mobile robot must be able to navigate through the unstructured workspace. The purpose of this project is to study and implement stimulus-response based control for real-time navigation of a mobile robot. A fuzzy logic control algorithm is adopted as the proposed behavioral control scheme, and successful simulations in MobotSim® are presented before the design of the robot. The design is a compact, differentially driven circular mobile robot that fits within a 25 × 25 × 30 cm box. Moreover, a detachable ball-potting mechanism has been designed, fabricated, and mounted on the robot. The robot was designed in Autodesk Inventor®, and fabrication procedures including casting and machining were used to produce the different parts. The embedded control system is designed and implemented using a PIC18F4520 as the master and multiple PIC18F2431-based motor control cards as slaves. IR range sensors are used to take measurements from the environment. Simulations for the PIC microcontrollers were performed in Proteus® and then implemented on hardware. Finally, successful experiments were conducted to verify basic navigation. The platform can further be used to study and implement other control techniques in future research.
Chapter 1. Introduction
Mobile robots are a major focus of ongoing research in robotics, and almost every major university of Pakistan has at least one or two labs dedicated to mobile robot research. The rapidly developing field of mobile robotics assists by taking over jobs performed by human beings in unfavorable conditions, such as those found in nuclear installations, minesweeping, and bomb disposal. The importance of the strides taken in the past few years towards the development of mobile robots can be seen in the help these robots provide when human beings face natural disasters such as earthquakes and floods. Mobile robots are also found in industrial, military, and security environments; they are used in space and underwater exploration and have proved to be of great assistance in gathering information that supports further research in various other fields. Recent work in obstacle avoidance and navigation in 3D workspaces opens up a completely new world of robot applications. Such robots can be used, although with some limitations, in unstructured and unsafe areas such as a disaster site under a collapsed house, where the robot searches for victims[1].
By concentrating their efforts on the development of specialized machines equipped with all the necessary sensing equipment for specific tasks, designers and researchers reduce the chance of incorrect decisions based on sensory information. Some of the key issues and challenges researchers in mobile robotics must deal with are reliability, real-time control, robustness, and uncertainty in sensor readings.
A lot of research is being conducted on the design and development of tools and machines that can assist human beings in different situations. The involvement of mobile robots in human life is not limited to industrial applications but can also be seen in medical services, surveillance, bomb disposal, space exploration, nuclear installations, the entertainment industry, education, the military, etc. Some of the reasons robots have become so deeply involved in human life are their effectiveness, efficiency, precision, repeatability, reliability, and capability to withstand extreme conditions.
The design requirements of a mobile robot depend largely on the conditions and environment it is going to face. For instance, a robot designed for space exploration, such as Mars exploration, should be designed considering the specific terrain, atmosphere, temperature, and other physical conditions present on the planet. Such a robot would include parts and assemblies that can perform visual recognition, pick up small objects, and avoid small obstacles. If a robot is to be used on smooth or flat terrain, the appropriate choice of locomotion would be wheels or tracks. However, for rough terrains, legged robots work better. In some cases, wheels may be combined with legs in order to provide static and dynamic stability for the robot.[2]
With further advancement in the field, robots will attain a certain level of autonomy in the foreseeable future. Complicated and time-consuming tasks in industry, such as maintenance, diagnostics, and security, will become effectively automated. Robots will be able to help rehabilitate patients, provide healthcare services at home, and provide education; all, of course, in interaction with their human masters.[3]
Mobile robotics is not a stand-alone field but an incorporation of mechanical engineering, electronics, programming and control, sensor fusion, etc. The report is therefore divided accordingly: first a discussion and analysis of recent and successful robot systems, then the design and development of the mechanical structure, the related electronics and card development, and finally future work.
1.1 Mobile Robot Systems
Design, development, and deployment of robot systems in human environments have been successfully achieved over the past few decades. A remarkable step occurred when interactive mobile robot systems were released into popular human environments. A few such success stories are discussed here.
The first case discussed is that of Joe Historybot, a mobile robot deployed in the Heinz History Centre.[4] The robot welcomed guests and provided both information and a tour of the permanent exhibits. Joe Historybot was also programmed to deliver historical information and knowledge using multimedia. The robot started ‘working’ on July 8, 1999 and covered a distance of more than 162 km during its service.
Figure 1.1: Minerva - the second generation museum tour-guide robot.[5]
Figure 1.1 shows Minerva, the second-generation museum tour-guide robot, which was exhibited at the Smithsonian museum. In its short life of two weeks, Minerva interacted with thousands of people.[5]
In recent years, more and more work has been done on ‘sociable’ robot systems. These mobile robots must be able to detect and understand ‘rude’ human behavior. For tour-guide robots, this means getting used to humans not following the exact trajectory followed by the robot, or to unpredictable variations in the distance between the human and the robot. Depending upon their will, visitors can choose either to accompany or to follow the robot during the visit. To give more freedom to the human, the robot should also not expect that the human will always support the guiding. The robot must be able to adapt to the changing human environment and deviate from its general line of action to accommodate humans.[6]
More in the service department are smart wheelchairs[7] [8] which behave like autonomous robots. Some of these wheelchairs work by the user entering a proposed destination, after which the wheelchair robot follows its own planned path, avoiding obstacles along the way, to reach that destination. NavChair (Figure 1.2) is one such smart wheelchair robot that has the ability to maneuver around obstacles[9].
Figure 1.2: NavChair Prototype - Smart Wheelchair for Disabled People[9]
NavChair, with its feature of independent mobility, has proved to be a very effective solution for the wide range of users who previously relied on powered wheelchair systems, which cannot be operated independently. The design has been made flexible enough to allow for different operating modes, ranging from simple obstacle avoidance to fully autonomous navigation. Additionally, it provides the user with the freedom of developing, testing, and using “shared control” methods, where human intervention is allowed to some extent; such a mode of operation is known as semi-autonomous navigation.
Recently robots resembling human behavior have been developed to assist in the rehabilitation period for many disabled people.[10] [11]
Among more complicated robot systems, the iRobot Corporation[12] introduced the 110 FirstLook® robot for military and defense purposes (Figure 1.3). The robot has four built-in cameras with adjustable exposure and gain, and 8X zoom. The robot weighs around 2.45 kg, with a height of 10.2 cm, length of 25.4 cm, and width of 22.9 cm. FirstLook® uses radio communication to give the user visual feedback of its environment. It is used by soldiers to gain situational awareness beforehand, especially in building-clearing missions, raids, and other close-in scenarios. The robot is capable of surviving a 16-foot drop onto concrete and is waterproof up to 3 feet.
Figure 1.3: 110 FirstLook - iRobot Corporation[12]
Other developmental projects include the Person Following Shopping Cart Robot[13], Power Assist Robot for Lifting Objects[14], robots for rescue operation[15], assist robots for aged people[16], inspection robots[17], and robots used for educating[18].
In any case, all these robots are autonomous in nature and therefore raise an intriguing question: what is a suitable method for controlling such a robot?
Over recent years, there has been growing interest in multiple mobile robots or multi-robot systems. These systems are most applicable in exploring unknown environments, where each robot explores a section of the workspace and returns either a set of maps or a single map, which may then be assembled into a map of the environment.[19]
1.2 Autonomous Mobile Robot Control Systems
For mobile robots, the most fundamental and pressing issue is the navigation of the robot. The robot must first be able to achieve successful locomotion and navigation through its control algorithms and predefined or preinstalled programs. This successful navigation depends upon four vital blocks[20]:
1. Perception: the robot must extract meaningful data from the physical measurements made by its sensors
2. Localization: the robot must be able to localize itself in the environment
3. Cognition: the robot must decide the set of actions to be performed in order to reach its destination
4. Motion Control: the robot must control its actuators by modulating its control cards’ voltage outputs in order to achieve the desired trajectory
To implement these vital blocks successfully, we can consider two popular control approaches for autonomous robots.
1.2.1 Deliberative Navigation Control
In deliberative navigation control, it is assumed that the environment of the robot is fully or partially known. This means that the global map of the environment has already been fed into the robot’s control algorithm and the robot shall pre-plan its desired path based on the objective or final destination provided. In some cases, the robot may develop a model of the environment based on ‘built-in’ data and the data acquisition from the sensors.
Another major assumption in deliberative navigation control is that the position of the mobile robot is known relative to a real-world coordinate system. This is not a relative position with respect to the nearest object in the robot’s vicinity but the position of the robot in the global environment. In practice, this is never accurately available.
Deliberative navigation control may work in small environments but fails in larger ones. This approach to mobile robot control is highly unfavorable in unknown or rapidly changing environments (such as a maze or public places). In such environments, the robot is better off making decisions using reactive control.
1.2.2 Reactive Navigation Control
In this approach, current data from the sensors is used as input, and the mobile robot makes decisions based on this data. Only the local environment and objects are mapped, and the robot makes decisions based on its position relative to nearby objects and obstacles. The advantage of this approach is that the robot does not need to be loaded with any prior knowledge of the environment. This is of utmost importance when the robot is used for mapping, or when the robot is the first to enter the environment[1], in which case no map exists at all.
It is very effective and fast for unknown environments. Since the decision depends upon the current data only, there is less computation and quick decisions can be made.
Reactive control can be thought of as a reaction to each event in the environment of the mobile robot, such as the arrival of an obstacle. For example, in the wall-following mechanism, the robot may come too close to the wall. In reaction to this event, the reactive controller generates a decision and an appropriate action that overcomes the event, i.e., in this case, increasing the distance from the wall.
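As a minimal sketch of this stimulus-response idea, the event-to-action mapping can be written as below. The thresholds and the sensor-reading argument are illustrative assumptions, not the actual values or rules used later in this report.

```python
# Minimal reactive wall-following sketch. The thresholds below are
# illustrative assumptions, not values used by the robot in this report.
SAFE_MIN_CM = 10.0   # closer than this: too close to the wall (assumed)
SAFE_MAX_CM = 20.0   # farther than this: drifting away from the wall (assumed)

def react(distance_cm):
    """Map the current side-sensor reading directly to a steering action."""
    if distance_cm < SAFE_MIN_CM:
        return "steer_away_from_wall"
    if distance_cm > SAFE_MAX_CM:
        return "steer_toward_wall"
    return "go_straight"
```

Note that the decision depends only on the current reading; no map or history is consulted, which is exactly what makes reactive control fast.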
Reactive control has the advantage of timely response, which is essential in some domains. However, reactive mechanisms are highly specialized, and more general world knowledge and reasoning ability are required to deal with a variety of situations.[21]
Chapter 2. Behavior Based Control
For a mobile robot, the workspace is usually complex, challenging, and dynamic. For robots situated in static and unchanging environments, the motion and environment are highly predictable and stable. This stability and predictability of the workspace directly affects the complexity of the mobile robot and its control.
Behavior-based control employs a set of distributed, interacting modules called behaviors that collectively achieve the desired overall behavior. Each behavior receives inputs from sensors and/or other behaviors in the system, and provides outputs to the robot’s actuators or to other behaviors. With reactive navigation control, rule-based methods can be used that involve minimal computation and no world models. This minimal computation also means that reactively controlled robots are able to respond to rapidly changing environments in a timely manner.
2.1 Fuzzy Logic
Fuzzy logic control has become an increasingly popular approach for recent mobile robots in unstructured and dynamic environments. Fuzzy systems capture human reasoning in the form of sets of “if-then” rules, where each rule represents an action to be taken when a certain condition is met. It is therefore very important for the programmer or developer to understand the possible situations and the task.
Figure 2.1: Structure of a typical fuzzy logic model
Fuzzy logic control is based on a human description of the system and human decisions in response to certain events or conditions in the environment. Fuzzy logic control realizes a nonlinear mapping between inputs and outputs.
2.2 Behaviors
Multiple behaviors are implemented, first in separate simulations and then in a single combined simulation. Each behavior is described in the following sections.
2.2.1 Wall-Following
The mobile robot is designed primarily for navigation in a maze. In most situations, it becomes vital for the robot to follow a particular wall in the maze until a further navigation decision can be made. An event that may change the wall-following behavior can be the appearance of an obstacle. In any case, wall-following is an essential feature of any mobile robot in a maze.
For both left-wall-following and right-wall-following, the basic algorithm is the same. Readings are taken from the range finders on the sides of the mobile robot platform. Two variables, dis and ds, are defined: dis is the current reading, i.e. the current distance from the wall, and ds is the difference between the current and previous distances from the wall. Hence dis gives a measure of the robot’s position relative to the wall, while ds gives the direction of motion and an idea of the robot’s speed in that direction, which may be either away from the wall or towards it.
A negative value of ds represents motion towards the wall and a positive value represents away from the wall. The membership functions for both ds and dis are defined as shown in Figure 2.2 and Figure 2.3 respectively.
Figure 2.2: Membership functions for ds
Figure 2.3: Membership functions for dis
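Membership functions of the kind shown in Figure 2.2 and Figure 2.3 are typically triangular and can be sketched as below. The numeric breakpoints are placeholders for illustration only; the actual breakpoints are those of the figures.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 when x = b."""
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 0.0

# Illustrative fuzzification of dis (distance from the wall). The
# breakpoints below are assumed, not those of Figure 2.3.
def fuzzify_dis(dis):
    return {
        "near":   tri(dis, -10.0, 0.0, 20.0),
        "medium": tri(dis, 10.0, 25.0, 40.0),
        "far":    tri(dis, 30.0, 50.0, 70.0),
    }
```

Because adjacent triangles overlap, a reading such as dis = 15 belongs partly to near and partly to medium, which is exactly what lets several rules fire at once.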
The left range finder values are read for the left-wall-following control and the right range finder value for the right-wall-following. The left wheel velocity is stored in variable lw_vel and the right wheel velocity is stored in rw_vel. The output membership functions i.e. velocity membership functions for left-wheel and right-wheel are shown in Figure 2.4 and Figure 2.5.
Figure 2.4: Membership functions for left-wheel velocity
Figure 2.5: Membership functions for right-wheel velocity
Left-Wall-Follow Rules: The left-wall-following algorithm is implemented using rules derived from basic human judgement. Three such rules are presented as an example and the rest can be derived similarly.
- If (dis = near And ds = neg_zero) Then (lw_vel = fast And rw_vel = slow)
- If (dis = medium And ds = pos_large) Then (lw_vel = slow And rw_vel = v_fast)
- If (dis = far And ds = pos_small) Then (lw_vel = slow And rw_vel = v_fast)
The rule-base consists of 15 basic rules that would be implemented in the fuzzy logic controller.
Right-Wall-Follow Rules: Similar to the left-wall-follow rules, a rule-base of 15 rules is defined, derived from basic human judgment. Again, three rules are presented as examples and the rest can be derived similarly.
- If (dis = near And ds = pos_small) Then (lw_vel = medium And rw_vel = fast)
- If (dis = medium And ds = neg_large) Then (lw_vel = slow And rw_vel = v_fast)
- If (dis = far And ds = zero) Then (lw_vel = fast And rw_vel = slow)
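Rules of this form can be encoded and evaluated mechanically. Taking the minimum of the antecedent degrees is one common way to realize the fuzzy And; the report does not state its conjunction operator, so min is an assumption here, as are the example membership degrees.

```python
# Three left-wall-follow rules from the text, encoded as
# (dis label, ds label) -> (lw_vel label, rw_vel label).
LEFT_WALL_RULES = {
    ("near",   "neg_zero"):  ("fast", "slow"),
    ("medium", "pos_large"): ("slow", "v_fast"),
    ("far",    "pos_small"): ("slow", "v_fast"),
}

def fire_rules(mu_dis, mu_ds, rules):
    """Return each consequent with its firing strength, using min for And."""
    fired = {}
    for (dis_label, ds_label), consequent in rules.items():
        strength = min(mu_dis.get(dis_label, 0.0), mu_ds.get(ds_label, 0.0))
        if strength > 0.0:
            # If two rules share a consequent, keep the strongest firing.
            fired[consequent] = max(fired.get(consequent, 0.0), strength)
    return fired
```

With assumed input degrees dis → {near: 0.7, medium: 0.3} and ds → {neg_zero: 0.4}, only the first rule fires, with strength min(0.7, 0.4) = 0.4.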
2.2.2 Corridor Following
In situations where a wall exists on both sides of the robot, corridor following is the most appropriate choice. In wall-following, control is implemented by assuming a ‘safe’ distance from the wall and keeping the robot at that distance while progressing forward. However, when there is a wall on both sides, it is much safer to keep an equal distance from both walls and avoid collision with either side.
The corridor-following algorithm depends on the difference between the distance from the left wall and the distance from the right wall. This difference is stored in the variable diff. For the robot to move in an ideal corridor-following manner, diff should be zero. The membership functions for the variable diff are illustrated in Figure 2.6.
Figure 2.6: Membership functions for diff
Corridor-Follow Rules: Based on the membership functions and using the same variables lw_vel and rw_vel to store the output velocities of left-wheel and right-wheel we can come up with the following basic rules.
- If (diff = neg_large) Then (lw_vel = v_fast And rw_vel = slow)
- If (diff = neg_small) Then (lw_vel = fast And rw_vel = slow)
- If (diff = zero) Then (lw_vel = medium And rw_vel = medium)
- If (diff = pos_small) Then (lw_vel = slow And rw_vel = fast)
- If (diff = pos_large) Then (lw_vel = slow And rw_vel = v_fast)
2.3 Defuzzification
Once the rules have been evaluated, there is the need for a single action or ‘crisp’ output. This involves the ‘defuzzification’ of the solution set to produce the crisp output. Among the various methods of defuzzification, the Centre-of-Gravity (COG) method is the most popular. The defuzzification process in the COG method is described by the equation below.
y* = ( Σi µ(yi) · yi ) / ( Σi µ(yi) )

where the yi are sampled points of the output universe and µ(yi) is the aggregated membership degree at yi.
This allows multiple active membership functions to contribute to the output. For example, in the case of corridor-following, the neg_small and zero membership functions may simultaneously be active to some extent. The extent to which each membership function affects the output velocities is governed by its weight. These weights determine the ‘active’ area of each membership function (the function µ(yi) in the equation), and this weighting combines all active membership functions in the defuzzification process.
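A discrete Centre-of-Gravity computation matching this description can be sketched as follows, where each sampled output value yi is weighted by its aggregated membership degree µ(yi):

```python
def cog(samples):
    """Centre-of-Gravity defuzzification.

    samples: list of (y_i, mu_i) pairs sampled from the output universe.
    Returns the crisp output, or 0.0 if no membership function is active.
    """
    numerator = sum(y * mu for y, mu in samples)
    denominator = sum(mu for _, mu in samples)
    return numerator / denominator if denominator > 0.0 else 0.0
```

For instance, if the slow set is active with degree 0.2 around a velocity of 10 and the medium set with degree 0.8 around 20 (assumed values), cog([(10, 0.2), (20, 0.8)]) returns 18.0, pulled toward the more strongly active set.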
2.4 Simulations
The fuzzy logic control for a differentially driven mobile robot is simulated in MobotSim, a configurable 2D simulator of mobile robots featuring a graphical interface, in which robots and objects are easily configured, and a built-in BASIC editor for simulation development. The software provides ultrasonic range finders for interacting with the environment, which itself can be configured with multiple obstacles of any shape.
The mobile robot (termed as “mobot” in MobotSim) with typical range finders in a clear environment is shown below in Figure 2.7. The radiation cone, range, and reliability of the sensors can be customized.
Figure 2.7: Ultrasonic range finders in MobotSim
For our purposes of wall-following and corridor following, the sensors are arranged at the sides of the robot as shown in Figure 2.8.
Figure 2.8: Sensor arrangement for wall-following and corridor-following
For right-wall following, the average of S0 and S1 is taken as distance from the right wall. The average of S2 and S3 is used as distance for left-wall following. For corridor following, sensors S1 and S2 are utilized and it is found that the ‘anticipation’ of the sensors yields better results.
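The sensor combinations just described can be written directly; sensor names follow Figure 2.8, and the sign convention chosen for diff is an assumption:

```python
def controller_inputs(s0, s1, s2, s3):
    """Combine the four side-mounted range readings into controller inputs."""
    right_dist = (s0 + s1) / 2.0   # input for right-wall following
    left_dist = (s2 + s3) / 2.0    # input for left-wall following
    diff = s1 - s2                 # input for corridor following (sign assumed)
    return right_dist, left_dist, diff
```

Averaging two sensors per side smooths out single-reading noise, while using the forward-angled pair (S1, S2) for diff provides the ‘anticipation’ mentioned above.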
Simulations are carried out in a variety of environments to test the proposed control. For the wall-follower, gradual turns and straight walls are the basic tests. However, for the corridor-follower, a harder workspace is designed. The corridor-follower first encounters a long straight corridor, then a corridor with inclined walls, a corridor with a sudden change in width, a corridor with a right-angled trajectory, and finally a circular corridor. All these environments are selected from[22].
Figure 2.9 shows the Left-Wall-Follower and the Right-Wall-Follower as simulated in MobotSim.
Figure 2.10 shows the different test situations described in[22] incorporated into a single environment used for testing the corridor-following control.
Figure 2.9: (a) Left-Wall Following Simulation (b) Right Wall Following Simulation
Figure 2.10: Corridor Following Simulation
Chapter 3. Design and Development of Mechanical Structure
The mobile robot platform used in this project is a three wheeled differentially driven robot capable of navigating through a partially unstructured environment and performing the task of ball potting based on the perception acquired through the sensors on board.
To develop the mechanical structure of a mobile robot of any kind, the designer needs certain specifications. For example, if the robot is being designed to navigate through a maze-like uncertain environment, the designer needs to know: What is the passage width of the maze? How much area is the robot allowed for turning through 360°? Similarly, what is the maximum allowed weight of the robot? Answers to these questions not only solve a very big part of the problem but also give way to more such questions.
3.1 Arena
To have exact specifications for the first design of the mechanical structure of the robot, a maze-like unstructured environment was defined. The arena was designed as the maze shown in Figure 3.1. To make the arena dynamic, gates X1, X2, Y1, Y2, Z1, and Z2 are used. This creates an unstructured and unpredictable environment for the mobile robot. Moreover, the mobile robot shall not be preloaded with a map or symbolic representation of the environment of any sort.
Figure 3.1: Developed environment for navigation
Some of the specifications accompanying the floor plan as shown in Figure 3.1 were:
1. The maze is made of wooden walls of height 15cm. All walls will be painted white.
2. Passage width is 50 cm throughout the maze.
3. In total, there are four different colored boxes placed at predefined locations in the maze. The boxes are yellow, black, green, and golden. The locations of the yellow and black boxes are interchangeable, while the locations of the green and golden boxes are fixed. All boxes are made of wood, with dimensions of 15×15×15 cm.
4. The maze has seven gates. At any given time, one of the entrance gates, either Z1 or Z2, will be open. Similarly, one gate from X1/X2 and one from Y1/Y2 will be open. The final gate to the golden box will be opened once the robot has satisfactorily potted all the balls in their respective boxes.
5. Height of the gate will be the same as height of the walls i.e. 15 cm.
6. The maze ends up at the ‘Green Zone’ which is the only pathway to pot the golden ball. The robot shall only be allowed to access the green zone once it has successfully potted all balls in the correct sequence in their respective colored boxes.
Other specifications regarding robot size and weight were that the robot must fit within a 30×30×30 cm box at all times of operation and that its weight shall not exceed 9 kg.
3.2 Mechanical Structure
3.2.1 A Brief Overview of the Design and Fabrication Process
The mechanical structure of the robot was designed in Autodesk Inventor 2012. Once the Inventor model has been created, validating the form, fit, and function of the design becomes straightforward; the model is a 3D prototype that unites direct modelling and parametric workflows. Drawings for each of the mechanical parts used in the complete assembly of the robot can be found in the Appendix of this report.
The first step in fabrication is to create a wooden or Perspex pattern for each of the designed mechanical parts. These patterns are then used to create sand molds (Figure 3.2). A sand mold is formed by filling sand into each half of a hollow container used as a mold box. The sand is packed around the pattern, an exact copy of the external shape of the cast. When the pattern is removed, the cavity that will form the cast remains. Once the mold has been made, it must be prepared for pouring: the surface of the mold cavity is dried using drying agents such as dry sand to facilitate the removal of the casting. The molten metal is maintained at a set temperature in a furnace. Pouring is a delicate procedure and requires skilful handling. The poured molten metal needs time to cool down and solidify; once the entire cavity is filled and the metal has hardened, the final shape of the cast is formed. Care should be taken not to disturb the mold until the cooling time has passed. The required cooling time can be estimated from the wall thickness of the cast and the temperature of the metal. After the predetermined solidification time has passed, the sand mold can be broken and the cast removed.
The castings obtained by sand casting are then machined to the desired shape and finish. Machining involves machines such as the milling machine, lathe, and drill press, after which each mechanical part is ready for assembly.
Figure 3.2: Overview of the process of casting
3.2.2 Overview of the Complete Mechanical Structure
The mechanical structure is designed keeping in mind all the constraints and specifications imposed by the arena design in the previous section. Figure 3.3 shows the mechanical structure of the robot.
The drive mechanism implemented, as shown in the figure, is a differential drive. Movement is based on two separately driven wheels placed on either side of the robot body; the robot changes direction by varying the relative speeds of its wheels and hence requires no additional steering mechanism. If both wheels are driven in the same direction at the same speed, the robot follows a straight line. If both wheels are turned at the same speed but in opposite directions, the robot rotates in place about the center of its wheel axis.
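The two cases above (equal speeds give a straight line, opposite speeds give a turn in place) follow directly from differential-drive kinematics. A minimal C sketch, using this robot's wheel diameter (15.4 cm) and wheel base (12.1 cm):

```c
/* Differential-drive forward kinematics (a minimal sketch; the wheel
 * radius and wheel base below are this robot's values from
 * Section 3.2.2: 15.4 cm wheel diameter, 12.1 cm wheel base). */
#define WHEEL_RADIUS_CM 7.7
#define WHEEL_BASE_CM  12.1

/* Robot linear velocity (cm/s) from left/right wheel angular speeds (rad/s). */
double linear_velocity(double wl, double wr)
{
    return WHEEL_RADIUS_CM * (wr + wl) / 2.0;
}

/* Robot angular velocity (rad/s); positive is counterclockwise. */
double angular_velocity(double wl, double wr)
{
    return WHEEL_RADIUS_CM * (wr - wl) / WHEEL_BASE_CM;
}
```

Equal wheel speeds make the angular velocity vanish (straight line); equal and opposite speeds make the linear velocity vanish (rotation in place).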
Figure 3.3: The complete mechanical structure of the robot.
The chassis of the robot is made up of a circular base plate stacked with another plate of almost the same size; the lower plate carries the two actuated wheels. The two wheels, driven by two motors, are positioned with respect to the base plate so that when the robot turns through 360°, its center of turning coincides exactly with its geometric center. An omnidirectional ball caster serves as the third wheel and the only free wheel of the three-wheeled chassis. The chassis is attached to a ball-potting mechanism as shown in Figure 3.3. The ball-potting mechanism is designed to carry four golf-ball-sized balls and to release them one by one, via an attached motor, based on the perception acquired from the sensors.
The overall mechanical characteristics of the structure are summarized in Figure 3.3. The robot's mechanical structure fits in a box of 25 × 25 × 30 cm (L × W × H). The actuated wheels placed at the center of the chassis have a diameter of 15.4 cm, while the caster wheel has a diameter of 3.5 cm. The distance from the center of the right wheel to the center of the left wheel (the wheel base) is 12.1 cm, and the distance between the rear wheels and the caster wheel is 7.8 cm. Drawings generated in Autodesk Inventor for every part and assembly of the robot can be found in the Appendix.
The detailed parametric diagrams of the mechanical structure are shown in Figure 3.4 and Figure 3.5, and the exploded diagrams in Figure 3.6 and Figure 3.7. The two geared motors that actuate the rear wheels are coupled to the wheel shafts through a pair of spur gears with a gear reduction ratio of 1:1, a center distance of 50 mm and a module of 1.5 mm.
Figure 3.4: The parametric diagram of the chassis of the robot
Figure 3.5: The parametric diagram of the ball potting mechanism of the robot
The choice of motor and spur gear ratio is driven mainly by the speed and torque requirements. Pololu geared motors are used to actuate the wheels. These 2.22" × 1.45" × 1.45" units are powerful, high-quality motors with a 50:1 metal gearhead intended for operation at 12 V.
These units have a 0.61"-long, 6 mm-diameter D-shaped output shaft. Each drive motor carries a quadrature encoder with a resolution of 64 counts per revolution of the motor shaft, which corresponds to 3200 (64 × gear ratio) counts per revolution of the gearbox's output shaft.
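Combining the encoder resolution with the wheel diameter gives a simple odometry conversion: 64 counts per motor-shaft revolution through the 50:1 gearhead is 64 × 50 = 3200 counts per output-shaft revolution, and each output revolution moves the 15.4 cm wheel by one circumference. A sketch:

```c
/* Odometry from encoder counts (a sketch using this robot's numbers:
 * 64 counts/rev at the motor shaft, 50:1 gearbox, 15.4 cm wheel). */
#define PI 3.14159265358979
#define COUNTS_PER_OUTPUT_REV (64.0 * 50.0)   /* 3200 counts */
#define WHEEL_DIAMETER_CM      15.4

/* Distance rolled by the wheel, in cm, for a given encoder count. */
double distance_travelled_cm(long counts)
{
    double revs = counts / COUNTS_PER_OUTPUT_REV;
    return revs * PI * WHEEL_DIAMETER_CM;
}
```

One full output-shaft revolution (3200 counts) therefore corresponds to roughly 48.4 cm of travel.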
Figure 3.6: The dissected diagram of the chassis of the robot
Figure 3.7: The dissected diagram of the ball potting mechanism of the robot
The ball-potting mechanism consists of a cylindrical structure bolted to additional parts that ensure each released colored ball falls into the box of the same color rather than elsewhere. The cylinder that carries the four golf balls has a diameter of 12.1 cm; it is inclined at an angle of 71.21° to the vertical (z-axis) and held at a height of 21.4 cm by a support. A Maxon geared motor actuates the ball-potting mechanism.
As the exploded views of the mechanical structure in Figure 3.6 and Figure 3.7 show, the actuators, couplings and transmission mechanisms are arranged to use the available space as efficiently as possible, keeping the structure compact and ultimately reducing the size of the robot.
Chapter 4. Design and Development of Related Electronics
The related electronics of the mobile platform (shown in Figure 4.1) are divided into motor drivers, a master control card, slave control cards and sensors.
Figure 4.1: Overview of the related electronics
4.1 Master Card
The master control card, shown in Figure 4.2 and Figure 4.3, is based on the PIC18F4520, a 40-pin enhanced flash microcontroller with 32 KB of program memory, 1536 bytes of SRAM data memory, 256 bytes of EEPROM data memory and a 10-bit A/D converter. The interfaces made available on the card are RS232, PWM, ADC outputs, general-purpose I/O, I2C and ICSP.
Figure 4.2: PCB layout for the master card
Figure 4.3: Fabricated master card
This family of microcontrollers offers the advantages of all PIC18 microcontrollers - namely, high computational performance at an economical price - with the addition of high-endurance Enhanced Flash program memory.
Some of the on-chip features of this PIC18F4520 are:
- One, two or four PWM outputs with:
- Selectable polarity,
- Programmable dead time and
- Auto-Shutdown and Auto-Restart
- 36 General Purpose I/O Pins
- Master Synchronous Serial Port (MSSP) module supports 3-wire SPI™ (all 4 modes) and I2C™ Master and Slave Modes
- 10-bit, up to 13-channel Analog-to-Digital Converter module (A/D) with:
- Auto-acquisition capability and
- Conversion availability during Sleep
- Flexible Oscillator Structure with:
- Four Crystal modes, up to 40 MHz,
- 4X Phase Lock Loop (available for crystal and internal oscillators),
- Two External RC modes, up to 4 MHz,
- Two External Clock modes, up to 40 MHz and
- Internal oscillator block
4.2 Slave Cards
The slave control card, shown in Figure 4.4 and Figure 4.5, is based on the PIC18F2431, a 28-pin enhanced flash microcontroller with 16 KB of program memory, 768 bytes of SRAM data memory, 256 bytes of EEPROM data memory and a 10-bit A/D converter. The interfaces made available on the card are RS232, PWM, ADC outputs, general-purpose I/O, I2C and ICSP.
This microcontroller again has the advantages of PIC18 microcontrollers - namely, high computational performance at an economical price - with the addition of high-endurance, Enhanced Flash program memory.
As shown in the overview (Figure 4.1), the slave cards control the speed and direction of the motors. Position and speed measurements taken using the QEI (Quadrature Encoder Interface) are fed back to the microcontroller through the I/O interface available on the cards. These measurements, together with the implemented control scheme, are then used to control the position, direction and speed of rotation of each motor.
Figure 4.4: PCB Layout for the slave card
Figure 4.5: Fabricated Slave Card
Some of the on-chip features of PIC18F2431 are:
- 14-bit Power Control PWM Module with:
- Up to 4 channels with complementary outputs,
- Edge- or center-aligned operation,
- Flexible dead-band generator and
- Simultaneous update of duty cycle and period
- Motion Feedback Module:
- Quadrature Encoder Interface with:
- 2 phase inputs and one index input from encoder,
- High and low position tracking with direction status and change of direction interrupt and
- Velocity measurement
- 24 General Purpose I/O Pins
- High-Speed, 10-bit A/D Converter with:
- Up to 9 channels and
- Auto-conversion capability
- Flexible Oscillator Structure with:
- Four crystal modes up to 40 MHz,
- Two external clock modes up to 40 MHz and
- Internal oscillator block:
- 8 user selectable frequencies: 31 kHz to 8 MHz
4.3 Motor Drivers
The motor driver card, shown in Figure 4.6, is based on MOSFETs: the upper half uses P-channel IRF5210 devices and the lower half N-channel IRF540 devices. The interfaces made available on the card are a PWM interface, power/GND pins and a 2-pin motor connector. To isolate the power module from the controller, a 4N35 opto-coupler is used for each of the four PWM signals.
The purpose of the motor driver circuit is to control the direction of rotation and the speed of the DC motor.
Figure 4.6: Motor Driver Card
Four transistor switches and a motor are arranged in the shape of the letter "H". The four switches are denoted Q1, Q2, Q3 and Q4; Q1 and Q3 are P-channel IRF5210 MOSFETs, while Q2 and Q4 are N-channel IRF540 MOSFETs.
When Q1 and Q4 are closed, the motor turns clockwise; when Q2 and Q3 are closed, it turns counterclockwise. When Q1 and Q3 are closed, the same voltage is applied to both motor terminals, braking the motor; the same happens when Q2 and Q4 are closed.
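The switching pattern just described can be captured in a small lookup. The sketch below uses illustrative names (the enum, struct and function are not from the actual firmware) and adds a check against shoot-through, the fault where both switches of one leg conduct at once:

```c
#include <stdbool.h>

/* H-bridge switch states for the four MOSFETs Q1..Q4 described above
 * (a sketch with hypothetical names).  Q1/Q3 are the high-side
 * P-channel devices, Q2/Q4 the low-side N-channel devices. */
typedef enum { MOTOR_CW, MOTOR_CCW, MOTOR_BRAKE } motor_cmd_t;
typedef struct { bool q1, q2, q3, q4; } bridge_t;

bridge_t bridge_for(motor_cmd_t cmd)
{
    bridge_t b = { false, false, false, false };
    switch (cmd) {
    case MOTOR_CW:    b.q1 = true; b.q4 = true; break; /* current flows one way   */
    case MOTOR_CCW:   b.q2 = true; b.q3 = true; break; /* current flows the other */
    case MOTOR_BRAKE: b.q1 = true; b.q3 = true; break; /* both terminals tied high */
    }
    return b;
}

/* Q1&Q2 (or Q3&Q4) on the same leg must never conduct together. */
bool bridge_is_safe(bridge_t b)
{
    return !(b.q1 && b.q2) && !(b.q3 && b.q4);
}
```

All three commands above pass the shoot-through check; real drivers also insert dead time when switching between them, which the PWM module's programmable dead-band generator provides in hardware.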
The robot is driven by two DC motors, so controlling the robot's motion means controlling these motors. The two motors are controlled by two separate microcontrollers, since each motor has a quadrature encoder attached and a single microcontroller supports only one quadrature encoder interface.
The quadrature encoder feeds back the position, speed and direction of rotation of the motor's output shaft to the microcontroller. One slave microcontroller controls the right motor, while the other controls the left motor.
The slave microcontrollers are interfaced with the master microcontroller over the I2C protocol. I2C is a bus interface incorporated into many devices such as sensors and EEPROMs; it is ideal for attaching low-speed peripherals for reliable communication over short distances. I2C uses only two pins for data transfer: SCL (Serial Clock) and SDA (Serial Data). SCL synchronizes the data transfer between the two chips.
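As an illustration of the kind of payload exchanged over SDA, the sketch below packs a signed 16-bit setpoint (for example a desired speed) into two bytes and unpacks it again. The actual wire format used by this firmware is not specified in the report, so the framing here is hypothetical:

```c
#include <stdint.h>

/* Hypothetical framing of a signed 16-bit setpoint into the two data
 * bytes sent over I2C, high byte first (not the firmware's actual
 * format, which this report does not specify). */
void pack_setpoint(int16_t value, uint8_t out[2])
{
    out[0] = (uint8_t)((uint16_t)value >> 8);
    out[1] = (uint8_t)((uint16_t)value & 0xFF);
}

int16_t unpack_setpoint(const uint8_t in[2])
{
    return (int16_t)(((uint16_t)in[0] << 8) | in[1]);
}

/* Round-trip helper: what the slave would recover from the bus. */
int16_t setpoint_roundtrip(int16_t value)
{
    uint8_t buf[2];
    pack_setpoint(value, buf);
    return unpack_setpoint(buf);
}
```

The master would write these two bytes after addressing the slave; the slave's I2C interrupt handler would reassemble them into the setpoint.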
4.4 Sensors
Four types of sensors are used on the robot for perception and to assist in executing actions: quadrature encoders, short-range IR sensors, long-range IR sensors and color sensors.
4.4.1 Short range and long range IR sensors
The long-range IR sensors (Figure 4.7) used in this project are made by Sharp. Their analog output varies from 2.8 V at 15 cm to 0.4 V at 150 cm with a supply voltage between 4.5 and 5.5 V DC, and each sensor has a Japanese Solderless Terminal (JST) connector. The sensor is well suited to detecting objects up to about 150 cm (5 ft) away.
The schematic, the tables of absolute maximum ratings and electro-optical characteristics, and the analog-voltage-versus-distance curve can be found in the sensor's datasheet.
Figure 4.7: Long range IR sensors GP2Y0A02YK0F
The short-range IR sensors (Figure 4.8) used on the robot are also made by Sharp. Their analog output varies from 3.1 V at 3 cm to 0.3 V at 30 cm with a supply voltage between 4.5 and 5.5 V DC, and each sensor has a JST connector.
Again, the schematic, the characteristic tables and the analog-voltage-versus-distance curve can be found in the sensor's datasheet.
Figure 4.8: Short range IR sensors GP2D120XJ00F
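The output of these Sharp sensors is roughly inversely proportional to distance. As an illustration, the sketch below fits V = a/d + b to the two short-range endpoints quoted above (3.1 V at 3 cm, 0.3 V at 30 cm) and inverts it; real firmware should interpolate a full calibration table from the datasheet curve rather than rely on a two-point fit:

```c
/* Two-point inverse-distance fit for the short-range sensor (a sketch;
 * the constants follow from 3.1 V @ 3 cm and 0.3 V @ 30 cm, not from
 * the datasheet's full characteristic). */
#define IR_A  9.3333    /* volt * cm */
#define IR_B -0.0111    /* volt */

/* Estimated distance in cm from the sensed analog voltage. */
double ir_distance_cm(double volts)
{
    if (volts <= IR_B)
        return -1.0;                 /* out of range / invalid reading */
    return IR_A / (volts - IR_B);
}
```

By construction the fit returns the calibration endpoints exactly; accuracy between and beyond them depends on how closely the sensor follows the 1/d model.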
4.4.2 Color sensors
The ADJD-S311-CR999 (21.6 × 12.7 mm, 0.85" × 0.50") is a very small color light sensor. For detecting colored features in the robot's surroundings, SparkFun's evaluation board for the ADJD-S311-CR999 (Figure 4.9) is used; it provides all the necessary support circuitry to discern small differences between visible colors.
Figure 4.9: Color sensor evaluation board for ADJD-S311-CR999
Some of the features of this sensor are:
- 10 bit per channel resolution
- Independent gain selection for each channel
- Wide sensitivity
- Two wire serial communication (SDA and SCL pins)
- Built in oscillator/selectable external clock
- Low power mode (sleep mode)
Chapter 5. Design and Development of the Control System
The first phase of the design and development of the control system was mostly about testing the control cards developed. No advanced control scheme was designed, simulated or implemented; instead, basic motor control and sensor interfacing were achieved and understood. Basic motor control involved the design, simulation and implementation of PIC18F2431-based proportional control for the DC motors; sensor integration was about understanding the operation of the ADC and using the digitized IR sensor data in very simple programs, such as displaying the measured value on the console. A few of the milestones achieved were the design, simulation and implementation of:
- Simple PWM generation
- Position control of the motor where reference position is input using computer’s HyperTerminal and communication with the computer made using RS232 interface
- Speed control of the motor where reference speed is input using computer’s HyperTerminal and communication with the computer made using RS232 interface
- Conversion of analog voltage obtained from IR sensors into digital equivalent and use of RS232 interface to display the digitized data on HyperTerminal
- I2C protocol
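The ADC-related milestone above amounts to scaling the 10-bit conversion result into a voltage before displaying it. A minimal sketch, assuming a 5.0 V ADC reference (so one LSB is about 4.9 mV):

```c
#include <stdint.h>

/* Convert a 10-bit ADC reading (0..1023) to millivolts, assuming a
 * 5.0 V reference (a sketch; the actual reference depends on the
 * card's VREF wiring). */
uint16_t adc_to_millivolts(uint16_t raw)
{
    return (uint16_t)((uint32_t)raw * 5000 / 1023);
}
```

The widened multiplication avoids overflow (1023 × 5000 exceeds 16 bits); the result can be printed to HyperTerminal over RS232 as described above.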
5.1 The DC Motor Control
The motion of the robot in the maze is governed by the motion of the two DC motors discussed in the previous chapter, so controlling the motion of the mobile platform requires controlling these motors. Two types of control are involved: position control, which takes the desired position as the reference input to the closed-loop system, and speed control, which takes the desired velocity as the reference input. This chapter discusses the design and development of the proportional control system for the motors.
5.1.1 PWM Generation
PWM module operation is controlled by a total of 22 registers; 8 of them configure the main features of the module, while the remaining 14 configure special features.
Table 1: Configuration Registers for Power PWM Module
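As an example of what these configuration registers encode: for the power PWM module in free-running mode, the datasheet relates the period to the time base as PWM period = (PTPER + 1) × prescale × 4 / Fosc, so the period register for a desired PWM frequency can be computed as below (a sketch; verify the relation against the datasheet for the oscillator mode in use):

```c
#include <stdint.h>

/* Period register value for the power PWM time base in free-running
 * mode:  PTPER = Fosc / (4 * Fpwm * prescale) - 1  (a sketch based on
 * the datasheet relation; integer truncation is ignored here). */
uint16_t ptper_for(uint32_t fosc_hz, uint32_t fpwm_hz, uint16_t prescale)
{
    return (uint16_t)(fosc_hz / (4UL * fpwm_hz * prescale) - 1);
}
```

For example, a 20 MHz oscillator and a 20 kHz PWM frequency with a 1:1 prescaler give PTPER = 249.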
5.1.2 Position Control
Position control is employed when the mobile platform is to be stopped at a desired position. Suppose the platform has to turn a corner by 90°, and observation has shown that a 90° clockwise turn requires one complete clockwise revolution of the left wheel and one complete revolution of the right wheel in the opposite direction. Position control then takes the desired number of revolutions as the reference and applies the brakes once the desired position has been reached.
To measure the number of revolutions the motor has turned in a given time, the QEI (Quadrature Encoder Interface) mentioned earlier is used. Position information is obtained by decoding the output of the encoders. Since the encoders produce 64 counts per revolution of the motor shaft, the number of revolutions moved can be calculated using the equation given below:
Number of revolutions = Encoder counts / Counts per revolution
The PIC18F2431 has 3 pins dedicated to the QEI: QEA, QEB and IDX. The quadrature encoder's output is fed back to the controller through these pins. Among the features the QEI implements are:
1. Direction of movement detection using direction change interrupt (IC3DRIF),
2. 16-bit up/down position counter register (POSCNT) and
3. Position counter interrupt (IC2QEIF)
The control logic block in the QEI detects the leading edges on the QEA and QEB pins and generates a count pulse. It also samples the index input signal and generates the direction-of-rotation signal. Once the direction of rotation is known, the position counter register (POSCNT) is incremented or decremented by the count pulses.
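In software, the decoding the QEI hardware performs corresponds to a small state table over the 2-bit (QEA, QEB) input: in one direction the Gray-code sequence is 00 → 01 → 11 → 10 → 00, and the reverse sequence decrements. A sketch of that table:

```c
/* Software equivalent of 4x quadrature decoding (a sketch).  States
 * are the 2-bit values (QEA << 1) | QEB.  Returns +1 or -1 for a valid
 * one-step transition, 0 for no movement or an invalid (skipped)
 * transition. */
int quad_step(int prev, int curr)
{
    static const int table[4][4] = {
        /* curr:  00  01  10  11       prev: */
        {          0, +1, -1,  0 },  /* 00 */
        {         -1,  0,  0, +1 },  /* 01 */
        {         +1,  0,  0, -1 },  /* 10 */
        {          0, -1, +1,  0 },  /* 11 */
    };
    return table[prev & 3][curr & 3];
}
```

Summing the return values reproduces the behaviour of the 16-bit POSCNT register: up counts for one direction of rotation, down counts for the other, and a rejected count when both channels appear to change at once.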
For proportional position control of the mobile platform's motors, the PIC18F2431 is programmed using the CCS C compiler and simulated in the Proteus ISIS Professional environment. Figure 5.1 below shows a block diagram of one such position control system.
Figure 5.1: Motor position control system where an analog signal is the desired position
Position Control Algorithm
Step 1: Initialization of configuration bits such as oscillator’s configuration bit and watchdog timer’s configuration bit
Step 2: Initialization of the registers involved in the motor's position control, such as QEICON, PWMCON, DTCON and POSCNT. Details of these registers can be found in Table 1 in the previous section.
Step 3: Initialization of variables
Step 4: Number of counts entered by the user
Step 5: Setting up QEI and PWM as per the requirement
Step 6: Direction control. If the direction entered by the user is clockwise, one pair of PWM pins is used; otherwise, the other pair is used. Which pair is active is decided by the values written to the duty cycle registers PDC0 and PDC1.
Step 7: Subtract the position counter's value from the reference value while the difference is nonzero. The position counter (POSCNT) is incremented for clockwise rotation and decremented for anticlockwise rotation.
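At their core, the steps above reduce to a proportional law on the position error. The sketch below isolates that law; register access and PWM output are abstracted away, and the gain is expressed as a ratio so the computation stays in integer arithmetic as it would on the PIC:

```c
/* Proportional position control law (a sketch of step 7).  POSCNT is
 * modelled as a plain integer; the return value is a signed duty
 * command, positive = clockwise, clamped to the PWM range, and 0 means
 * the target is reached (apply brakes). */
#define DUTY_MAX 1023

int position_control_duty(long reference, long poscnt,
                          long kp_num, long kp_den)
{
    long error = reference - poscnt;        /* step 7: the difference  */
    long duty  = error * kp_num / kp_den;   /* proportional term       */
    if (duty >  DUTY_MAX) duty =  DUTY_MAX; /* clamp to the PWM range  */
    if (duty < -DUTY_MAX) duty = -DUTY_MAX;
    return (int)duty;
}
```

The sign of the returned duty selects which pair of PWM pins drives the H-bridge (step 6); the magnitude is written to the duty cycle registers.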
5.1.3 Speed Control
Speed control is employed when the mobile platform has to move at a desired speed. Suppose the platform is moving at 20 cm/s and the desired speed is 15 cm/s. Velocity control takes the desired speed as the reference and ensures that the speed is held at that value until a new desired speed is received.
The speed of a DC motor is directly proportional to the applied voltage: the higher the voltage, the higher the speed. By changing the duty cycle of the PWM signal, the average voltage applied to the motor, and hence its speed, can be varied as required.
The procedure for measuring the number of revolutions the motor has turned, and thus obtaining its position, was covered in the previous section. To track and control the current velocity of the motor, the CAP1BUF/VELR registers are monitored continuously and compared with the desired velocity, which the user enters via HyperTerminal over the RS232 interface.
For proportional velocity control of the mobile platform’s motors, the microcontroller PIC18F2431 is programmed in CCS C compiler and simulated in Proteus ISIS Professional environment.
Speed Control Algorithm
Step 1: Initialization of configuration bits such as oscillator’s configuration bit and watchdog timer's configuration bit
Step 2: Initialization of different registers involved in the motor’s position control such as QEICON, PWMCON, DTCON, POSCNT etc.
Step 3: Initialization of variables
Step 4: Desired speed entered by the user
Step 5: Setting up QEI and PWM as per the requirement
Step 6: Initialization of interrupts
Step 7: Subtract the current speed from the desired speed while the difference is not equal to 0.
Difference = Desired Speed - Current Speed
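The loop in the steps above can be sketched as an incremental proportional update on this difference; the measured speed, which the firmware reads from the CAP1BUF/VELR registers, is simply a parameter here:

```c
/* Incremental proportional speed control (a sketch of step 7).  The
 * duty cycle is nudged by kp times the speed error each iteration and
 * clamped to the PWM range; at zero error the duty holds steady, which
 * is what keeps the speed constant at the desired value. */
#define SPEED_DUTY_MAX 1023

int speed_control_duty(int prev_duty, long desired, long current, long kp)
{
    long error = desired - current;      /* Difference = Desired - Current */
    long duty  = prev_duty + kp * error; /* incremental P update           */
    if (duty > SPEED_DUTY_MAX) duty = SPEED_DUTY_MAX;
    if (duty < 0) duty = 0;
    return (int)duty;
}
```

Unlike the position loop, the duty here is adjusted relative to its previous value, so a sustained load that slows the motor keeps pushing the duty upward until the error returns to zero.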
Testing of the hardware developed using these algorithms was successful. The results closely matched those obtained in the Proteus ISIS Professional environment, except for a few differences owing to the limitations of the simulation environment.
Chapter 6. Future Work
While significant strides have been made towards the development of the autonomous system according to the aims and requirements set by the supervisor, much remains to be done. To start with:
i. Path/task planning and programming to complete the task in minimum time. This involves studying the unstructured environment the robot will face, simulating different path planning algorithms, and finding the path that lets the robot reach its destination in the minimum possible time.
ii. The mobile platform can also be used for mapping. Once equipped with appropriate obstacle detection and avoidance and a mapping technique, the mobile robot can be used in an unstructured environment to construct a 2D map.
iii. The mobile robot can also serve as a research platform for multi-robot systems. The detachable ball-potting mechanism means the robot can be adapted to multiple purposes, provided appropriate structures are attached.
Appendix
Figure A.1: CAD Drawing for Base Plate (Aluminum)
Figure A.2: CAD Drawing for Gear Holder (Aluminum)
Figure A.3: CAD Drawing for Top Plate (Perspex)
Figure A.4: CAD Drawing for 1:1 Spur Gears (Mild Steel)
Figure A.5: CAD Drawing for robot wheel (Aluminum)
Figure A.6: CAD Drawing for complete assembly
Cite this work: Usman Ayub Sheikh (author), Muhammed Burhan Aslam (author), Mohammad Urf Maaz (author), 2013, Design, Development and Reactive Control of a Mobile Robot for Navigation in a Partially Unstructured Environment, Munich, GRIN Verlag, https://www.grin.com/document/311309