Autonomo

UTPA Sun Logo

School Logo    School Logo

Autonomo: An Extra–Vehicular Activity Robotic Assistant with High Level Command and Fail–Safe Operations

 

Topic #14

Advanced Robotics Technology

 

Team Space Autonomia

Ana Castillo, Electrical Engineering, Senior, Team Leader

Gabriel Rodriguez, Electrical Engineering, Senior

Nelson Carrasquero, Electrical Engineering, Senior

Mario Contreras Jr., Electrical Engineering, Senior

 

The University of Texas – Pan American,

Electrical Engineering Department

1201 W. University Drive, Edinburg, TX  78541

 

Faculty Advisor:

Dr. Mounir Ben Ghalia, Electrical Engineering, benghalia@panam.edu

NASA Mentor:

Mr. Dave Cheuvront, Advanced Technology Development Office, david.cheuvront-1@nasa.gov


Table of Contents

List of Figures

1. Introduction

2. Mentor / Research Group

3. Collaboration

4. Team ID / Member Profiles

5. Team Patch Design

6. Background

6. Design Objective

7. Design Plan

7.1 High Level Command

7.1.2 Speech Recognition System

7.2 Design of Manipulator Arm

7.2.1 Dimensions of the Manipulator Arm

7.2.2 Kinematics of the Arm

7.2.3 Failure Simulation of Manipulator Arm

7.3 Sensor Design

7.3.1 Brief Description of Robot Sensors

7.3.2 Fail-Safe System

7.3.3 Fail-Safe System Applied to Sensors

7.3.4 Fault Detection / Isolation

7.3.5 Recovery

7.4 Robotic Processing Units

8. Illustrations of Robot Structure

9. Reviewer Section

9.1 In Defense of Speech Recognition

9.2 Programmable Logic Controller

10. Customer Needs as Quantifiable Requirements and Constraints

11. Profiling Several Concepts

11.1 Speech Recognition

11.2 Sonar: Design Concept Development

12. Feasibility and Down-selection

13. Visual Elements to Communicate Concepts

13.1 High Level Command

14. Team's Effort to Conduct a Field Investigation to Supplement Textbook Learning

15. Conclusion

References

Appendix A

Appendix B: Budget


List of Figures

 

 

Figure 1: Team Patch

Figure 2: Block design approach of HLC

Figure 3: Design approach of a VOX circuit [1]

Figure 4: Schematic of speech recognition board [2]

Figure 5: Pin layout of HM2007 chip [3]

Figure 6: SRAM layout [4]

Figure 7: Manipulator Arm Configuration

Figure 8: Illustration of Arm Working Properly

Figure 9: Illustration of Arm with Disabled Joint and Kinematics

Figure 10: In-line (Shunt) Resistor and Meter [7]

Figure 11: Diagram of Fail-Safe System

Figure 12: Different views of robot with mounted sonar sensors

Figure 13: Fail-safe scenarios

Figure 14: Picture of the Viper PC/104 Board [8]

Figure 15: Block Diagram of HLC process

Figure 16: Front View of Robot

Figure 17: Back View of Robot

Figure 18: Walkie-talkies interfaced with VOX circuit

Figure 19: Walkie-Talkie acting as input to circuit


1. Introduction

 

In this project, the team proposes to build a mobile Extra-Vehicular Activity Robotic Assistant equipped with a manipulator arm.  The two main characteristics of the proposed robot are its ability to interpret and execute high-level commands given by a human operator, and its ability to detect, isolate, and recover from the different types of failures that might affect its hardware. The types of hardware failure the team will consider to illustrate fail-safe operation are: (i) failure of one of the electric motors actuating the manipulator arm, and (ii) failure of one of the robot's navigation sensors.

The high level commands will be used to initiate a multitude of tasks that the robot has to carry out autonomously. The fail-safe operations will allow the robot to continue its mission even in the event of a hardware failure. This proposal describes the proposed design and the method of approach to implement and test the high-level command (HLC) and the fail-safe operations (FSP) of the robot.

2. Mentor / Research Group

 

The JSC mentor for this project is Mr. David Cheuvront.  He is the Technology Integration Division Manager at the Lyndon B. Johnson Space Center. The Technology Integration Division oversees current designs and technologies involved in space exploration and decides whether technological advancements are needed to increase productivity in space exploration. Mr. Cheuvront has worked for the Canadian Space Agency in Ottawa, Japan's space agency, and the European Space Agency.

3. Collaboration

 

In addition to collaborating with Mr. Dave Cheuvront, the team will be advised by Dr. Mounir Ben Ghalia and Dr. Hamid Zarnani. The team will consult with other engineering students who have worked on previous robotics projects, and with past UTPA graduate students.

4. Team ID / Member Profiles

 

The team members are currently enrolled in ELEE 4461, Senior Design Project I.  The name chosen for the team is Space Autonomia.  Originally, the team came up with the name Autonomy in Space.  The name was chosen due to the autonomous function of the robot.  Then, after some discussions among the team, a consensus was reached to select the name Space Autonomia.  Autonomia is the Spanish word for autonomy. The team has named the robot to be designed and built: Autonomo.

The faculty advisor is Dr. Mounir Ben Ghalia.  He is an Assistant Professor in the Electrical Engineering Department who specializes in robotics and control systems theory.  Dr. Ben Ghalia can be contacted via email at benghalia@panam.edu.

The team is led by Ana Castillo, a senior Electrical Engineering student. The team also consists of Nelson Carrasquero, Mario Contreras Jr., and Gabriel Rodriguez, all senior Electrical Engineering students.  Ana and Nelson will be working on the fail-safe operations of the robot.  Mario and Gabriel will be working on the High Level Command system. All team members will contribute to the design and construction of the robot structure and the manipulator arm.

5. Team Patch Design

Figure 1:Team Patch

 

The team patch was designed to reflect both NASA's spirit in robotic space missions and the university's spirit.  The colors, green and orange, are the team's institutional colors.  The name of the project appears at the top of the design, the name of the team at the bottom, and the names of the team members along the sides.  At the center, the university's sun logo represents the team's institution.  Finally, two robots and two space rockets represent future robotic space missions.

 

6. Background

 

The area of robotics plays an important role in space exploration.  Different types of robotic technologies have been developed.  One of the technologies used in space exploration is tele-robotics, in which a robot is controlled from a remote, safe location by a human operator.

Fail-safe operations are increasingly important in autonomous or industrial robots, which are subject to failures.  In this project, a fail-safe system will be designed and built allowing the robot to detect, isolate, and recover from a hardware failure. The High-Level Command allows the robot to carry out a series of complex tasks upon receipt of a speech command from a human operator.  This feature removes the need for giving detailed instructions to the robot: once a high-level command is received, the robot carries out its tasks in an autonomous fashion.

6. Design Objective

 

Team Space Autonomia will demonstrate the effectiveness of an HLC and fail-safe system.  The design objectives are:

  1. Design a 3-link manipulator
  2. Design a chassis for the robot
  3. Design a control system for the drive module
  4. Design a collision detection and avoidance system
  5. Design a hardware failure detection module
  6. Design a recovery module
  7. Design a voice-recognition system

 

The HLC will allow the operator to command the robot. The robot will then process the HLC into a series of low-level instructions to carry out its main task.  The high-level command consists of a short phrase that the robot processes in order to identify the task it must perform.  For instance, the operator will give the command "Task 1" and the robot will respond accordingly.

The fail-safe system will detect, isolate, and recover from any faults that may occur during the robot’s mission. 
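The expansion of a high-level command like "Task 1" into low-level instructions can be sketched as a simple lookup. The task names and step lists below are hypothetical illustrations, not the team's actual task set:

```python
# Hypothetical sketch of HLC expansion: one spoken phrase maps to an
# ordered list of low-level instructions. Task contents are assumptions.
TASK_LIBRARY = {
    "task 1": ["drive_to_site", "lower_arm", "collect_sample", "store_sample"],
    "task 2": ["drive_to_site", "scan_area", "return_to_base"],
}

def expand_hlc(command: str) -> list[str]:
    """Map one high-level command to its low-level instruction sequence."""
    steps = TASK_LIBRARY.get(command.lower())
    if steps is None:
        # Unrecognized phrase: fall back to the backup low-level command mode.
        raise ValueError(f"unrecognized command: {command!r}")
    return steps
```

A lookup table keeps the robot's autonomy simple and auditable: the operator's single utterance selects a pre-validated sequence rather than free-form behavior.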

7. Design Plan

7.1 High Level Command

 

The HLC will consist of a voice command that a human agent will issue to the robot.

The main reasons for adopting a speech-command as the form of HLC operation are:

1.       Security purposes

2.       High-level command implementation

3.       Voice command is a suitable technique for communication

The first reason ensures that the robot will respond only to authorized operators, guaranteeing system operation security.  Although circumvention is possible, the effort needed to do so acts as a deterrent.  Other types of data (e.g., text) could be sent instead, but this has a drawback (the third reason).  A single verbal HLC allows complicated tasks to be carried out.  The drawback of using text for the HLC arises from the particular use of the robot.  Consider, for instance, the case where the robot has to carry out military tasks under heavy fire: the Commanding Officer (CO) could simply speak to the robot to have it perform its task.

Using other types of HLC requires the CO to spend more time sending data, either by keyboard or by using a stylus to select a task on a PDA-like device.  Another example is that of an astronaut commanding a robot to assist in carrying out a mission.

7.1.2 Speech Recognition System

 

       Figure 2 shows the block diagram of the HLC module.

Figure 2: Block design approach of HLC

The team proposes the use of a wireless microphone headset to transmit the voice signal to the robot.  The receiver will be a walkie-talkie on the same frequency as the headset.  The transmitter will be a redesigned walkie-talkie performing as a wireless microphone headset.  This transmitter will have an important feature: a Voice Operated Transmitter (VOX).  The general circuit in Figure 3 below gives a general approach to designing a VOX.

Figure 3: Design approach of a VOX circuit [1]

 

The speech recognition system will process the signal and store the command in a static RAM IC.  Figure 4 shows the schematic of the speech recognition board. 

Figure 4: Schematic of speech recognition board [2]

For a more detailed picture of the HM2007 CMOS chip, see Figure 5.

Figure 5: Pin layout of HM2007 chip [3]

Similarly, Figure 6 shows a more detailed view of the SRAM IC.

Figure 6: SRAM layout [4]

 

This circuit allows the operator to speak a word of less than 0.92 seconds.  Using a speaker-dependent, isolated-word method, the speech recognition system stores a maximum of 40 words in an 8K×8 static RAM.  The keypad is used to store these words in memory [5].  For instance, pressing "0" then "1" programs the word as word 01, and so on up to word 40.

From Figure 2, the LED acts as a status indicator for the operator.  If the circuit accepts the word, the LED flashes.  If the LED does not flash, the word was not correctly programmed and must be programmed again.  The 7-segment display shows the word number currently being programmed [5].

For security purposes, the robot will acknowledge a high-level command only if it is issued by one of the team members.  Hence, each command word has to be recorded by each team member in his or her own voice.  In addition, if the robot encounters problems interpreting high-level commands, a backup system will be used to send individual low-level commands to the robot (e.g., move forward, lift arm).  Once the command is given and correctly processed, the robot will perform its task.
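The 40-word budget and per-member recordings suggest a simple slot allocation. The sketch below is a hypothetical scheme, not the team's documented one: four operators each record the same ten commands, so any recognized word number maps back to one shared command regardless of who spoke it.

```python
# Hypothetical slot allocation for the HM2007's 40 speaker-dependent
# word slots: 4 operators x 10 shared commands.
WORDS_PER_MEMBER = 10
NUM_MEMBERS = 4

def slot_number(member_index: int, command_index: int) -> int:
    """Keypad word number (1-40) for a member/command pair."""
    if not (0 <= member_index < NUM_MEMBERS and 0 <= command_index < WORDS_PER_MEMBER):
        raise ValueError("index out of range")
    return member_index * WORDS_PER_MEMBER + command_index + 1

def command_from_slot(slot: int) -> int:
    """Recover the shared command index from a recognized word number."""
    return (slot - 1) % WORDS_PER_MEMBER
```

This keeps recognition speaker-dependent (a security property) while letting all operators trigger the same task set.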

7.2 Design of Manipulator Arm

7.2.1 Dimensions of the Manipulator Arm

 

The segment between the base of the arm and joint 1 will be 15 cm long. Link 1, the segment between joint 1 and joint 2, will be 53 cm long. Link 2 and link 3 are the segments between joint 2 and joint 3 and between joint 3 and joint 4, respectively; each of these links will be 26.5 cm long. The end effector will be 20 cm long. The links between joint 1 and joint 4 will therefore span a total of 1.06 m. Aluminum tubes with a diameter of 2.5 inches will be used to build the links of the arm. The base of the arm will be attached to the mobile platform at a height of 60 cm.

Figure 7: Manipulator Arm Configuration

 

7.2.2 Kinematics of the arm

 

The arm configuration is composed of a base platform that acts as the base of the arm and connects the arm to the mobile platform. The base platform will have a servo motor that performs a yaw motion. This motion will allow the arm to reach an adequate position to store the sample in the containers placed at the back of the robot. Joints 1, 2, 3, and 4 will move in pitch to provide the degrees of freedom needed to perform the sample-collection task. The actuator of joint 2 will illustrate the redundancy concept for the implementation of the fail-safe system: it will act as a backup in case the actuator of joint 3 fails. Therefore, the actuator of joint 2 will remain inactive as long as the actuator of joint 3 works properly.

Figure 8 is a scaled illustration of the actual dimensions of the arm links. The ratio of the diagram to the actual dimensions is 10:1.  The joint 1 displacement will be represented by θ1, as shown in Figure 7; joint 1 will have a displacement range of 160 degrees. Joint 2 will be used as the backup. The joint 2 displacement will be represented by θ2, with a range of 180 degrees. The joint 3 displacement will be represented by θ3. Since joint 3 will be used to simulate the failure case, its displacement range is limited to the freedom the arm needs to complete the task once the backup (the actuator of joint 2) is enabled; the displacement range of joint 3 will be 90 degrees. The joint 4 displacement will be represented by θ4, with a range of 180 degrees.

Figure 8: Illustration of Arm Working Properly

 

In the absence of failure, only joints 1, 3, and 4 will be active to perform the task. Figure 7 illustrates the points that the end effector will have to reach in order to collect the sample of matter from the surface. These points form a straight-line path that will be reached by actuating joints 1, 3, and 4. By parallel projection on the X and Y axes, the coordinates of the points shown in the figure above can be determined using the following equations [6]:

            x = l1 cos(θ1) + l2 cos(θ1 + θ3) + l3 cos(θ1 + θ3 + θ4)            Equation 1

            y = l1 sin(θ1) + l2 sin(θ1 + θ3) + l3 sin(θ1 + θ3 + θ4)            Equation 2

where l1, l2, and l3 denote the effective link lengths between successive active joints.

Since the coordinates of these points will be known, the joint displacements θ1, θ3, and θ4 needed to place the end effector at the desired positions will be determined by applying inverse kinematics.
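The projection equations can be checked numerically. The sketch below is an illustration, not the team's implementation; it assumes joint 2 is locked at 0 degrees, so the effective planar chain is joint 1 to joint 3 (0.53 m + 0.265 m), joint 3 to joint 4 (0.265 m), then the 0.20 m end effector:

```python
import math

# Effective link lengths in metres, derived from the stated dimensions
# under the assumption that joint 2 is locked straight.
A1, A2, A3 = 0.795, 0.265, 0.20

def end_effector_xy(theta1, theta3, theta4):
    """(x, y) of the end-effector tip by parallel projection on the X and Y axes."""
    a = theta1
    b = theta1 + theta3
    c = theta1 + theta3 + theta4
    x = A1 * math.cos(a) + A2 * math.cos(b) + A3 * math.cos(c)
    y = A1 * math.sin(a) + A2 * math.sin(b) + A3 * math.sin(c)
    return x, y
```

With all angles at zero the chain is fully extended: A1 + A2 = 1.06 m to joint 4, matching the span stated in Section 7.2.1, plus the 0.20 m end effector.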

7.2.3 Failure Simulation of Manipulator Arm

 

An object (see Figure 9) will be placed manually between link 2 and link 3, hindering the motion of joint 3.  When an object is placed between the two links connected by joint 3, the motor will require more power to produce a greater torque; therefore, the magnitude of the current drawn by the motor will increase. An abnormal increase in the current drawn by the motor will be flagged as an effector failure. To detect the failure, a system will be used to measure and monitor the current drawn by the electric motor of joint 3. This system will consist of an in-line (shunt) resistor circuit that measures the current drawn by the motor, interfaced with the microcontroller to monitor the behavior of that current.

Figure 9: Illustration of Arm with Disabled Joint and Kinematics

 

 

Figure 10 illustrates the circuit that will be used to measure the current drawn by the motor.  Note that a resistor is placed between the power supply and one of the motor terminals. The current drawn by the motor causes a voltage drop across the resistor. This signal will be amplified using an LM324 op-amp in a non-inverting configuration, as illustrated in the figure. The microcontroller will receive the resulting output signal and compare it to the range of current the motor should draw under normal operation. If the actual output magnitude differs from the ideal output magnitude by a large percentage, the microcontroller will determine that the actuator is not working properly.
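The microcontroller-side check can be sketched as below. The shunt value, amplifier gain, and current threshold are assumed values for illustration only, not the team's chosen components:

```python
# Overcurrent check sketch: recover motor current from the amplified
# shunt voltage, then compare against an assumed nominal ceiling.
R_SHUNT = 0.1        # ohms (assumed shunt resistance)
GAIN = 11.0          # assumed non-inverting op-amp gain (1 + Rf/Rg)
I_NOMINAL_MAX = 1.5  # amps (assumed normal-operation ceiling)

def motor_current(adc_volts: float) -> float:
    """Motor current implied by the amplified shunt voltage at the ADC."""
    return adc_volts / (GAIN * R_SHUNT)

def actuator_fault(adc_volts: float) -> bool:
    """Flag a jammed joint when current rises abnormally above nominal."""
    return motor_current(adc_volts) > I_NOMINAL_MAX
```

In practice a real monitor would also filter out brief start-up current spikes before declaring a fault.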

Figure 10: In-line (Shunt) Resistor and Meter [7]

Figure 11 represents the algorithm that the fail-safe system will follow to detect, isolate, and recover from the failure.  The block denoted as in-line (shunt) resistor represents the circuit previously discussed. As mentioned before, this block will be connected to the servo motor controller block, which supplies power to the servo motor, and to one of the motor's input terminals. The output of the in-line resistor and meter circuit will be connected to one of the inputs of the microcontroller. Once the failure simulation is executed, the microcontroller will run a program to isolate the servo motor of joint 3 and determine the position of the joint from the signal sent by the encoder. Once the position is determined, the microcontroller will activate joint 2 to perform the straight-line path needed to collect the sample of matter.
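One pass of the detect / isolate / recover sequence for joint 3 can be sketched as follows. The helper callables are hypothetical placeholders for the actual motor-controller and encoder interfaces:

```python
# Sketch of the fail-safe sequence: detect an overcurrent fault,
# isolate joint 3, read its encoder, and hand over to backup joint 2.
# The callables are hypothetical hardware-interface stand-ins.
def fail_safe_step(current_amps, limit, disable_joint3, read_encoder, activate_joint2):
    """Return the name of the recovery joint used, or None if no fault was seen."""
    if current_amps <= limit:
        return None                       # normal operation: joint 2 stays idle
    disable_joint3()                      # isolate the faulty actuator
    position = read_encoder()             # where did joint 3 stop?
    activate_joint2(start_from=position)  # backup joint resumes the path
    return "joint2"
```

Injecting the hardware operations as callables keeps the recovery logic testable without the robot present.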

Figure 11: Diagram of Fail - Safe System

7.3 Sensor Design

7.3.1 Brief Description of Robot Sensors

 

The EVA robot that the team will be constructing, named Autonomo, is about 91.44 cm wide, 121.92 cm long, and 121.92 cm tall. The robot will have sensors on all sides.  Sixteen sensors from the Polaroid 6500 and Devantech SRF04 series will be used.  The Polaroid 6500 series sensors come packaged with a Polaroid 6500 series ranging module and a transducer.  The team selected the 600 series transducer, which has a diameter of approximately 4.29 cm and can sense obstacles from 15 cm up to 11 m.  The Devantech SRF04, approximately 4.45 cm across, acts as both emitter and receiver, so it does not need a separate transducer to transmit the ping. This sensor is used for short-range obstacle detection, from 3 cm to 3 m.

7.3.2 Fail-Safe System

 

The team decided on the number of sensors and their arrangement based on the cone-shaped beam that each sensor produces [XXX].  The cone shape is related to the angular displacement, which determines the coverage area that the sensor can track.  The team determined how much area each sensor covers. The sensors are placed at different angles and certain distances apart to achieve feasible range detection. The 600 series transducers are mounted at 25˚ from the horizon; the Devantech SRF04 sensors are placed at about 37˚.  The Polaroid 600 series transducers are spaced 39 cm apart, while the Devantech SRF04s are 24.5 cm apart.
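The effect of the mounting tilt can be sketched geometrically: a downward-tilted cone axis meets flat ground at a distance set by the mounting height and tilt angle. The flat-ground model and the heights used are simplifying assumptions for illustration:

```python
import math

# Geometry sketch for tilted sonar cones: where does the beam axis
# meet flat ground? (Flat-ground model is a simplifying assumption.)
def ground_hit_distance(height_m: float, tilt_deg: float) -> float:
    """Horizontal distance at which a downward-tilted cone axis meets the ground."""
    return height_m / math.tan(math.radians(tilt_deg))
```

A shallower tilt (25° for the Polaroid transducers) reaches farther out than a steeper one (37° for the SRF04s) from the same mounting height, which matches the long-range / short-range division of labor described above.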

The robot will have 4 Devantech SRF04 sensors on the front for short-range obstacle detection, and 12 Polaroid 600 series transducers: 3 on the front top of the robot, 3 on the right side, 3 on the left side, and 3 on the back.  Figure 12 illustrates the front, back, right, and left sides of the robot.

Figure 12: Different views of robot with mounted sonar sensors

7.3.3 Fail-Safe System Applied to Sensors

 

The sensors may become damaged over time as their internal circuitry wears out. Other causes of sensor failure include external noise or a power-source problem. One of the most common sensor problems is inaccurate readings during obstacle detection. Because of these failure modes, a sensor fail-safe system was developed for this design. In this project, however, the team will only simulate the failure of the Devantech sensors. To simulate the failure, each sensor will be covered with plastic.

7.3.4 Fault Detection / Isolation

 

The sensors will be interfaced with the PIC18F8680 microcontroller, which will monitor the readings given by the two front Devantech sensors.  A faulty Devantech ultrasonic sensor will be isolated by shutting off its power supply, so that it does not interfere with the operation of the robot.


7.3.5 Recovery

 

As stated previously, the robot has sensors on all sides. All but one of the Polaroid 600 series transducers are active; the disabled transducer will act as the backup when the two front Devantech SRF04 sensors fail.  This scenario is illustrated in Figure 13.  The grey Polaroid 600 series transducer is off while all of the sensors are working properly.  As soon as there is a problem with the two front Devantech SRF04 sensors, pictured in red, the yellow transducer turns on to cover for them.  To simulate this failure, both supposedly damaged sensors will be covered with plastic covers.
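The fail-over rule can be sketched as a small decision function. Names and the "both front sensors failed" trigger condition are illustrative readings of the scenario above:

```python
# Sketch of the sensor fail-over rule: when both front SRF04s report
# faults, power them down and enable the reserved Polaroid transducer.
def select_front_sensors(srf04_left_ok: bool, srf04_right_ok: bool) -> dict:
    """Decide which front sensors are powered after a fault check."""
    both_failed = not (srf04_left_ok or srf04_right_ok)
    return {
        "srf04_powered": not both_failed,    # isolate faulty sensors' supply
        "backup_polaroid_on": both_failed,   # reserved transducer covers for them
    }
```

Keeping the rule as a pure function of the health flags makes the recovery behavior easy to verify before it is wired to real power-switching hardware.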

Figure 13: Fail-safe scenarios

7.4 Robotic Processing Units

 

     In designing the control system, the team looked at the many options available; many different systems and controllers can be used for various applications. The design of the Extra-Vehicular Activity Robotic Assistant (EVARA) has several components. The main components the team will work with are:

·         Speech Recognition System

·         Sensors

·         Manipulator Arm

·         Motor Control

To process all this information, the PC/104 board will process and send signals to the microcontrollers. Two PIC18F8680 microcontrollers will be used to process the low-level commands that determine how the manipulator arm and the motors perform.

     The team is in the process of selecting a PC/104 board that will suit the processing needs of the EVARA. The Arcom Viper-M64-F16 processor board, shown in Figure 14, is currently the best candidate.  It has a 400 MHz Intel processor with 64 MB DRAM, 16 MB Flash with RedBoot, and 256 KB of battery-backed SRAM [8].  After getting feedback from other PC/104 board suppliers, the team will settle on one.


Figure 14: Picture of the Viper PC/104 Board [8]

 

Microchip’s PIC18F8680 microcontroller has the following specifications [9]:

·         65,536 bytes of program memory

·         68 I/O pins

·         1,024 bytes of EEPROM data memory

·         3,072 bytes of RAM

·         16-channel, 10-bit ADC

·         40 MHz maximum speed

One of the microcontrollers (PIC1) will control the motors of the robot and will receive and send data to the sensors for collision avoidance. The other microcontroller (PIC2) will control the signals to the manipulator arm.  Figure 15 shows a block diagram of the HLC process:

Figure 15: Block Diagram of HLC process

     Once a command is processed, the PC/104 board will send low-level commands to the microcontrollers, which send signals to the DC motors and the manipulator arm's servo motors. Figure 15 also shows how the sensors provide feedback to the microcontroller, which in turn is processed by the PC/104 board.
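The split of duties between the PC/104 board and the two PICs can be sketched as a routing table. The command names are hypothetical; the PIC1/PIC2 division follows the description above:

```python
# Sketch of PC/104-side routing: each low-level command goes to the
# microcontroller responsible for it (PIC1: drive motors and sonar;
# PIC2: manipulator arm). Command names are illustrative.
ROUTES = {
    "drive": "PIC1", "stop": "PIC1", "read_sonar": "PIC1",
    "lift_arm": "PIC2", "lower_arm": "PIC2", "grip": "PIC2",
}

def route(command: str) -> str:
    """Name the microcontroller that should receive this low-level command."""
    try:
        return ROUTES[command]
    except KeyError:
        raise ValueError(f"no controller handles {command!r}") from None
```

Centralizing the routing on the PC/104 keeps each PIC's firmware simple: it only ever sees commands it is responsible for.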

 

8. Illustrations of Robot Structure

 

Figures 16 and 17 show the front view of the robot and the back view of the robot respectively.


Figure 16: Front View of Robot

Figure 17: Back View of Robot

 

9. Reviewer Section

9.1 In Defense of Speech Recognition

 

Defending Team Space Autonomia's choice of a speech recognition system against NASA's reviewers is a daunting task.  Nonetheless, the following is offered in defense of such a system.  As an EVA project, the inclusion of a speech recognition system does add value; it is not some fiat proposed by the team.  NASA currently employs such a system in its ERA [10].  Also, as the authors of Providing Robotic Assistance during Extra-Vehicular Activity state, “The ERA team is specifically interested in the issues of how to produce a robot that can assist someone in a spacesuit. Some of these issues include astronaut/robot communication, such as voice . . . “ [11].

     The security inherently designed into the speech recognition system also adds value.  From hackers attempting denial of service [12], to defacing the NASA main site [13], to shutting down servers hours after the Columbia tragedy [14], NASA's security has been remotely compromised before; without security, a higher degree of harm would have resulted.  Likewise, without security, Autonomo would be at the disposal of any operator, malicious or not.  With a security feature, the robot will not perform inadvertent tasks.

     Astronauts are highly competent and trained professionals.  They are trained to remain calm in undesirable situations and to think reasonably under pressure.  Confidence and familiarity arise from training.  Therefore, whether the astronaut presses a button or utters a command makes no difference; it is merely another feature of an automated machine.

9.2 Programmable Logic Controller

 

The specifications of the controller that will be used are now included in the proposal above. The microcontrollers for the motor control, sensors, and manipulator arm have been selected and are in the process of being interfaced with each individual component.

10. Customer needs as quantifiable requirements and constraints

 

For this project, NASA has several objectives.  One of them is the fail-safe ability to detect and respond appropriately to damaged sensors.  This need is satisfied because the Extra-Vehicular Activity Robotic Assistant (EVARA) that the team will build will respond appropriately to damaged long-range or short-range sensors.

Among the constraints NASA needs to consider are time and money.  First, this is a two-semester project, and although the team will put in every effort to complete it on time, the project may be constrained by time limitations.  This semester, the team members are taking senior-level courses that demand study and homework time, and some members also have jobs, so the team cannot dedicate all of its time to the project.  The other concern is money: NASA provides only $2,000 for the entire project, and the construction of the EVARA may exceed that amount.

11. Profiling Several Concepts

11.1 Speech Recognition

In brainstorming for the speech recognition system, the suggestions included:

  • Speaker independent
  • Letting PC/104 process speech
  • Voice activated
  • Unintelligible detection (LED-feedback)
  • Using pair of walkie-talkie
  • Using a headset microphone to transmit, with walkie-talkie as receiver

11.2 Sonar: Design Concept Development

 

The development of this design is based on some ideas from the ERA Robotic Assistant that NASA developed in the past.  However, there are some differences in how the team will construct its EVA robot.  One is the way the robot will track obstacles.  For example, Boudreaux, the EVA Robotic Assistant, uses sensors for tracking humans: a SICK LMS-200 laser and stereo vision using FireWire cameras.  In comparison, the team's tracking system uses sonar sensors to check for obstacles and make decisions in time.

 

12. Feasibility and Down-selection

 

     Not all of the suggestions were adopted for the speech recognition HLC.  For example, making the system speaker-independent would result in lax security.  However, the system is voice-activated and has a rudimentary unintelligible-word detection technique.

     Originally, the transmitter and receiver were simply a pair of walkie-talkies.  This invalidated one of the reasons for using speech as an HLC (the ability to work in a parallel fashion), since it required the operator to use his hands to communicate with the robot.  To circumvent this, a headset microphone with a voice-activated feature was adopted.  Also, implementing speech recognition in dedicated hardware relieves the PC/104's processor (around 300 MHz), freeing more processing power for other tasks.

 

13. Visual Elements to Communicate Concepts

 

13.1 High Level Command   

 Of the reasons listed above for basing the HLC on speech recognition, the ability to perform parallel tasks is paramount. For this reason, one of the walkie-talkies will be disassembled and revamped with a voice-activated circuit.  The Voice Operated Transmitter (VOX) will allow the operator to simply speak and have transmission occur, without needing to momentarily stop his task.  Shown below is a basic VOX circuit with a pair of walkie-talkies incorporated (Figure 18).

 

Figure 18: Walkie-talkies interfaced with VOX circuit

     The second walkie-talkie acts as the receiver.  This allows the frequencies to be matched and permits an easy, straight-from-the-box implementation.  Figure 19 shows the speech recognition system of Figure 2 modified to use a walkie-talkie as the receiver (and input to the system), rather than a microphone as the input to the HM2007.

Figure 19: Walkie-Talkie acting as input to circuit

 

14. Team’s effort to conduct a field investigation to supplement textbook learning

The team will benefit greatly from the field trip to NASA Johnson Space Center.  The trip will provide ideas for implementing the current project as well as future projects, and will give the team a chance to gain a better understanding of EVA technologies.  Regarding the tracking part, the team is interested in learning about other tracking systems JSC has developed in the past.

 

 

15. Conclusion

Today, autonomously operated machines play a very important role in space exploration missions.  With these systems in place, tasks can be carried out efficiently by astronauts or by others.  The proposed design will illustrate a conceptual autonomous robotic system.  The integrated HLC and fail-safe operation will make future space robotics more robust.


References

 

[1] VOX circuit: http://www.rason.org/Projects/basicvox/basicvox.htm

[2] Speech Recognition layout:

      http://www.imagesco.com/articles/hm2007/SpeechRecognitionTutorial02.html

[3] HM2007 chip: http://www.the4cs.com/~corin/cse477/toaster/datasheet.pdf

[4] SRAM IC: http://bszx.iinf.polsl.gliwice.pl/mikroiso/element/6264.html

[5] Iovine, John. (1998). Robots, Androids, and Animatrons: 12 Incredible Projects You

     Can Build. New York. McGraw-Hill.

[6] Duffy, Joseph. (1996). Statics and Kinematics with Applications to Robotics. New York. Cambridge University Press.

[7] Clark, Dennis. Owings, Michael. (2003). Building Robot Drive Trains. New York. McGraw-Hill

[8]  http://www.arcom.com/products/icp/pc104/processors/VIPER.htm

[9]  http://www.microchip.com/download/lit/pline/picmicro/families/18fxx8/30491b.pdf

[10] NASA using Voice-Command: http://vesuvius.jsc.nasa.gov/er_er/html/era/era.html

[11] Burridge, Robert. Graham, Jeffrey. “Providing robotic assistance during extra-

      vehicular activity” SPIE November 2001,

     [http://vesuvius.jsc.nasa.gov/er_er/html/era/Information/spie_v9.pdf]

[12] Festa, Paul. “Hackers attack NASA, Navy”. CNET.  4 Mar. 1998

     <http://news.com.com/2100-1001-208692.html?legacy=cnet>

[13] Friel, Brian. “NASA Web Site Hacked” Zeitfenster.  7 Mar. 1997.

      <http://www.zeitfenster.de/firewalls/nasa-hacked.html>

[14] Roberts, Paul. “NASA Servers Hacked” PCWorld.  3 Feb. 2003.

      <http://www.pcworld.com/news/article/0,aid,109174,00.asp>



Appendix A


Appendix B: Budget


ITEM NAME                         COST

Sensors                           $700.00

Heavy Torque Electric Motors      $500.00

PC/104 Board                      $700.00

Chassis and Wheels                $400.00

PIC (microcontrollers)            $20.00

PIC Pro Starter Kit               $200.00

Supplies                          $100.00

Total                             $2,620.00