
Lab Documentation

Software

C Programs

ManualDrive

  1. Takes keyboard inputs
    • Has steering, throttle, maintain velocity, and braking inputs
  2. Interfaces with the positioning system
  3. Useful for testing vehicle dynamics and .tea code, and for general manual driving
  4. Does not time out, unlike UltraSteer
  5. Outputs data to file

UltraSteer

  1. Takes keyboard inputs
    • Has steering and throttle inputs
  2. Interfaces with the positioning system
  3. Outputs which ceiling units it’s received data from and what it received
  4. Turning on DEBUG_OUTPUTS and/or DEBUG_RECEIPTS in positioning.c provides extra outputs for debugging positioning issues
  5. Useful for testing the positioning system and also for general manual driving (doesn’t output speed or steering data though, see PathPlan)

PathPlan

  1. Has one input argument: the path to be followed
    • This must be a text file located in the same directory as the executable
    • Use any random text file if PATHPLAN_ON is set to 0
  2. Takes keyboard inputs
    • Has steering, throttle, and torque inputs
  3. Interfaces with the positioning system
  4. If PATHPLAN_ON in pathplan.c is set to 1:
    • Runs the initialization function, estimator, and path follower functions
    • Steering input is disabled
    • Outputs initialization data followed by speed, steering, estimated x/y/heading, and new position data (if any)
      • Outputs are separated by commas and can be easily pasted into Excel (paste as text and select comma as the column delimiter)
    • Useful for running the path follower and making sure the estimator and steering controller are working correctly
  5. If PATHPLAN_ON in pathplan.c is set to 0:
    • Does NOT run the initialization function, estimator, or path follower
    • Outputs speed, throttle/torque (depending on current mode, see main()), and PWM, but can be modified easily.
      • Outputs are separated by “@” symbols and can be easily pasted into Excel (paste as text and use “@” as the column delimiter)
    • Useful for logging data while running the car manually for dynamics testing, etc.

DataTrans

  1. Takes keyboard inputs
    • Has steering and throttle inputs
  2. Receives positioning data but doesn’t use it
  3. Sends and receives data to/from other cars
  4. Can be modified to output data received from specific vehicles
  5. Useful for testing out the inter-vehicle communication system and making sure data is being transmitted correctly

ACC

WIP.

ca2

WIP.

C Functions

This section contains documentation for all important and/or unusual functions.

main()

Note: This description of main() should encompass all the features that the final main() will have. Any program-specific additions/modifications to main() that will not be in the final version should be documented in that program’s section above.

  1. Sets up communication with the positioning system.
  2. Sets up threads for inter-vehicle communication system.
  3. Sets up the SSH keyboard interface by changing the default terminal I/O settings
    • Settings are restored before the main() ends.
    • Up/Down arrows to accelerate/decelerate
    • Left/Right to steer
    • Enter to set 0 throttle/torque (via ovrd_throttle)
    • ‘/’ to re-center steering
    • ‘m’ to drop a marker in the output (useful for counting # of loops, marking other events during a run)
    • ‘t’ toggles between throttle input and torque input modes
    • Spacebar sets throttle/torque to 0 and stops program execution (via ovrd_throttle)
    • Keyboard inputs should have highest priority (so that the car can be stopped). Make sure nothing modifies ovrd_throttle after keyboard inputs are processed, otherwise the car MAY NOT STOP.
  4. Enters a while-loop that only ends when the argument signal is set to 0 (i.e. by pressing the spacebar)
  5. The while-loop contains an if-statement that is triggered every 100ms by the microprocessor (Brainstem). All the important functions only execute when this if-statement is triggered.
  6. Calls init_heading() only once. Calls run_estimator() and run_controller() every triggered iteration.
    • Setting PATHPLAN_ON to 0 disables these three function calls and sets it up to just use keyboard inputs so it works like Ultrasteer, only with different print statements.

WIP.
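Putting the description above together, here is a rough, self-contained skeleton of the loop. This is a sketch, not the project's actual main(): the helper functions are stubs standing in for the keyboard handler, the 100 ms Brainstem trigger, init_heading(), run_estimator(), and run_controller(), and the variable names are taken from the text where possible.

#define PATHPLAN_ON 1                /* assumption: 0/1 switch as described above */

static int signal_flag = 1;          /* the "signal" variable: cleared by the spacebar handler */
static int zeta_set = 0;             /* set once init_heading() has completed */

/* Stubs standing in for the real project functions */
static void process_keyboard_input(void) { signal_flag = 0; /* stub: real code clears this only on spacebar */ }
static int  brainstem_triggered(void)    { return 1;        /* stub: real code fires every ~100 ms */ }
static void init_heading(void)           { zeta_set = 1; }
static void run_estimator(void)          { }
static void run_controller(void)         { }

int main(void)
{
    while (signal_flag) {
        /* Keyboard handling runs first so the car can always be stopped;
           nothing after this point should overwrite ovrd_throttle. */
        process_keyboard_input();

        if (brainstem_triggered()) {      /* the 100 ms if-statement */
            if (PATHPLAN_ON) {
                if (!zeta_set)
                    init_heading();       /* runs exactly once */
                run_estimator();
                run_controller();
            }
            /* apply throttle/steering, write the log line, etc. */
        }
    }
    return 0;
}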

init_heading()

  1. Located in path_fns.c
  2. Takes multiple (NUM_MEAS) position measurements at the initial rest position.
  3. Moves the vehicle forward slightly and takes multiple measurements at the new rest position.
  4. Selects the median x- and y-coordinates at each position and uses those to compute the initial heading angle (z_zeta[0]) via arctan. Also sets the median coordinates of the second rest position as the initial position for the estimator (xx[0] and yy[0]).

  5. Sets the zeta_set flag to indicate that initialization is complete. This prevents init_heading() from being run during future iterations.
  6. Uses ovrd_throttle to control vehicle motion during initialization (0 = no override, -1 = stop, 1 = high throttle)
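As a concrete illustration of steps 4-6, here is a minimal sketch of the final computation, assuming (x1, y1) and (x2, y2) are the median coordinates of the first and second rest positions. Using plain atan2() converted to degrees is an assumption (the text only says "arctan"), the wrapper name is hypothetical, and the array sizes are placeholders.

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

double z_zeta[100], xx[100], yy[100];   /* placeholder sizes */
int zeta_set = 0;

void finish_init_heading(double x1, double y1, double x2, double y2)
{
    z_zeta[0] = atan2(y2 - y1, x2 - x1) * 180.0 / M_PI;  /* initial heading, degrees */
    xx[0] = x2;                  /* estimator starts from the second rest position */
    yy[0] = y2;
    zeta_set = 1;                /* keeps init_heading() from running again */
}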

run_estimator()

To be added.

run_controller()

To be added.

atan_smart()

  1. Builds off of atan2() (in math.h) to provide an angle calculation that takes into account the number of laps completed. Prevents the angle from “resetting” after 360° (especially helpful for maintaining consistent heading angles).
  2. Takes in three arguments: x, y, and reference. x and y are the same as in atan2(). The reference is used to determine the number of laps to account for.
  3. Outputs the angle in degrees. For example, if x = 1 and y = 1, then atan2(1, 1) = π/4 = 45°:

If current heading = 0°: atan_smart(1, 1, 0) = 45°
If current heading = 340°: atan_smart(1, 1, 340) = 405°
If current heading = -250°: atan_smart(1, 1, -250) = -315°
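A minimal sketch of how such a lap-aware arctangent can be written follows. The real atan_smart() is in the project source, so the argument order and the unwrapping loop here are assumptions; the sketch reproduces the three examples above.

#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Lap-aware arctangent: return the angle of (x, y) in degrees, shifted by
 * whole turns so that it lands within 180 degrees of the reference heading. */
double atan_smart_sketch(double x, double y, double reference)
{
    double angle = atan2(y, x) * 180.0 / M_PI;           /* plain atan2 result, degrees */
    while (angle - reference > 180.0)  angle -= 360.0;   /* unwrap toward the */
    while (angle - reference < -180.0) angle += 360.0;   /* reference heading */
    return angle;
}

/* atan_smart_sketch(1, 1, 0)    -> 45
 * atan_smart_sketch(1, 1, 340)  -> 405
 * atan_smart_sketch(1, 1, -250) -> -315 */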

maintain_velocity()

This is the version of maintain_velocity() that must be used on cars 1, 2, and 3; it uses a PID controller to keep the cars moving at a target velocity.
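The PID details live in the project source; the following is only a generic sketch of the idea, with placeholder gains and a hypothetical helper name, not the lab's implementation.

typedef struct { double Kp, Ki, Kd, integral, prev_err; } pid_state;

/* Return a throttle command that drives measured_speed toward target_speed.
   Gains and units are placeholders, not the lab's values. */
int pid_throttle(pid_state *s, double target_speed, double measured_speed, double dt)
{
    double err   = target_speed - measured_speed;
    double deriv = (err - s->prev_err) / dt;
    s->integral += err * dt;
    s->prev_err  = err;
    return (int)(s->Kp * err + s->Ki * s->integral + s->Kd * deriv);
}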

maintain_velocity_PWM()

This is the function used on cars 4, 5, and 6 to maintain a target velocity. It uses a simple linear mapping from desired speed to PWM. It takes two arguments: a pointer to an integer PWM value and an integer giving the target speed; the PWM integer will be set to the value necessary to maintain that speed. This function must be used somewhat differently than the maintain_velocity() function used for cars 1-3: the .tea code needs to know that we are giving it a PWM value directly, rather than a throttle value that needs to be converted to a PWM. We do this by setting the throttle value to -1000 before setting the PWM value. The .tea code in cars 4-6 is written such that a throttle value of -1000 causes it to read the PWM slot in the Brainstem (slot 2) and give that value directly to the motor (after scaling it by a factor of 3). Thus, to use this function, first set throttle to -1000 [set(THROTTLE, -1000, stemRef)], then call maintain_velocity_PWM() to find the correct PWM value for the speed you want, and then write that value into the PWM slot [set(PWM, pwm, stemRef)].
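A usage sketch of that sequence, assuming the project's setget.h context (set(), THROTTLE, PWM, stemRef) and the two-argument maintain_velocity_PWM() described above; the wrapper name and the example speed value are made up for illustration.

/* Hypothetical wrapper: drive cars 4-6 at a target speed via the PWM path. */
void drive_at_speed_HV(int target_speed)
{
    int pwm = 0;

    set(THROTTLE, -1000, stemRef);              /* tell the .tea code to read the PWM slot directly */
    maintain_velocity_PWM(&pwm, target_speed);  /* fills pwm with the value for that speed */
    set(PWM, pwm, stemRef);                     /* .tea code scales this by 3 and drives the motor */
}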

Also note that in some of the older code, the setget.c file has the PWM section of the set() function commented out with a note that it can't be set. Simply un-comment the line and make sure that the correct slot (2) is being written to, and you should be able to set PWM without any trouble.

Tea Language

General note: Make sure you follow the constraints of the .tea language.

Supplied Programs

console.exe

This is one of the two main programs used to manipulate the Brainstem. It can be accessed from C:\Program Files\brainstem\ or from Start->Programs->Acroname. The green dot blinks (along with the green LED on the Brainstem) if it detects a Brainstem unit connected via the serial interface. Use this to compile and load tea files.

Moto.exe

This is the other main program used to manipulate the Brainstem. It can also be accessed from C:\Program Files\brainstem\ or from Start->Programs->Acroname. This program can be used to modify the PWM being sent along two different channels: one to the steering servo and the other to the drive motor. It bypasses all the code on the Brainstem, which means being able to run the motors via Moto.exe does NOT guarantee that the motors will respond to the tea files and/or C-code. Any modifications made to the PWM(s) can be written to EEPROM if so desired. If neither the tea files nor Moto.exe can run the motors, there is either a loose connection somewhere or the Brainstem is in need of a reset.

Compiling and Loading Tea Files

To compile a .tea file, open console.exe and use the command steep “file_name.tea”. Then, to load the resulting .cup file to the Brainstem, connect the serial cable to the PC and attach the Brainstem serial attachment to the other end of the cable; that is, disconnect the serial cable from the matching attachment on the car and connect it to the PC's attachment instead. The green light in console.exe should be flashing. Finally, type load “file_name.cup” 4 slot# to load the file to the Brainstem. For example, steep “final_program.tea” will compile final_program to a .cup file, and load “final_program.cup” 4 11 will load it to slot 11 (the large slot on the Brainstem). Reconnect the serial cable to the car and it should use the updated tea files when run.

Tea File Constraints

Basic information on TEA files (taken from http://www.acroname.com/brainstem/ref/h/Hardware/Moto.html#files):

A BrainStem module stores TEA files in an EEPROM. The Moto 1.0 module contains 12 file slots numbered 0-11. File slots 0-10 are 1K in size. File slot 11 is 16K in size. Programs can run in any of 3 "virtual machine" (VM) process slots numbered 0-2. Each process has a private stack space of 112 bytes. These processes can run concurrently. A 32 byte scratchpad RAM buffer may be used for sharing data between processes.

There is additional space on the EEPROM dedicated to storing reflexes. The Moto 1.0 stores 128 reflex vectors and 128 reflex commands. For simple tasks, it may be possible to use a reflex instead of a TEA program and conserve process slots and/or file slots.

Each process on the Brainstem has a private stack of 112 bytes, which in our case is shared by final_program and all its libraries. If there are too many variables to fit on the stack, the files will still compile but won't be able to run (the process will launch and quit almost immediately).

A way to verify whether the files being loaded will execute is to type launch 4 11 into console.exe. If you see anything unusual, especially a message along the lines of vm exited: 4,2,5,void (for example, final_program.tea should not exit on its own since it has an infinite loop), it may indicate a run-time error. Having too many variables is one such run-time error, so this gives you one way to catch that problem in advance.

To solve this problem, delete any unused variables and optimize your variable types (use char instead of int if a char is sufficient, for example).

Variable types in TEA (taken from http://www.acroname.com/brainstem/ref/h/TEA/types.html): The TEA language currently supports the following types:

void
The void type represents no data. It is typically used to show explicitly that there is no return value from
a routine or no parameters for a routine.

char
The char type represents a signed byte. It has a range of -128 to 127.

unsigned char
The unsigned char type represents an unsigned byte. It has a range of 0 to 255.

int
The int type represents a signed integer comprised of 2 bytes. It has a range of -32768 to 32767.

unsigned int
The unsigned int type represents an unsigned integer comprised of 2 bytes. It has a range of 0 to 65535.

string
The string type is a fixed series of ASCII-encoded text characters.
Warning: The string type is fairly limited and quite expensive in terms of storage in both the program
code space as well as stack space. Care should be taken when using this type.


Tea Programs


Reset_Reflex.tea

  1. Enables the Push Button override to stop the car
  2. Load to 4 0


timer.tea

  1. Runs the timer for the dynamics
  2. Load to 4 1


timer_HV.tea

  1. Use for cars 4, 5, and 6. Runs the timer for the dynamics
  2. Load to 4 1


final_program.tea

  1. Contains all the calls to the dynamics functions
  2. Computes and applies the PWM
  3. Load to 4 11
  4. Compiles in the car libraries


final_program_HV.tea

  1. Use for cars 4, 5, and 6
  2. Contains all the calls to the dynamics functions
  3. Computes and applies the PWM
  4. Load to 4 11
  5. Compiles in the car libraries


final_program_contMapC1.tea

  1. Same as final_program.tea but contains braking functionality and slightly different coefficients in the motor map
  2. Braking uses brake values set in the scratchpad, but uses them as negative torque values in a continuous motor map
  3. Braking uses a different filter from the other dynamics
  4. Load to 4 11, use on cars 1, 2, 3
  5. Currently the most up-to-date tea file containing braking functionality
  6. The braking filter is still under construction but works fine for the car at speeds <= 1000 mm/s


RoboCar_Lib.tea

To be added.


RoboCar_Lib_d6.tea

To be added.


final_program_2.tea

  1. Contains all the calls to the dynamics functions with the braking dynamics added as well.
  2. Computes and applies the PWM for both positive and negative torque-maps (braking added)
  3. Load to 4 11
  4. Compiles in the car libraries


General FAQ

  1. What commands can be used to look at or edit code in the SSH terminal?
    • You can use "cat" to read only, or any of the text editors to edit (vim/emacs).
  2. How do you move your code into the SSH terminal so that you can compile it?
    • Go to the transfer window instead of the client window (the one that has the file-manager-type interface).
  3. Do you have to be linked to the lab's internet network to use the SSH terminal via your laptop, or can you use it on CAEN?
    • The way it's set up right now, the IP addresses you connect to are local to the lab network. You can forward the port and access it from the Internet, but I don't know the details of that. Read up on port forwarding if you think the time spent doing this is worth the convenience of working from home. Note: it is very easy to link to the lab's internet network. Connect like you would to CAEN and use the password given in the wireless troubleshooting section below.
  4. Is the positioning data stored locally on the cars or is it stored on the computer running the simulation?
  5. How do you create a new project on the cars?
    • Copy the entire directory of a working project. Change the project name in makefile and makefile.Linux. Delete all .d files in the aUnix_<project name>/aDebug and aUnix_<project name>/aRelease folders.
    • Note: The .d files specify the source files used to compile the object files. The way make_program.Linux is written now, poorly formatted .d files will cause the make to fail with the "missing separator" error instead of overwriting the .d files.
  6. Where is all the software located on the cars?
    • The source code for all of the projects is located at either /root/projects, /root/project, or /root/Project depending on the particular car. After compiling a program, the executable will be located at /root/Desktop/brainstem/aDebug/aUnix/i686. Alternatively, on cars 3-6 you can just go to /root/i686 and find the compiled program there.


Hardware


Car


Car Power-Up Procedure

Be sure to follow the power up/down procedure posted on the wall (reproduced below):

Power Up – Method 1

  1. Charge Capacitor
    • Leave vehicle power switch OFF
    • Connect bench power supply at 16.5V
    • Wait until you hear a click from the relay before proceeding
  2. Turn on MiniITX
    • Leave power switch OFF – power supply powering MiniITX
    • Run tests on bench as necessary using power supply
  3. Enable battery power
    • Vehicle power switch ON (MiniITX should stay on)
    • Turn off and disconnect bench power supply

Power Up – Method 2

  1. Start without power supply (assumes the battery is charged enough of course)
    • Switch power ON
    • WAIT 60+ seconds to turn on miniITX

Shut Down

  1. Power vehicle down
    • Turn off MiniITX
    • Turn vehicle power switch OFF
  2. Charge batteries
    • Connect charger to vehicle


New car motor settings (cars 4,5,6)

In Moto.exe, the following settings should be set for cars 4,5,6 according to Jeff Lovell:

  • Use PWM_ENC on motor channel 1 and check 'servo mode'
  • Input offset = 0
  • PWM rail = 32736
  • Period = 1500 (150 ms for encoder sampling)
  • PWM freq = 39062 (overridden by servo mode anyway)
  • Disable InvertIn, InvertOut, MonoEnc (if applicable)
  • The slide bar should be set to about 23600 for the car to be stopped, although you probably will not be able to set this number exactly (TO SET THIS SETTING, SEE THE INSTRUCTIONS BELOW!). This should give us very high resolution for acceleration, but not so much for braking. You shouldn't have to move the bar up very far at all to accelerate, and especially not to brake, so don't, or the battery might short. If this happens, the Brainstem settings will sometimes reset and you will have to repeat the process.

Setting the slide bar in Moto.exe using the touch programming features of the new motor controller:

  1. Shut off the Brainstem via the small orange box on the bottom of the right side of the car, near the wheel.
  2. Press and hold the leftmost button on the motor micro controller. The micro controller that controls the motor is located behind the small box--it is a larger orange box with wires coming out and several black buttons.
  3. Turn on the brainstem and when a red light appears on the micro controller release the button.
  4. Move the slide bar on moto.exe all the way to the top and leave it there until a green light appears on the micro controller.
  5. Now move the slide bar all the way to the bottom and leave it there until a green light flashes on the micro controller.
  6. Now move the slide bar to the point where you want the car to be stopped (currently about 23600) and wait for a red light. When this happens the micro controller has been configured and the slide bar has been set.
  7. If you get confused by these instructions, there is an instruction manual for this in the orange Novak motor box. The directions are in Step 5: 1 touch programming.

Note: The motor controller has 3 different throttle profiles, and we use profile #2 (the one without reverse) in cars 4-6. When the one touch programming is performed, the controller reverts to profile #1, so this must be fixed before the motor can be used. To change motor profiles:

  1. Hold down the button on the motor controller described in the previous one touch programming instructions (but start with the controller already on).
  2. Keep holding the button while the 4 lights on the controller cycle through different patterns until all 4 lights are lit simultaneously, at which point you should release the button.
  3. The number of times that the lights now flash indicates the current throttle profile.
  4. Use the button to toggle through the profiles until #2 is selected (if #2 is already selected, you can just wait a few seconds and the controller will return to normal operation).
  5. In case of confusion, these directions can be found under "Throttle Profile Selection" on the "HV Pro Custom Programming & Proper Gear Selection Sheet" in the orange Novak motor box.


Troubleshooting

If the car doesn’t behave correctly (e.g. turns off suddenly, wireless goes down, etc), try shutting it down and then disconnecting the power (make sure it’s not running on battery). Then wait a bit to make sure everything is discharged (a minute should be plenty) before following the power-up procedure again.

When connected to the bench power supply, the cars run on ~16.5V. After connecting the power, you should wait to hear the click of the relay before pressing the main power button. Not waiting for this click could result in a loss of power when transitioning from supply to battery power. If no click is heard, check that the voltage on the supply is around 16.5V since voltages < 16V may not be enough for the relay.

If the motors stop responding, try these options (in order of ease):

  • Check all connections
  • Reconnect the Brainstem serial cable
  • Reconnect the Moto1.0 power cable
  • Reload the .tea files
  • Connect the Brainstem to the PC and try running the motors using Moto.exe
    • If they don’t spin, something’s probably misconfigured or fried…
      • Do the hardware reset and hope nothing’s actually fried. Good luck.
    • If they do spin, it’s just a software configuration error
      • Double check the .tea files and try running old ones
        • Note that each process on the Brainstem can only have so many variables. If you have too many variables, the .tea files will still compile but will not execute correctly. Test this as described in the .tea file constraints section to make sure the files you are loading will execute correctly.
      • If that doesn’t work, try the hardware reset.
  • If all else fails, do a hardware reset as described below… but keep this as a last resort.


Hardware Reset Procedure

(If configuring a brand new Brainstem, skip the resets and go to step 1. below.)

Being able to connect to the Brainstem via serial is quite helpful for doing the reset. If console indicates that the connection is down, try opening console.config in brainstem\aBinary\ and deleting the line that says baudrate = 115200. Reopen console.exe and see if you are able to connect now. Deleting this value resets the console's baudrate, which may help if the Brainstem somehow reset its own baudrate to the factory default.

Perform the hardware reset as described on http://www.acroname.com/brainstem/ref/h/Hardware/physresetmo.html (reproduced below).
Note: there are two ways to do the reset. Try the software option first if you are able to connect to the Brainstem via console.exe.

Software Hardware Reset:
A packet command can issue a hardware reset in place of performing the steps outlined in the physical hardware reset.
For example, to issue a hardware reset to a BrainStem Moto 1.0 one would issue a cmdRESET_HARDWARE packet followed
by a cmdRESET packet from the Console:
4 16 0 0 255
4 24
When successful, the heartbeat status LED (the green one) will flicker rapidly until power is cycled to the BrainStem.
If you cannot connect via serial, do the physical reset instead (requires some disassembly of the car to access the specified pins on the Brainstem).

Physical Hardware Reset
It is possible to corrupt the system settings in a BrainStem module with erroneous commands.
When this occurs, it may no longer be possible to communicate with the module through the serial link,
or through the IIC link if it is in a network. To regain control of the BrainStem module, it is necessary
to perform a hardware reset by connecting a jumper resistor between two pins on the Moto 1.0 board and
turning the power on and off. The steps for doing this are as follows:
1. Turn off power to the module.
2. Disconnect the serial cable from the module (if attached). Disconnect the module from the IIC bus (if attached).
3. Connect a 1K-10K Ohm resistor between the SRX pin of the module's serial port and the SDA pin of the module's IIC port.
(Check the Acroname website for pinout diagram: http://www.acroname.com/brainstem/ref/h/Hardware/Moto.html)
4. Reapply power. The green LED on the module will blink rapidly for a moment, turn off briefly (almost unnoticeable),
then continue to blink rapidly. This indicates a successful hardware reset.
5. Turn off power to the module.

After performing either of these resets, follow these steps to reconfigure the Brainstem. This is also where to begin if configuring a new Brainstem unit.

  1. After power cycling the Brainstem, the baud rate will have reset to its default. So console.exe needs to be set to match. Open console.config in brainstem\aBinary\ and delete the line that says baudrate = 115200.
  2. Now console can access the Brainstem to change its baud rate. Run console.exe and use the command 4 18 4 6 followed by 4 19 to set and save the EEPROM.
  3. Power cycle the Brainstem and you should not be able to access it from console anymore (no flashing green dot in console). Open console.config again and add the line baudrate = 115200. Now console should be able to connect again.
  4. Load and configure the stop button reflex: load the reflex vector using batch “reset_reflex.bag” (this file was generated from reset_reflex.leaf, fyi). For cars 4, 5, and 6, use "reset_reflex_HV.bag".
  5. Then configure brainstem to auto-launch tea slot 0 on startup using the command 4 18 15 0.
  6. Load button catch program in fileslot 0 using load “Reset_Button.cup” 4 0.
  7. Save the EEPROM using 4 19 and power cycle the Brainstem.
  8. Load the timer file to slot 1 using load “timer.cup” 4 1. For cars 4, 5, and 6, use "timer_HV.cup".
  9. Load the main tea program to slot 11 using load “final_program.cup” 4 11. Make sure final_program.tea is up to date and recompiled before loading if any changes have been made to it.
  10. You can verify the reset by using launch 4 11. This should display something like:
    launch 4 11
    vm launch: 4, 1
    < 04:15,02
    Then pressing the stop button on the car should display:
    vm exited: 4,1,1,void
    vm exited: 4,2,1,void
    4:cmd index error
    which means processes 1 and 2 (timer and final_program) have been stopped and no third process was found. Now try running some of the C-files (make sure to reconnect the serial cable to the car first). If the resulting error messages don't quite match, try running the C-programs anyway and see if they work. If not, redo the reset.

If all else fails, it may indicate that the Brainstem has gone bad (unlikely, but it is a possibility). See if a different Brainstem unit works in its place, and if so, the original unit has probably gone bad.


Some Notes About Batteries

The old blue car batteries can be charged using the 14.8V DC Tenergy chargers. These batteries can also be charged using the Thunder Power chargers. When using a Thunder Power charger for one of the blue batteries it should be set to 4 cells, 1 amp.

As of this writing, cars 5 and 6 (as well as all the old cars) are using the blue batteries. According to Jeff Lovell, when these batteries are switched out in the new cars they should be replaced with the new 7.4V Thunder Power batteries. Since the Thunder Power batteries are only half the voltage, they should be installed in series to provide a total voltage of 14.8V (the old blue batteries are 14.8V but are installed in parallel). The old cars may continue to use the blue batteries.


Positioning system

Currently runs on ~11.6V, but must be between 7V and 16V. On the ceiling, all the rows are in parallel and all the units in a given row are in series. So there is a voltage drop down each row. 10-12V should be enough to maintain a reasonably strong signal at the end. Increasing the voltage also increases the strength of reflections, which can cause more interference. This is less of an issue with the foam pads on the ceiling, but still should not be ignored.


Power Up Procedure

The positioning system can be damaged if the voltage being supplied to it drops below 7V for even a fraction of a second. To prevent this, we are using alligator clips to provide essentially instantaneous changes between 0V and the operating voltage. The procedure is as follows:

Power-on:

  1. Disconnect the alligator clips from the power supply
  2. Turn on the supply and make sure the voltage is between 7V – 16V (usually around 11V)
  3. Connect alligator clips once the supply is showing the desired voltage
    • Make sure not to short the clips!

Shut Down:

  1. Disconnect the alligator clips from the power supply
  2. Turn off the supply when done using it


Troubleshooting

If the positioning system doesn’t seem to be working well, try reconnecting power to it before you try anything else. If you hear a buzzing sound from any of the units, this should resolve that as well.

Check/set the configuration of positioning units by connecting the laptop to the unit via a serial cable and running config232. Press “Find Devices” and wait until the ID shows up (may take 20 seconds or so), then “Read EEPROM”. Make necessary changes to the configuration and “Write EEPROM” to save. It’s a good idea to double check the changes by reading the EEPROM again before disconnecting. Configuration details are as follows (in config232):

                          CtlByte   txDelay     xpDelay     IdOvride
Callers (cars):           3         see below   0           15
Transponders (ceiling):   0         255         see below   33

The txDelay and xpDelay should be changed for each car to prevent conflicts. Currently, we’re using values around 20 and 30 for txDelay. The xpDelay values range from 0 – 5 and are spaced to prevent collisions of transponder signals. The diagram below illustrates the setup: http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:XpDelay.PNG

To check ceiling units without removing them, connect the laptop to the open slot on the end unit. Find Devices should bring up all units in that row, and Read EEPROM will get all the configurations at once. The diagram below shows the grid positions corresponding to the device IDs, with IDs in red indicating devices that are prone to error. As of 2/5/09, 12180 is out of commission, while the red "X" in the back row is unit 11424. http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:DeviceID_11032008.jpg

Reconnecting power to the unit(s) may help if they don’t seem to be responding correctly after changing the configuration. I’ve mainly noticed this with the units on the cars after changing txDelay.


Wireless


Installing Drivers for the New D-link Routers

The Linux driver can be downloaded from the D-link website or by clicking here (_ftp://ftp.dlink.com/Wireless/wua1340/Drivers/wua1340_drivers_1040.zip_). After downloading it, go via the command line to the 'Module' folder contained in the download. Make sure the Configure file is executable; if it isn't, use chmod to make it executable. Type './Configure'. It will ask you for the location of the kernel source directories. This will be something like /usr/src/kernels/2.6.15-1.2054_FC5-i586; the last directory's name may be slightly different depending on the car, so you may want to check it. After the configure is finished, type 'make all'. Once this is completed, type 'make install'.

At this point you should be able to go to System->Administration->Network and activate the 'rausb' device. The wireless adapter should now be working. If it isn't working or it can't be enabled, try restarting the car.


Troubleshooting

If the cars are not connected to the wireless network (LNK LED not lit on the D-link USB wireless adapter), perform the following steps in order.

1. Reset the wireless router by detaching and reattaching the power cable.

2. Access the wireless router configuration at 192.168.1.1 using a web browser and ensure:

In Wireless > Basic Wireless Settings:
<SSID> = eecs4315
<Wireless SSID Broadcast> = Enable

In Wireless > Wireless Security:
<Security Mode> = WEP
<WEP Encryption> = 64 bits 10 hex digits
<Default transmit key> = 1
<Key 1> = FAB9000FAB

3. Run iwconfig on each car and make sure rausb0 has the following settings:
<ESSID> = "eecs4315"
<Mode> = Managed
<Frequency> = <broadcast frequency of wireless router>
<Bit Rate> = 54 Mb/s (Note: This is automatically set by the Network Manager)
<Encryption Key> = FAB9-000F-AB

4. If the settings on the cars have to be manually set after each system restart, then edit the following files (see the fix for Problem 8 below if a car's IP address is changing):
/etc/rc.d/rc.local - Post-initialization configuration for iwconfig
/etc/sysconfig/ifcfg-rausb0 - Initialization configuration for the D-link wireless adapter

Note: The network adapters can be started/stopped using /etc/rc.d/init.d/network

5. If there is some network delay, try using different channels; it has been demonstrated that some network delay was caused by wireless interference (crowded channels).


Electronics

Circuitry schematics, etc will go here.


Problems

A list of recurring problems that currently cannot be solved, with possible hypotheses or fixes when discovered.


1. Network Failures (by Mads)

It often happens that the wireless goes down during a run. This has been going on for quite some time now and troubleshooting is described above. However, the fixes above often do not apply, as the network seemingly goes down for no obvious reason. Personally, I think some experiment in EECS causes the trouble, but that is not easy to verify. 2/5/09

(Updated by Dan 7/31/09): The network seems to have stopped having these failures as of a few weeks ago (I don't know why), but it is experiencing periodic lag spikes rather frequently. These are often long enough to cause path following to fail. At one point I updated the firmware of the router and that seemed to improve things for the rest of that day...but when I came back the next day the lag was back. Re-updating the firmware had no effect.


2. SOLVED: Segmentation Fault (by Mads)

I have run car 3 for many weeks without any issues and now it is giving me "Segmentation Fault", "Brainstem Failures", or "ovrd_throttle=-1" upon program execution. I have checked the Brainstem and reloaded the TEA files with no errors. Car 2 has the same TEA file as Car 3 and Car 2 does not have any problems. 2/5/09 Solution: It turned out that a wire had come loose, causing shorts and confusion on board the computer. I re-stripped the wires, put them back in the conductor, and crimped the wires together.


3. Car 2 compilation error (by Jeff D.)

Somehow the clock on car 2 got reset. Because of this, a compilation error sometimes appears saying "'file name' had modification time 'time' in the future", because we are uploading files with timestamps from 2009 but the computer thinks it's 2004 or 2005. I have changed the clock on the Desktop back to 2009, but for some reason this error will often pop up when a new file is uploaded and compiled. If the error comes up when you try to compile, you must use the 'touch *' command to change the timestamps on the new files. The 'touch *' command affects all the files in the folder in which you run it, but it doesn't descend into subfolders, so you must do that yourself (unless you figure out how to make it descend, in which case please post it). For example, if you upload UltraSteer and the compilation error comes up, go into the UltraSteer folder, use 'touch *', then descend into all the folders in UltraSteer and use 'touch *'. Then it should compile fine. I suspect the reason that the problem still persists is that there is another clock on the computer that is still set a few years back. If anybody has any experience with this, please help.


4. SOLVED: Car 3's motor not working (by Jeff D.)

The fuse on the archbridge blew on Car 3, making the motor not receive a voltage. I fixed this by replacing the fuse.


5. Vehicle-to-Computer Delay (by Mads and Jeff D.)

Using the Ca2.c code is now causing a delay in the system in excess of 1 second. This is undesirable for human control. A possible short-term fix is commenting out create_string(*) as used in ca2.c. However, this is NOT a permanent fix, as create_string from vehicle_communication.c is essential for our final demos since it is used for setting up stored_date in ca2.c and allowing the vehicles to receive position and speed information about each other.


6. SOLVED: UltraSteer wouldn't output speed data (by Iman)

I was using UltraSteer on one of the cars and I wanted it to output the speed. I used the same function that outputs speed in PathPlan, "speed = get(SPEED, stemRef);", but it didn't work in UltraSteer and gave me an error (SPEED variable not defined) every time I tried to compile it. It turned out that the setget.h file in the PathPlan source folder had the variable SPEED, but the setget.h file in the UltraSteer source folder had the variable THETA (THETA was in the same place in UltraSteer as SPEED in PathPlan). So in UltraSteer, use "speed = get(THETA, stemRef);" in the program to output speed data.


7. SOLVED: New wireless card stopped working (by Matt)

One of the new wireless cards (used on cars 4, 5, 6) stopped working. I plugged it into my computer, downloaded the drivers, and it reset itself automatically. It now works again on the car; apparently there was some reset that the car couldn't perform.


8. SOLVED: Car 6 IP address was switching (by Matt)

Car 6's IP address was switching between its own and that of car 2, causing communication problems. Fixed by changing /etc/rc.d/rc.local on Car 6. The following should work as a correct file (note that I have ignored the comments that appear at the top of the file; you should leave those alone). Each bullet point should be a separate line in the file, and these are the only lines that should appear in the file other than any comments:

  • touch /var/lock/subsys/local
  • rmmod rt2570
  • modprobe rt2570
  • ifconfig rausb0 up
  • iwlist rausb0 scan
  • iwconfig rausb0 essid eecs4315
  • iwconfig rausb0 rate Auto
  • iwconfig rausb0 mode Managed
  • iwconfig rausb0 ap any
  • iwconfig rausb0 enc FAB9000FAB
  • ifconfig rausb0 192.168.1.106 (THIS LINE MUST BE CHANGED TO THE CORRECT IP ADDRESS FOR WHATEVER CAR YOU ARE WORKING ON)
  • ifconfig rausb0 netmask 255.255.255.0
  • route add default gw 192.168.1.1


9. SOLVED: Car 5 wheels spin when turned on (by Iman)

When I turned car 5 on, the wheels would spin for a quick second, then the car would just stop and the small green light on the front of the MiniITX would blink off and on. It turns out that the Moto settings just needed to be reset; refer to the section "New car motor settings (cars 4,5,6)" above.


10. SOLVED: New Servos wouldn't work when connected to brainstem

The new servos (GWS S03N STD) have to have the wires switched before they will work with the Brainstem. Check car 1, 4, 5, or 6 to see what order the wires must be attached in where they connect to the Brainstem. DO NOT use car 2 or 3 as a reference because they have completely different servos.


11. SOLVED: Estimator not working properly

The estimator gives false and/or very jumpy speed data. When used with Moto.exe, the estimator may simply report a speed of 0 when the wheels are moving slowly.

Solution: The estimator may need to be repositioned. The glue is brittle and the estimator can easily be snapped off the car. After doing this, make sure the estimator and the wheel are clean and dust-free. Now, while using Moto.exe to slowly spin the wheels of the car, move the estimator around near where it was before until you find the spot where Moto.exe picks up a consistent velocity measurement. Use some epoxy glue to glue it into this place and either hold it or clamp it until the glue sets sufficiently (all while using Moto.exe to make sure that you've still got it in the right spot).


12. SOLVED: Car 1 slowly comes to a stop

Car 1 slowly comes to a halt after several minutes despite: trying to output a PWM of 250 (the max), normal on-board computer operation, normal battery voltage reading, and normal voltage going to the H-bridge. The brainstem voltage written to the text file however dips down periodically during the run and eventually reads 0, at which time the car starts slowing down gradually.

Solution: The wire going to the analog 4 pin on the brainstem (where the brainstem gets its voltage reading from) was loose in the blue clamp that connects the wire to the power bus. Note that this is the clamp that connects the wire to the bus, not the clamp after that between the bus and the brainstem. I just took off the clamp and re-clamped the wires.


Status of cars

Car 1: Working.

Car 2: Working

Car 3: Working.

Car 4: Working

Car 5: Seems to work well, but the period cannot be changed in moto.exe. If you figure out a solution, please post.

Car 6: Working


Overhead Vision System


Using the Overhead Vision System

To use the vision system you will first need to tape a target to the top of the car(s) you want to use. Each car has its own unique target, with a number written on the back of it that matches the car's. Try to make the target as flat as possible when you attach it, and make sure that the orientation stripe is straight if you are running a program that gets orientation from the camera system.

In order to work properly, the camera program must be running on all three computers simultaneously. Thus, to start, you will need to run the executable CPS.exe on all of them. There should be a Desktop shortcut for this on all computers. The programs on the different computers do not have to be started in any particular order or with any particular timing. Once the programs are running, they will take a few seconds to try to locate any cars that are on the track. Once they have finished, they will begin tracking and sending data to the cars. You should be able to see the boxes around the targets on the cars. If there are other stray tracking boxes on the screen (or possibly trailing your car), don't worry about it; they are for the other cars that you are not using, and they won't interfere.

In order to reset the tracking system (re-scan the entire area for targets and re-position the tracking boxes appropriately), simply press 'r' on any of the computers while a camera window is selected (doing so while the command prompt window is selected will not work). A reset can also be performed by running reset.exe from another computer. Note that a reset will not work if the patterns are moving. A reset is usually needed after a car is picked up and moved or the camera's view of it is obstructed at some point. In general, if a car isn't moving where you think it should, check the video window and make sure you didn't forget to reset.

If you close and then re-open CPS.exe on one computer while leaving it running on the other, you will need to hit reset on the one that was left running for them both to start working properly. Avoid running two copies of CPS.exe at once on the same computer--this tends to make the cameras angry (see Troubleshooting #4).


A Note About Orientation and Filters

I've done quite a bit of experimentation with switching the estimator's calculated heading for the heading given by the camera system, and all I've been able to conclude is that it usually makes little difference which one you use. Graphical comparison shows that the data from both are fairly similar. Sometimes one seems to work slightly better than the other (the car will follow the path with slightly less weaving in and out), but which one works better is not consistent from program to program. At the moment (7/30/09) most of the programs on the cars are still using the heading from the estimator, and that is not really a problem; however, if a car/program is having trouble following a path, or if you are sick of having to wait for init_heading to run, then you can go to ca2.c and switch the 'z_zeta[est_count]' argument in run_controller to 'angle' to use the camera system's heading. You can also choose to use the positioning values straight from the camera instead of waiting until they have been filtered by run_estimator. To do so, switch 'xx[est_count]' to 'xc' and 'yy[est_count]' to 'yc' in the arguments to run_controller. In the tests I ran, doing so had no apparent effect on path following. If you switch over all three of these arguments, you can safely comment out init_heading. If you do so, make sure that you set zeta_set to 1 somewhere or the program will get stuck trying to initialize.


Troubleshooting

This is a list of possible problems with the vision system and/or camera computers, along with their (usually very simple) solutions.

1. Problem: The position data the car is receiving seems to be wrong / the car isn't following the path.
Solution: Check to see if the system needs a reset. It's easy to forget, but it needs to be done whenever the car is picked up and moved or the camera's view of it is blocked. If that doesn't fix the problem, make sure you are using the correct pattern; the number on the pattern should match the car number. Another thing that can prevent path following from working is if the car's IP address is switching, as in Problem #8.

2. Problem: The copies of CPS.exe on the three computers are not talking to each other.
Solution: First check that COMP_NUM in CPS.h is correct on all computers. If it is, then check the IP addresses of the computers. They have been known to change for no observable reason, and this will keep CPS from communicating properly. You will need to (on all three camera computers) go to CPS.h, update the values of COMP0_IP_ADDRESS, COMP1_IP_ADDRESS, and/or COMP2_IP_ADDRESS, and recompile the program (see Problem #3).

3. Problem: After changing something in the header file, an attempt to compile gives a bunch of errors in the code.
Solution: In Dev C++, a normal compilation only rebuilds the files you have just edited, which can be a problem if the change to your header file affects things in other files. You will need to do a full compilation by hitting Ctrl+F11.

4. Problem: When trying to start the program, you get a message that there was an error setting up the sockets. This will most likely happen after accidentally trying to run two camera programs at once on the same computer.
Solution: If you look in the task manager, it will probably show that a copy of CPS.exe is running even though none is visible. This 'ghost' copy of CPS.exe can't seem to be stopped, and it prevents the program from being started normally. This must be fixed by restarting the computer.

5. Problem: The tracking/pattern location isn't working very well.
Solution: Make sure that all the lights in the lab are on and the blinds are closed; these were the conditions when the reference pictures were taken, so having half of the lights off can keep the tracking from working properly. Also, make sure that the patterns on the cars are as flat as possible.

6. Problem: Tracking fails when a car crosses from one camera frame to another.
Solution: This can happen if a camera is bumped or shifted at all. The problem can be fixed by re-calculating the mapping between frames for the border causing problems.


Hardware

The overhead vision system consists of six cameras mounted on the ceiling around the track. These cameras are connected via FireWire to three desktop PCs, which implement a tracking algorithm to determine the position of the cars. The positions are converted to a global coordinate system and transmitted to the cars via the lab network. Each camera runs on its own PCI card because this allows us to run two cameras per computer without any loss in frame rate.

In the tracking program the computers are numbered from 0-2, from right to left in terms of their positions on the desk.


Software

Rather than having a central computer that oversees all of the tracking, we simply run the tracking independently on each computer. Each computer knows at all times which cars it is responsible for tracking and sending data to. When a car is about to move outside of a computer's frame of vision, that computer sends a message via the lab network to another camera computer, which then takes over responsibility for that car.

The tracking is performed by doing a pixel-by-pixel comparison of the images captured by the camera with images of the patterns taken beforehand. Only central strips of the patterns are compared, in hopes of minimizing the effects of camera distortion.
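To make the idea concrete, here is an illustrative sketch of scoring one candidate location by a sum of absolute differences over a central strip of the pattern. This shows only the concept, not the CPS implementation; the 8-bit grayscale, row-major image layout and the function name are assumptions.

#include <stdlib.h>

/* Compare a stored pattern against the camera frame at candidate top-left
   (x0, y0), using only a central horizontal strip of the pattern.
   Lower score = better match. */
long match_score(const unsigned char *frame, int frame_w,
                 const unsigned char *pattern, int pat_w, int pat_h,
                 int x0, int y0, int strip_h)
{
    long score = 0;
    int strip_top = (pat_h - strip_h) / 2;      /* central strip only */
    for (int r = strip_top; r < strip_top + strip_h; r++)
        for (int c = 0; c < pat_w; c++)
            score += labs((long)frame[(y0 + r) * frame_w + (x0 + c)]
                          - (long)pattern[r * pat_w + c]);
    return score;
}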


reset.exe

This is a small program that can be used to reset the vision system from another computer. It can be found in the camera_programs folder on either camera computer. When you run it from any Windows machine on the lab network, the camera system will reset as if you had hit 'r' on one of the vision system computers. The program will produce no output. If the IP addresses of the camera computers change, the program will have to be edited to account for this change.


The Header File

The header file CPS.h defines many important constants for the Camera Positioning System. Many of these are explained elsewhere, but some miscellaneous (yet important) ones will be clarified here. With the exception of COMP_NUM, all of these constants should be the same for the program on all computers. If you are having trouble compiling after changing something in CPS.h, see Troubleshooting #3.

COMP_NUM: This value tells the code which computer it is running on. This needs to be changed whenever the program is copied from one computer to another.

CAR_IP_ADDRESS_# where '#' is a target number 1-6: These values set the IP addresses that will receive position data corresponding to each target number (written on the back of each target). As of this writing, the target numbers match the car numbers. If this ever needs to be changed, or if a car needs to be prevented from receiving data for some reason, this can be done by changing these values. However, make sure that no car will be receiving positioning data from two targets at once.
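For orientation, these entries might look something like the following in CPS.h. The values shown are placeholders only; COMP_NUM differs per machine, and the real car addresses live in the actual header.

#define COMP_NUM          0                   /* which camera computer this copy runs on */
#define CAR_IP_ADDRESS_1  "192.168.1.101"     /* placeholder: car receiving target 1's data */
#define CAR_IP_ADDRESS_2  "192.168.1.102"     /* placeholder */
/* ... through CAR_IP_ADDRESS_6 */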


Setting up a new computer to run/compile vision system code

Note: At this point I have only set this up for Dev-C++, so those are the instructions that I am going to give here. The process to set it up in Visual Studio is probably similar.

The vision system C++ code utilizes a couple libraries that need to be set up. First is the OpenCV library, used here for pixel-by-pixel image analysis. This can be found at sourceforge (_http://sourceforge.net/projects/opencvlibrary/_). Download and install it.

You will also need the software development kit from Point Grey; this is used for actually grabbing the images from the cameras. You can get this at the Point Grey website (_http://www.ptgrey.com/index.asp_). You will need to either create an account (you may need a camera serial number, which can be found on one of the camera boxes in the cabinet by the chalkboard), or you can log in using an account I created; I used my other email, thedanclark@gmail.com, with the same password as for the cars in the lab. Download and install "FlyCapture v1.X release XX" (we used v1.7, but if there's a newer version it will probably be fine).

Now you will need to set up Dev-Cpp to include these libraries. Open Dev-C++ and go to Tools->Compiler Options. Check the box by where it says "Add these commands to the linker command line" and then add the following commands: -lhighgui -lcv -lcxcore -lcvaux -lpgrflycapture -lwsock32 -lws2_32 (the first 4 are for OpenCV, the next is for the FlyCapture library, and the last two are for the libraries used to set up sockets used to send data to the cars).

Now click on the 'Directories' tab, and under the 'Binaries' sub-tab, add:

C:\Program Files\Point Grey Research\PGR FlyCapture\bin

Then click on the 'Libraries' sub-tab, and add:

C:\Program Files\OpenCV\lib
C:\Program Files\Point Grey Research\PGR FlyCapture\lib

In case you ever want to use OpenCV in C code, go to the 'C includes' sub-tab and add:

C:\Program Files\OpenCV\cxcore\include
C:\Program Files\OpenCV\cv\include
C:\Program Files\OpenCV\otherlibs\highgui
C:\Program Files\OpenCV\cvaux\include

Finally, go to the 'C++ includes' sub-tab and add the four OpenCV ones just listed, as well as:

C:\Program Files\Point Grey Research\PGR FlyCapture\lib
C:\Program Files\Point Grey Research\PGR FlyCapture\include
C:\Program Files\Point Grey Research\PGR FlyCapture\src
In order to get your code to compile, you may also need to add OpenCV to the System Path. Do this by going to 'My Computer->View System Information', click on the 'Advanced' tab, click on 'Environment Variables' and add C:\Program Files\OpenCV\bin to 'Path' under 'System Variables'.

On the second computer I set up for this, when trying to compile code using the FlyCapture Library it complained that '-lpgrflycapture could not be found' or something like that, though the OpenCV libraries were working just fine. I tried about a million things without success until I finally just copied the FlyCapture lib files over to the OpenCV lib folder. A messy solution, but it should work if you run into this problem and can't find a better way to deal with it.

If the computer you're on doesn't have Visual Studio on it (or maybe even if it does), then you're probably getting an error that sounds something like 'Failure to initialize' with error code 0xc01500002, when you try to run your compiled program. To fix this, download and install the Microsoft Visual C++ 2005 Redistributable Package (_http://www.microsoft.com/downloads/details.aspx?familyid=32bc1bee-a3f9-4c13-9c99-220b62a191ee&displaylang=en_). This installs necessary runtime components of some Visual C++ libraries. If you do this and still get the error, then the SP1 version (_http://www.microsoft.com/downloads/details.aspx?FamilyID=200b2fd9-ae1a-4a14-984d-389c36f85647&displaylang=en_) might work instead.

If you do all this and still get runtime errors, then try copying all the .dll files in C:\Program Files\OpenCV\bin straight into the System32 folder (C:\WINDOWS\system32). I didn't have to do this for the first computer I set this up on, but I did for the second, so this step may or may not be necessary. [Make sure you don't modify anything else in this folder].

Now you should be able to compile and run code for the Overhead Vision System. As previously mentioned, I had to do some things for the second computer that I didn't have to do for the first. So, if you have completed all of the above steps and it still isn't working, then I suggest doing some internet research and just playing around with it until you get a setup that works.


Camera Calibration

Before any meaningful position data can be obtained from the cameras, they must be calibrated both intrinsically and extrinsically. There is a lot of software to do this, but the best documented and apparently most reliable is the Caltech Camera Calibration Toolbox for Matlab (_http://www.vision.caltech.edu/bouguetj/calib_doc/_). Also necessary for calibration is a large (about 2ft by 2ft) checkerboard pattern--there should be one in the lab somewhere. The square size on that checkerboard is 34.925mm on each side. You should use the middle 15 by 15 squares for calibration.

For background information about camera calibration and parameters, refer to the "Multiple View Geometry" textbook in the lab. Of particular interest is the information on radial distortion on pp. 189-193.


Intrinsic

To do intrinsic calibration of a camera, a series of about 15-20 calibration images needs to be obtained from that camera. This can be done using the FlyCap.exe software that came with the cameras. Alternatively, you can use a program in the camera_programs folder called pictureTaker, which can be used to take a series of images with a customizable time delay between each shot (this time delay can be changed in the program's header file). Each image should be primarily taken up by the checkerboard. It is important that the checkerboard be rotated and held at many different angles in order to get a good calibration. Examples can be found in the previous calib folders. After obtaining these images, follow the instructions in the first calibration example on the Caltech page to obtain the calibration parameters. The calibration program will put the parameters into a file called 'Calib_Results.m'. These new parameters can be used in the tracking program by renaming the file appropriately ('Calib_Results_CamX.m') and placing it in the program's directory on both camera computers.


Extrinsic

This extrinsic calibration only applies if all the tracked objects are of the same height, i.e., you have a common ground plane.

To perform extrinsic calibration for a camera:

  1. First you should understand the global coordinate system in the lab. The origin of our coordinate system is at the South-West corner of the room. The units are in millimeters. The X-direction is to the East ('down' on the computer screens) and the Y-direction is to the North ('right' on the computer screens).
  2. Place the checkerboard on the ground, at the same height where your tracking patterns will be when mounted on the cars (I prop it on some boxes to get it to the correct height). It's best to place the checkerboard fairly close to the center of the camera frame, however this is pretty flexible as long as you know its distance to the origin. Make sure the sides of the checkerboard are parallel to the x-y axis of your coordinate system.
  3. Choose one corner of the checkerboard to be a reference point and measure its (x,y) position in millimeters using the lab's tape measure. You will need this measurement later.
  4. After you figure out the position of your checkerboard and the orientation is correct, you can take a picture of it using FlyCap.exe or the pictureTaker program. You can find examples of extrinsic calibration images in the calib 7-1-10\extr folder.
  5. Run MATLAB again and read in the intrinsic parameters for the camera you are calibrating using the calibration toolbox's load function (these are the intrinsic parameters saved during the Intrinsic calibration step above). Then click on Comp. Extrinsic in the camera calibration tool and follow the instructions at http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html. When clicking the positions of the corners, the first corner you click should be the one whose position you measured.

  6. Copy the parameters that the calib_gui program gives you to the appropriate Extrinsic_Calib_Data_X.txt file in the CPS\calib_data folder. You will also need to enter the position of the checkerboard pattern that you measured earlier.
  7. You can check that the calibration is accurate by placing some objects on the floor at known positions and checking their positions from the camera system in RECORD_OBJECT_DATA mode, which outputs position data.

Additional miscellany about extrinsic calibration:

In calib.cpp, we implement pixel2position.m as two functions called normalize() and convert(). You also read in all the calibration parameters through the getCalib() function before doing any calculation.

The structure struct calibData contains all the calibration data. fc, cc, kc, and alpha_c are intrinsic calibration parameters, while R_inv and T come from the extrinsic calibration: R_inv is the inverse of the rotation matrix and T is the translation vector.

normalize() is just the C++ version of the MATLAB script normalize.m, and convert() implements the rest of the math in pixel2position.m. Note that you may need to add or subtract the position of your checkerboard in the convert() function. A sketch of the underlying math is shown below.
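
For illustration only, here is a minimal sketch of the pixel-to-ground-plane math that convert() is described as performing. It assumes normalize() has already produced undistorted normalized coordinates (xn, yn), that R_inv and T are stored as in struct calibData, and that the ground plane is Z = 0 in the global frame. The type and function names (Vec3, CalibSketch, groundPoint, rotInv) are made up for this sketch and are not from the lab code.

    // Hypothetical minimal types for this sketch only.
    struct Vec3 { double x, y, z; };
    struct CalibSketch {
        double R_inv[3][3];  // inverse rotation matrix (from extrinsic calibration)
        double T[3];         // translation vector (from extrinsic calibration)
    };

    // Multiply R_inv by a 3-vector.
    static Vec3 rotInv(const CalibSketch &c, const Vec3 &v) {
        Vec3 r;
        r.x = c.R_inv[0][0]*v.x + c.R_inv[0][1]*v.y + c.R_inv[0][2]*v.z;
        r.y = c.R_inv[1][0]*v.x + c.R_inv[1][1]*v.y + c.R_inv[1][2]*v.z;
        r.z = c.R_inv[2][0]*v.x + c.R_inv[2][1]*v.y + c.R_inv[2][2]*v.z;
        return r;
    }

    // Camera model: p_cam = R * P_world + T, so P_world = R_inv * (p_cam - T),
    // with p_cam = lambda * (xn, yn, 1). Choose lambda so that P_world.z == 0,
    // i.e. the point lies on the common ground plane.
    Vec3 groundPoint(const CalibSketch &c, double xn, double yn) {
        Vec3 m  = { xn, yn, 1.0 };
        Vec3 rm = rotInv(c, m);                                   // R_inv * m
        Vec3 t  = { c.T[0], c.T[1], c.T[2] };
        Vec3 rt = rotInv(c, t);                                   // R_inv * T
        double lambda = rt.z / rm.z;                              // zeroes the Z component
        Vec3 P = { lambda*rm.x - rt.x, lambda*rm.y - rt.y, 0.0 };
        // This is where the measured checkerboard position would be added or
        // subtracted, as mentioned above.
        return P;                                                 // millimeters, global frame
    }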

Camera Positioning System (CPS)

Pattern Recognition

Before you do anything with searching or tracking, you need to generate the patterns to be searched for or tracked. What I have been using for the project are "target"-like patterns. There are currently six different patterns: the backgrounds are black or white, and the targets are black and white rings. Below is the list of all the patterns.

  1. B wwwwbb
  2. W bbwwwb
  3. B wwbbbb
  4. B bbwwbb
  5. B bwbwww
  6. W wbbbww

The capital letter for each pattern indicates the background color for each pattern. Generally the black background is slightly better than the white one since the floor is so bright that at some places it looks very much like the white paper [Note: this may change once the black flooring mats have been placed]. The six letters after the capital letter are the sequence of the color of the rings, from outer to inner. There is a template of the target ring (as well as pre-made images of the 6 patterns listed above) in the CPS folder. You can edit the template and then print it out to make new patterns. The patterns above were designed to minimize the intersymbol interference as much as possible. So far these patterns have been working successfully.

Why so many sections

The total area of the lab covered by the vision system is divided into 36 "sections". The divisions between these sections are shown by the horizontal and vertical black lines that can be seen in the camera windows when running CPS. Each pattern has a calibration image for every section. When a car moves into a particular section, the image from that section is used for the tracking.

Theoretically we only need one picture for each pattern, so why are we taking 36 pictures per pattern? Because the brightness in the lab is not uniform and the camera images are distorted by the lenses. It would be very computationally expensive to undistort each frame, so we decided to take pictures of the distorted images and let the computer track those. However, the distortion varies across the image and the brightness changes from place to place. So we use six sections for each camera, which results in 36 sections in total. How the sections are partitioned is related to the performance of the tracking algorithm: the boundaries are set so that the computer will still believe it's the same object when it travels from section to section. In other words, the sections make the object "look" the same in the computer's view.

The diagram below shows the section layout:

http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:New_section_layout_small.png

Record picture

If a pattern stops working in a certain section or if one of the patterns is changed, you may need to re-take the picture of that pattern in the affected sections. The positioning system code contains a function called recordObjectData which allows you to take pictures of the patterns for the computer to use in its searches. To take the pictures:

  1. On the computer that is responsible for the section in question, set RECORD_OBJECT_DATA as defined in the header file CPS.h to 1. If you need to redo pictures for all of the patterns, you can also set RECORD_ALL_PATTERNS to 1. Otherwise it should be 0.
  2. Recompile the camera program and run CPS.exe.
  3. The program will ask you to specify the section number, and if RECORD_ALL_PATTERNS is 0, the car number.
  4. You will now need to specify which camera the section is located in.
  5. The pattern should be mounted on one of the cars (or placed on a box of similar height) so it is at the same height from the floor as it will be when it is being tracked.
  6. Place the pattern in the lab so that it is close to the middle of the correct section.
  7. Using the 'w', 's', 'a', and 'd' keys, position the black square in the camera window around the pattern.
  8. Hit 'r' to record the image. The data will be stored in the folder: .../CPS/calib_data
  9. If RECORD_ALL_PATTERNS is set, the program will cycle through allowing you to take images of all of the patterns for that section. Otherwise, you will be asked to input another section and another car number.
  10. When you are finished, close the program and recompile it with RECORD_OBJECT_DATA set to 0.
  11. Be sure to copy the new image data files from the local calib_data folder to the calib_data folders on the other camera computers, to ensure that all of the copies of CPS have consistent data.

Note that running the system in RECORD_OBJECT_DATA mode is also helpful when checking the accuracy of an extrinsic calibration, as the program will output the position of the upper-left corner of the 'box', in both pixel coordinates and the global coordinate system in millimeters.

Algorithms

Once you have all the pictures you need, you can start searching and tracking. There is one very important parameter that is used throughout the program: BOX_SIZE, the size (in pixels) of the square box that contains your tracking pattern. In the previous version of the algorithm, we compared everything in the box to the pictures taken before and then found the best match. The problem is that, even with so many sections, we still had trouble finding a good match because of the distortion. The updated algorithm only compares the middle stripe of the box, since the middle part suffers the least from distortion. Since we now compare only the middle one fifth of the entire box, this reduces the computational cost while improving performance.

Searching

The search algorithm is implemented in the function void findObject(IplImage *img[], int objData[][NUM_CARS][NUM_FILES], Position loc[], socketData &sendSocket, socketData &receiveSocket). You pass in the images, the object data that you recorded, the locations of all your objects, and the socket data used for communication between the camera computers. Inside this function, initial determines where the search starts and r_x, r_y determine how far it extends. Once those parameters are specified, the function searches the whole region from initial out to (r_x, r_y): it slides a box of size BOX_SIZE pixel by pixel, compares the image in the box with the images taken previously, and finds the best match. As in tracking, only the middle stripe of the box is compared. After searching for all the objects, two arrays contain the locations of the patterns and the differences between the best matches and the recorded pictures. These are passed to the searchExchange function, which communicates with the other camera computers to determine the best overall match. A sketch of the sliding-window comparison is below.
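
As an illustration only, here is a minimal sketch of that sliding-window idea (not the actual findObject code). It assumes an 8-bit grayscale image stored as a plain array; the names (GrayImage, stripeDiff, slideSearch) and the BOX_SIZE value are made up for this sketch, and stripeDiff stands in for the real image-difference routine.

    #include <climits>
    #include <cstdlib>

    const int BOX_SIZE = 60;  // size of the square box, in pixels (illustrative value)

    // Hypothetical grayscale image for this sketch.
    struct GrayImage { int width, height; unsigned char *pixels; };

    // Compare only the middle fifth of the box at (x, y) against a stored
    // reference stripe; a smaller result means a better match.
    int stripeDiff(const GrayImage &img, int x, int y, const unsigned char *refStripe) {
        int stripeH = BOX_SIZE / 5;
        int y0 = y + (BOX_SIZE - stripeH) / 2;
        int diff = 0, i = 0;
        for (int r = y0; r < y0 + stripeH; ++r)
            for (int c = x; c < x + BOX_SIZE; ++c, ++i)
                diff += abs(int(img.pixels[r * img.width + c]) - int(refStripe[i]));
        return diff;
    }

    // Slide the box pixel by pixel over a region starting at (startX, startY)
    // and extending (rangeX, rangeY), keeping the best match.
    void slideSearch(const GrayImage &img, const unsigned char *refStripe,
                     int startX, int startY, int rangeX, int rangeY,
                     int &bestX, int &bestY, int &bestDiff) {
        bestDiff = INT_MAX;
        for (int y = startY; y <= startY + rangeY && y + BOX_SIZE <= img.height; ++y)
            for (int x = startX; x <= startX + rangeX && x + BOX_SIZE <= img.width; ++x) {
                int d = stripeDiff(img, x, y, refStripe);
                if (d < bestDiff) { bestDiff = d; bestX = x; bestY = y; }
            }
    }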

Exchange of search data

The searchExchange function is used to exchange search data between the computers and find out which computer has the best match. This function behaves differently depending on whether COMP_NUM is 0 or not.

The exchange of data works as follows:

  1. Comp 0 sends a message to Comp 1 indicating that it is ready to receive search data. It will repeat sending this message until it receives a response.
  2. Comp 1, after receiving this message, will reply with a message containing data from its own search.
  3. Steps 1 and 2 will repeat for all additional camera computers (as of this writing, there are 3 in total).
  4. Comp 0 uses the results to determine which computers had the best matches for each pattern.
  5. Comp 0 sends a message to each other camera computer to tell them which objects they should be tracking, and these computers begin running the normal tracking algorithm.
  6. Comp 0 notes which objects it should be tracking itself, and begins the normal tracking algorithm.

For more detail, see the code of the searchExchange function in the search.cpp file.

Tracking

Tracking is very similar to searching; it is a local-area search. The size of the local area is defined by the global variable ITERS. The value of ITERS should be determined by how fast your computer is and how fast the object you are trying to track moves. I wrote a MATLAB script to calculate the value for ITERS:

    function iters = iter_calculator(pixel_size, max_speed, fps, delay)
        overall_fps = 1000/(1000/fps + delay);
        speed_per_frame = max_speed / overall_fps;
        iters = speed_per_frame / pixel_size;

The inputs to this function are the size of one pixel in millimeters, the maximum speed of the object in millimeters/second, the frame rate in frames/second, and the delay in milliseconds. The delay accounts for unusual behavior of the computer, network, etc., and should be 0 in normal cases. A worked example is below.
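
As a worked example (the pixel size and frame rate below are made-up values, since they are not given here), a C++ equivalent of the script would be:

    #include <cstdio>

    // Same formula as the iter_calculator MATLAB script above.
    double iterCalculator(double pixel_size_mm, double max_speed_mm_s,
                          double fps, double delay_ms) {
        double overall_fps = 1000.0 / (1000.0 / fps + delay_ms);
        double speed_per_frame = max_speed_mm_s / overall_fps;  // mm moved per frame
        return speed_per_frame / pixel_size_mm;                 // pixels moved per frame
    }

    int main() {
        // Hypothetical values: 3 mm per pixel, 2500 mm/s top speed, 50 FPS, no extra delay.
        printf("ITERS should be at least %.1f\n", iterCalculator(3.0, 2500.0, 50.0, 0.0));
        return 0;  // prints about 16.7
    }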

After you specify ITERS, the program does a search in the same way as the searching function, except that it only searches inside a box of side length 2*ITERS+BOX_SIZE centered at the object's previous location. There needs to be a balance between ITERS and the frame rate: if you increase ITERS you lower the frame rate, since more computation is done per frame, while if the frame rate is low you need a larger ITERS or you may not be able to track the object. So the overall goal is to maximize the product of frame rate and ITERS; this can be done experimentally.

Orientation

At the end of the tracking function, we run the search for the orientation. After you make the symmetric target patterns, you need to put a "dot" somewhere on each one outside the outer circle. The algorithm searches in a bigger box, centered at the center of the box that contains the symmetric target, to find the dot. The location of this dot is then used to determine the orientation of the pattern.

To improve the program's ability to find the "dot", a stripe pointing away from the center of the pattern can actually be used instead of a dot. The algorithm will still search for a square, and will simply find some arbitrary point along the stripe--but it doesn't matter which part of the stripe is found since the entire stripe is along the same orientation. Note 1: The stripe should be in the opposite color of the background of the pattern. Note 2: The stripe should be pointing straight away from the center of the pattern. Note 3: The stripe should not be too close to the pattern since it may interfere with the tracking. After the program figures out where the dot/stripe is, it will convert the position of the dot/stripe to orientation and send it to the vehicle (More on this in later sections).
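
As a small illustration of that conversion (the zero direction and sign convention here are assumptions for the sketch, not taken from the actual code):

    #include <cmath>

    // Convert the found dot/stripe position into a heading relative to the
    // pattern's center. Returns radians in (-pi, pi].
    double orientationFromDot(double centerX, double centerY,
                              double dotX, double dotY) {
        return atan2(dotY - centerY, dotX - centerX);
    }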

Note: At this point (8/27/2010) none of the demos (that I am aware of) actually use the orientation data given by the camera system.

What to send to the vehicle

After the tracking and the search for the dot, we are ready to send data to the vehicle. Note that up to this point, all the pixel positions mentioned correspond to the upper-left corner of the box you see on the screen; we keep this convention unless otherwise mentioned. However, we send the position of the center of the box, instead of the upper-left corner, to the cars. After we get the center position, we pass it into the convert() function to get its global position in millimeters. You may refer to the Extrinsic section to see what this function does.

Then we will send the message, in the form of a character string, using sockets. The string will contain the position of the object in millimeters and also its heading information.

Switching Mechanism

Each camera has some overlap with the other cameras, so that when an object is leaving one camera it can be switched to another camera that has a better view of it. There are two cases: a switch within one computer (i.e., between cameras 0 and 1, or between cameras 4 and 5), and a switch between computers. The latter case is slightly more complicated since we need to send a message from one computer to the other, which may involve transmission delay. There are a total of 7 borders between the cameras, and there is a mapping in both directions for each border. We name each of these transition mappings in the following way:

http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:Transition_numbers.png

The data for these mappings can be found in the CPS/calib_data/ folder. There are 14 different files (1 for each transition), named with the format compX_transitionY.txt.

Mapping

The mapping between two cameras will have to be re-calculated whenever the cameras are moved or bumped. If the tracking messes up when a car travels from one camera frame to another, then the camera transition mapping is usually at fault and will have to be re-measured. We draw grids on the images shown on the screen and the lines at the edges between cameras indicate at which point we will switch the pattern from one camera to another. The following is an example of how to calculate this mapping:

1. Say you want to recalculate the mapping of the transition from camera 5 to camera 3. From the above diagram, we can see that this is comp 2, transition 3.

2. You need to place 3 reference objects on the floor at the point of the transition: one near each side, and one in the middle. The objects should be as close to the black transition line in the camera window as possible. See the image below for an example of where to place the objects.

http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:Mapping_measurement_new.png

Note that the objects are placed along the line in the camera frame that the object would be *leaving*. If we wanted to calculate the mapping from camera 3 to camera 5, we would place the reference objects along the transition line at the bottom of camera 3.

3. The objects should now be visible from both cameras involved in the transition. We need to determine their exact pixel positions in both frames in order to calculate the mapping. To do this, re-compile the camera system in RECORD_OBJECT_DATA mode. This will cause the program to output the position of a move-able box in the camera frame, in both pixel coordinates and in the global coordinate system. Move the box over the objects to get their positions.

4. Using the 6 pixel coordinate locations you have obtained (2 positions for each of the three objects--one from each camera), run the script border_mapping.m located in the CPS\matlab scripts\ folder.

5. After following the script's instructions, it will output the calculated mapping data. Simply copy/paste this data into the proper file in the CPS\calib_data\ folder. To ensure that all 3 computers have the same data, copy the new mapping data file to the other camera computers as well. (An illustrative sketch of this kind of linear fit is shown after step 6.)

6. Be sure to test the new mapping by driving one of the cars through several points along the border in question.
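
Since the mapping is described below as a set of linear functions, one plausible form is an affine map between the two camera frames, which three point correspondences are exactly enough to determine. The following is only an illustrative sketch of such a fit, not the actual border_mapping.m script; all names are made up.

    // Fit an affine map  u' = a*u + b*v + c,  v' = d*u + e*v + f
    // from exactly three point correspondences (u[i], v[i]) -> (up[i], vp[i]).
    bool fitAffine(const double u[3], const double v[3],
                   const double up[3], const double vp[3],
                   double abc[3], double def[3]) {
        // Both rows of the map share the same 3x3 system matrix M = [u v 1].
        double det = u[0]*(v[1] - v[2]) - v[0]*(u[1] - u[2]) + (u[1]*v[2] - u[2]*v[1]);
        if (det == 0.0) return false;  // degenerate (collinear) reference points

        // Cramer's rule, solving M * x = rhs for each output coordinate.
        auto solve = [&](const double rhs[3], double out[3]) {
            out[0] = (rhs[0]*(v[1] - v[2]) - v[0]*(rhs[1] - rhs[2])
                      + (rhs[1]*v[2] - rhs[2]*v[1])) / det;
            out[1] = (u[0]*(rhs[1] - rhs[2]) - rhs[0]*(u[1] - u[2])
                      + (u[1]*rhs[2] - u[2]*rhs[1])) / det;
            out[2] = (u[0]*(v[1]*rhs[2] - v[2]*rhs[1])
                      - v[0]*(u[1]*rhs[2] - u[2]*rhs[1])
                      + rhs[0]*(u[1]*v[2] - u[2]*v[1])) / det;
        };
        solve(up, abc);   // a, b, c
        solve(vp, def);   // d, e, f
        return true;
    }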

When to switch

In the program, the parameters that define when to switch are CAM0_VERT, CAM0_HOR, CAM1_VERT, CAM1_HOR, and CAM1_LOWER_HOR. These parameters are set in CPS.h, and they are different for each of the three camera computers. They are chosen so that if the image is very distorted on one camera, the object will be switched to the other camera, where the distortion is less serious. So, when the object passes these lines it will be switched to the other camera; oscillation between the two cameras might occur if you don't choose the parameters carefully. Also note that even if you calculate the mapping for one direction, you will still need to do the inverse mapping separately. Unfortunately, you cannot simply reverse the linear functions to get the inverse map; you will need to follow the procedure described above again for each switch direction.

Switch between computers

At this point, you have everything that is needed for switching between the two cameras on the same computer. Switching between computers is slightly different because you need to send a message to the other computer. We use sockets to send the message. The message is a character string with the following format: " Switch object_number camera_number projected_x_coordinate projected_y_coordinate " (note the whitespace at the beginning and end of the string). The first field is always Switch, which is an enumerator with the value 1. Next is the object number. The camera number is sent as a negative number; you need to increment it by one and then take the inverse to get the new camera number. The last two fields are the projected pixel positions in the new camera. Each coordinate has two parts: the mapping part and the velocity-projection part. The mapping part is as described above, and the projection part is based on the object's current velocity in the previous camera. Since there is transmission delay and processing delay, the projection takes that delay into account when predicting the new position. Note that if multiple objects are switching between computers, one message is sent for each of them, so up to six messages may be sent in a single frame; this doesn't cause any problems since sending a message through a socket is very fast. A sketch of building and parsing such a message is below.
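
For illustration, here is a minimal sketch of composing and parsing a string in the format described above. The field order and the enumerator value (Switch = 1) come from the text; the function names and buffer handling are made up, the actual socket send/receive calls are omitted, and if the literal word "Switch" is sent instead of its numeric value the format strings would need adjusting.

    #include <cstdio>

    enum MessageType { Switch = 1 };  // enumerator value as described above

    // Build " Switch object camera proj_x proj_y " (note leading/trailing spaces).
    int buildSwitchMessage(char *buf, size_t bufSize, int objectNumber,
                           int cameraNumber, int projectedX, int projectedY) {
        return snprintf(buf, bufSize, " %d %d %d %d %d ",
                        (int)Switch, objectNumber, cameraNumber,
                        projectedX, projectedY);
    }

    // Parse a received message of the same format; returns true on success.
    bool parseSwitchMessage(const char *buf, int &objectNumber, int &cameraNumber,
                            int &projectedX, int &projectedY) {
        int type = 0;
        int n = sscanf(buf, " %d %d %d %d %d", &type, &objectNumber,
                       &cameraNumber, &projectedX, &projectedY);
        return n == 5 && type == (int)Switch;
    }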

Other important functions in the program

void getCalib(calibData data[])

This is the function you should call at the beginning of the program. data is an array of calibData structures. The function reads in all the intrinsic and extrinsic calibration data for all four cameras. However, before you run it you need to change the working directory, since all of that data is stored in the CPS/calib_data folder. This can be done by simply calling the changeDirectory() function.

int imgDiff(IplImage *img, int data[][NUM_CARS][NUM_FILES], int box, Position &loc, int k, int section)

This function is used in the searching and tracking algorithms. You pass in the image (in OpenCV format), the picture data that was recorded and read in, the size of the box that contains the object (usually BOX_SIZE), the object's position structure, the object index k, and the section it's in. The function returns the difference between the image in the box at loc and the recorded image.

setup(IplImage *img[],FlyCaptureContext contexts[])

This function sets up the connection between the FlyCapture images and the OpenCV images the first time it is called. It also sets up the windows used for displaying images on your screen. Since we are using two cameras on each computer, two windows are displayed per computer; that's why we pass an array of OpenCV images and an array of FlyCapture contexts to the function. The cameras send images to the computer at a constant 60 frames per second. This frame rate is set in the setup function, but it can also be modified manually with the FlyCapture software at "C:\Program Files\Point Grey Research\PGR FlyCapture\bin\FlyCap.exe". However, due to limited computation power and the complexity of the algorithm, the program usually iterates at a slower rate: the frame rate of the program (usually somewhere between 30 and 58 FPS) varies with the value of ITERS and the number of objects currently being tracked. Every time the program finishes processing the previous image, it grabs a new image from the camera; images that are not grabbed by the CPS program are simply dropped.

void dotSearch (IplImage *img, int objData[][NUM_CARS][NUM_FILES], Position &loc, int k, int section, IplImage *img2)

It's best to understand what this function does graphically. When you run the program, you will see several boxes drawn around each object being tracked. The inner box tells you where the object is, while the outer box marks the boundary of the range in which we search for the dot/stripe that surrounds the pattern. The tiny box is the location of the orientation dot/stripe. The function searches in units of a small box: it slides this small window through all possible positions inside the outer box but outside the inner box. It's essentially searching along the four sides of the big box, which is why there are 4 for-loops. Because the dot has a different color for different patterns (white for patterns 1, 3, 4, and 5; black for patterns 2 and 6), a bias is added when searching for the dot, with a different bias value for each dot color. The function used to do the searching is int boxAvg(IplImage *img, int x, int y, int box), which simply returns the average pixel brightness of the box located at (x, y); a sketch is shown below. After searching the entire area, dotSearch(...) will draw the dot at the position that looks most like the orientation dot/stripe.
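
As a rough sketch of what boxAvg presumably computes, here is a version for an 8-bit, single-channel IplImage from the legacy OpenCV C API. This is not the actual lab code, and it assumes the caller keeps the box inside the image bounds.

    #include <opencv/cv.h>  // legacy OpenCV C API (IplImage)

    // Average pixel brightness of the box whose upper-left corner is (x, y).
    int boxAvgSketch(IplImage *img, int x, int y, int box) {
        long sum = 0;
        for (int row = y; row < y + box; ++row) {
            const unsigned char *p =
                (const unsigned char *)(img->imageData + row * img->widthStep);
            for (int col = x; col < x + box; ++col)
                sum += p[col];
        }
        return (int)(sum / (box * box));
    }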

Where is all the stuff

On the desktop of the camera computers, the folder CPS contains all the stuff for the program:

  • calib 7-1-10 : Contains data from the most recent intrinsic and extrinsic camera calibrations.
  • CPS : The folder for the main tracking program.
    • CPS/calib_data : contains the text files that we recorded the pattern images in, as well as the data from the intrinsic and extrinsic calibrations.
  • matlab scripts : contains several useful matlab scripts for the camera system.
  • old calib : Contains intrinsic and extrinsic calibration data from older calibrations.
  • patterns&template : contains the patterns that can be printed for tracking and the pattern template.
  • pictureTaker : Contains a program for taking pictures with the camera system. Useful for doing camera calibration.
  • reset: contains the program to reset the camera system remotely.
  • test programs: contains various programs useful for testing the functionality of the cameras and the network.

Everything you need to know about filtering

Linear filters

Linear filtering is nothing but multiplication and addition. In MATLAB, type help filter to see the filtering algorithm: once you have the input data, you just multiply the samples by the corresponding filter coefficients and add them up to get the output. Where do you get the coefficients? Type help firpm in MATLAB; it's a long help page, but you only need the example at the bottom. Since our data is not changing very fast, we want a lowpass filter to reject high-frequency noise:

    % Example of a length 31 lowpass filter:
    h = firpm(30,[0 .1 .2 .5]*2,[1 1 0 0]);

The first input to this function is the length of the filter minus 1. The longer the filter, the less it distorts your signal, but the longer the delay. In our project the generally acceptable delay is about 100 ms. The delay of a lowpass filter can be roughly calculated as the length of the filter divided by two, times your sample duration. For the filter implemented on the computer, a filter of length 10 works. For a filter on the car it's impossible to stay within that delay: the sample duration of the program on the car is already 100 ms, so no useful filter can have a delay under 100 ms. The second and third inputs to firpm correspond to each other: the example above says the gain should be 1 over the band [0 .1]*2*pi and 0 over [.2 .5]*2*pi, and the transition band in between is "don't care." If you know the specific frequency content of your true data, you can shrink the passband (the interval with gain 1); otherwise keep it as is. Note that if the passband is very small and you also want a short delay, the filtered signal will usually be distorted. Once you have decided on a filter, just plug the coefficients from firpm into your own code; it's one line of multiply-and-add operations in C++, as sketched below.
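
A minimal sketch of that multiply-and-add, assuming the coefficients from firpm have been copied into a C array and a history of the most recent samples is kept (the names and the length are illustrative):

    const int FILTER_LEN = 10;         // e.g. a length-10 filter for the computer-side data
    double coeffs[FILTER_LEN];         // paste the coefficients from firpm here
    double history[FILTER_LEN] = {0};  // most recent sample first

    // Push a new sample into the history and return the filtered output:
    // y[n] = sum_k coeffs[k] * x[n - k]
    double firFilter(double newSample) {
        for (int k = FILTER_LEN - 1; k > 0; --k)
            history[k] = history[k - 1];   // shift older samples back
        history[0] = newSample;

        double y = 0.0;
        for (int k = 0; k < FILTER_LEN; ++k)
            y += coeffs[k] * history[k];
        return y;
    }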

Nonlinear filters

The only nonlinear filter I have been using is the median filter. The median filter applies a window of size N (N >= 3) to your input data: it picks the median value in the current window as the output and then slides the window by 1. So the delay of the median filter is N-1. It can eliminate salt-and-pepper noise in your data. We apply a median filter to the velocity calculation on the computer and it works fine: in the program, we just sort the values in the velocity history plus the newly calculated one, then take the median of those and use it as the current value. Note that every value in the history is a newly calculated value, not a filtered one. A sketch is below.
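
A minimal sketch of that median filter over the raw (unfiltered) velocity history, with an illustrative window size:

    #include <algorithm>

    const int WINDOW = 5;             // window size N (illustrative; N >= 3)
    double rawHistory[WINDOW] = {0};  // always stores raw, unfiltered velocities

    // Push the newly calculated (raw) velocity and return the median of the window.
    double medianFilter(double newVelocity) {
        for (int k = WINDOW - 1; k > 0; --k)
            rawHistory[k] = rawHistory[k - 1];
        rawHistory[0] = newVelocity;

        double sorted[WINDOW];
        std::copy(rawHistory, rawHistory + WINDOW, sorted);
        std::sort(sorted, sorted + WINDOW);
        return sorted[WINDOW / 2];    // the median becomes the filtered value
    }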

Methods

Finding Motor Maps

The following links are the matlab files I used to come up with the motor map for car 4:
car 4 motor map data (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car4motormapdata.m_)
car 4 motor map data analysis (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car4motormapdataanalysis.m_)
(These same files can be used for cars 5 and 6; you just need to change the variables from "car4_PWM#_t#" to "car5_PWM#_t#" or "car6_PWM#_t#" in both the data and data analysis files.)

1. Make sure the .tea code on the car takes an input and assigns it to the PWM

  • For example: thr=aPad_ReadInt(8); – this reads the value in memory slot 8 and assigns it to the variable thr
    SetMotorPWM(thr*x); – x is a scaling constant chosen so that a thr value of 100 makes the car go at its maximum speed (~2.5 m/s)

2. From a C program, give an input of 10 different PWM values (10, 20, 30, 40, 50, 60, 70, 80, 90, 100)
  • You should run at least 5 trials for each PWM value (50 trials total)
  • Make sure the C program being used writes to the memory slot that the .tea code reads; in the example above, memory slot 8.

3. Record and analyze the data.

  • The following is only one way to analyze the data; another method may be used:
  1. Plot speed v time data in matlab
    • plot((1:length(car5_PWM10_t1)),car5_PWM10_t1(1:length(car5_PWM10_t1)));
    • -- car5_PWM10_t1 is a variable that contains the speed data for the first trial at a PWM of 10 for car 5.
  2. Fit a polynomial to the data
    • data10_t1=car5_PWM10_t1(1:t);
    • time10_t1=1:(length(car5_PWM10_t1(1:t)));
    • -- t is a number that indicates the time at which the speed stopped increasing
    • [p,s]=polyfit(time10_t1',data10_t1,4);

    • -- the number 4 in the line above is the degree of the polynomial (4th degree)
    • -- A lower degree polynomial may be used if a 4th degree doesn't fit the data well
    • y10_t1=polyval(p,time10_t1'); -- evaluate the fitted polynomial to get y10_t1
    • plot(time10_t1,y10_t1); -- this line plots the polynomial that fits the data
    • -- Plot the polynomial and the data on the same graph to confirm that it is a good fit
  3. Calculate and plot torque
    • plot(y10_t1(2:length(y10_t1)),diff(y10_t1)*w*.033/10); -- w is the weight of the vehicle in kg
    • [p10_t1,s]=polyfit((y10_t1(2:length(y10_t1))),diff(y10_t1)*w*.033/10,2); -- w is the weight of the vehicle in kg

    • y=((y10_t1(2:length(y10_t1))).^2).*p10_t1(1)+(y10_t1(2:length(y10_t1))).*p10_t1(2)+p10_t1(3);
    • plot(y10_t1(2:length(y10_t1)),y,'r'); -- this is the torque curve
  4. Average all the torque curves
    • x10=[1:m]; -- m is the maximum speed the vehicle reaches with a PWM of 10 (m will differ for the other PWM values)

    • p10 = (p10_t1 + p10_t2 + p10_t3 + p10_t4 + p10_t5)/5;
    • y10 = (x10.^2).*p10(1)+x10.*p10(2)+p10(3);
    • plot(x10,y10,'k');
    • -- the 10 after x, y, and p represent the data for a PWM of 10. Do this for all the other PWM values (i.e. for PWM of 20 use x20,y20,p20)
  5. Come up with the PWM equation (motor map)
    • T10 = [x10; y10; ones(1,length(y10))]';

    • PWM10 = ones(1,length(y10))'*10;
    • PWM = [PWM10; PWM20; PWM30; PWM40; PWM50; PWM60; PWM70; PWM80; PWM90; PWM100];

    • T = [T10; T20; T30; T40; T50; T60; T70; T80; T90; T100];

    • A = inv(T' * T)*(T' * PWM); -- the three numbers in this matrix are the constants a, b, and c in the equation: PWM=av+bt+c
    • -- again the 10 after T and PWM represent the data for a PWM of 10. Do this for all the other PWM values.

note: be sure of the units in all the programs and calculations

Finding dynamic parameters ("a" and "b" parameters)

The following links are the matlab files that I used to solve for the dynamic parameters for car 2:
car 2 parameters data (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car2_parameters_data.m_)
car 2 parameters data analysis (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car2_parameters_analysis.m_)
(These same files can be used for cars 1 and 3; you just need to change the variables from "car2_trq#_#" to "car1_trq#_#" or "car3_trq#_#" in both the data and data analysis files.)

  1. The car that you are finding the "a" and "b" parameters for must first have the motor map implemented on it.
  2. Make sure you are using a program that gives a torque input and a speed output (i.e. PathPlan, ManualDrive).
  3. Run at least 5 trials at each torque input (10, 20, 30, 40, 50, 60, 70, 80, 90, 100).
  4. Each trial consists of:
    • Keeping the torque constant until the car reaches its maximum speed (approx. 2.5 m/s).
    • Let the car run at the max speed for a few seconds and then give a torque input of 0.
    • Wait for the car to come to a complete stop and then end the program and record the data.
  5. Analyze the data. The following is just an example of how to analyze the data, but it could be done a different way if desired:
    • Put the data into Matlab and assign a variable to each trial dataset (e.g., trq20_1 represents a torque of 20, trial #1).
    • Plot the data using the command: plot(1:length(trq20_1),trq20_1(:,1)) – trq20_1 is just an example of a variable name, another can be used in place of it
    • Fit a polynomial to the increasing linear portion of the graph using the command: P20_1 = polyfit((1:(y-x+1))',trq20_1(x:y),1) – x is the index where the linear part of the graph starts, y is the index where it ends
    • Take the average of the linear fits of the trials using the following command: P20_ave=(P20_1+P20_2+P20_3+P20_4+P20_5)/5
    • After you get an average value for each torque input (10, 20, 30, 40, 50, 60, 70, 80, 90, 100), fit a line to them using the following commands:
      trq_val = [10 20 30 40 50 60 70 80 90 100];
      P_ave = [P10_ave(1) P20_ave(1) P30_ave(1) P40_ave(1) P50_ave(1) P60_ave(1) P70_ave(1) P80_ave(1) P90_ave(1) P100_ave(1)];
      plot(trq_val,P_ave,'.') – plots the average values
      p = polyfit(trq_val,P_ave,1) – fits a line to these average values
    • The 1st number in the variable p is your "a" parameter.
    • To get the parameter "b", follow the same procedure starting from the 3rd bullet in step #5, but this time fit a line to the decreasing part of the graph.

note: "a" should be a positive number and "b" should be a negative number

Finding Parameters for maintain_velocity_PWM()

The maintain_velocity_PWM() function for the new cars (cars 4-6) uses a linear mapping of desired speed to PWM. The parameters of this linear function will likely have to be re-measured after certain hardware changes to the cars. The process to find the parameters is fairly simple and can use data acquired from the motor map trials.

1. It is necessary to find the relationship between PWM and resultant speed for various speeds/PWMs of the car. The data used to calculate motor maps can be used for this. Use the data to estimate the speed that the cars level off to for each constant PWM value given to them in the trials (10, 20, 30...).

2. Now use Matlab to calculate a linear fit to your data. Begin by entering the data into Matlab. Your X data will be the speeds reached by the cars, and your Y data will be the corresponding PWM values:
   Xdata = [speed1 speed2 speed3 ...];
   Ydata = [PWM1 PWM2 PWM3 ...];
3. Now find a linear fit to the data: p = polyfit(Xdata, Ydata, 1)
The parameters given by Matlab here are the ones that can be used in maintain_velocity_PWM() for that car; a sketch of how they might be used follows step 4.

4. If you want to check these parameters graphically (recommended), this can be done in the following way: Xeval = linspace(0, 2500, 2501);
Yeval = polyval(p, Xeval);
plot(Xdata, Ydata, '.')
hold on
plot(Xeval, Yeval, 'r')
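
For reference, a minimal sketch of how the fitted parameters might be used inside maintain_velocity_PWM() on the car. Only the linear speed-to-PWM mapping is taken from the text; the function name, structure, and the placeholder constants below are assumptions.

    // Fitted line from the polyfit step above: PWM = p(1)*speed + p(2).
    // The values below are placeholders; use the output of polyfit for your car.
    const double P1 = 0.5;    // slope (placeholder)
    const double P2 = 10.0;   // offset (placeholder)

    // Map a desired speed (same units as the trial data, mm/s) to a PWM command.
    double maintainVelocityPWMSketch(double desiredSpeed) {
        return P1 * desiredSpeed + P2;
    }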

Car Dynamics

Updated summer 2009

Car Motor Maps

  • v = speed, t = torque (a short usage sketch follows this list)
  • Car 1: PWM = .58v + 3000t + 93
  • Car 2: PWM = .63v + 3000t + 102
  • Car 3: PWM = .62v + 3000t + 107
  • Car 4: PWM = .45v + 444t - 9
  • Car 5: PWM = .39v + 399t - 7.4
  • Car 6: PWM = .42v + 450t - 6.5
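
As a usage illustration, here is how one of these maps could be applied in code, using car 4's constants from the table above. The variable names are made up, and the units are assumed to match those used when the map was fit (speed in mm/s, torque in the same units used during the motor map trials).

    // Car 4 motor map from the table above: PWM = 0.45*v + 444*t - 9
    const double A_V = 0.45;   // speed coefficient
    const double B_T = 444.0;  // torque coefficient
    const double C_0 = -9.0;   // constant offset

    // Compute the PWM command for a requested torque at the current speed.
    double motorMapPWM(double speed, double torque) {
        return A_V * speed + B_T * torque + C_0;
    }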

Car Dynamic Parameters

  • x = at + b
  • x = acceleration, t = torque
  • Car 1: N/A (4.00t - 30 seems to work decently in combination with feedback-Jeff)
  • Car 2: x = 3.77t - 55
  • Car 3: x = 5.07t - 122
  • Car 4: x = 6.43t - 133
  • Car 5: x = 6.81t - 114
  • Car 6: x = 6.95t - 109

note the unit for these equations are mm/s^2

Demos

3 Car Autonomous

Description: 3 cars travel on different printed lab circles that imitate the geometry of a traffic roundabout. The demo uses the collision avoidance and autonomous cruise control algorithms. Currently the demo is set up to run with car 1 on the outermost circle, car 2 on the smallest circle, and car 3 on the intermediate circle.

Steps to run:

  1. Connect to cars via SSH
    • Open Secure File Transfer Client on Desktop. A window to connect to the cars will appear. On the toolbar click profiles and select car 1 (it may be labeled as MiniITX_101).
    • A window will appear as the computer tries to connect to the car. When connected it will prompt for a password, which is hal9000. If this does not happen after about 5 seconds, then there is a problem with the wireless--usually either the router is down, the car's wireless is down, or you are trying to connect to the wrong car. If the car's wireless isn't working (the D-Link flashes on and off when it is working and ready to connect), make sure the car is on. It takes several minutes for the D-Link to activate after the car has been turned on and the ITX powered up. If this doesn't do the trick then power cycle the car and try again.
  2. Open up a new file transfer and transfer files to cars
    • If the connection window is a SSH secure shell then go to window and select New File Transfer. A SSH file transfer window will appear.
    • In the right window, click "projects," or some obvious variation of that name (Project, project, Projects), to access the correct file transfer location on the cars. The ca2 file to upload is located under Desktop/Project_2009/Final_demos/3_car_autonomous_roundabout: for car 1 enter "large_car" and select ca2 to upload, for car 2 select "small_car," and for car 3 select "inter_car." Then click the Up arrow in the toolbar to transfer the file; the transfer progress can be seen in the bottom window.
  3. Compile the code
    • In the SSH secure file transfer window go to "window" and click New Terminal. Type "cd /root/Desktop/brainstem/aProject/ca2" to enter the location to compile. Type "make clean; make new" to compile. Once this is complete type "cd /root/Desktop/brainstem/aDebug/aUnix/i686" to enter the location to run the code. Repeat the last steps on each car before going further.
  4. Place cars
    • All the cars travel counter-clockwise around the circle. Place Car 1 on the furthest end from the door of the big circle, car 2 on the closest end to the door of the smallest circle, and car 3 on the closest end to the door of the intermediate circle (which intersects with the large circle).
  5. Start camera system
    • Double-click on "Shortcut to CPS" on the desktops of both computers furthest from the door to activate the camera positioning system.
    • Note: all the main lights in the lab must be turned on for the system to run properly.
  6. Run the program
    • Type "./ca2 " followed by the car's path file name, then press enter to start running the demo. The filename for car 1 is "bigcircle_9points.txt", car2 is "60percent.txt", and car 3 is "inter_11points.txt.
    • To stop the cars, highlight its corresponding terminal and press spacebar.
    • The car's data is saved until the next run under /root/Desktop/brainstem/aDebug/aUnix/i686/ca_output. The relevant files are ca, acc, and terminal_file.

2 Car Semi-autonomous

Steps to Run Program:

  1. Set Up Autonomous Vehicle:
    • Turn on car 4.
    • Go to the project directory and then to the folder ca2_semiauto_demo
    • Compile using the command "make clean; make new"
    • Go to i686, the directory where the executable ca2 was compiled
  2. Set Up Human Controlled Vehicle:
    • Turn on one of the old cars, numbers 1-3.
    • Go to the project directory and then to the folder ManualDrive_demo
    • Compile using the command "make clean; make new"
    • Go to i686 or debug_dir (depending on the car), the directory where the executable ManualDrive was compiled
  3. Set Up Human Interface:
    • Turn on the laptop.
    • Use the long ethernet cable to plug the laptop into our network - verify that it is connected to the internet through the router.
    • Plug the wheel/pedals into the laptop using the USB cable.
    • Check that the pedals are properly plugged into the back of the wheel - this connection comes loose very easily
    • Run the program called Wheel.exe from the laptop. A command prompt window should come up. Verify by turning the wheel and pushing the pedals that all input is being registered properly; the output should show steering, then acceleration, then braking.
  4. Set Up Camera System:
    • Turn on both computers in the corner of the lab.
    • Verify that both are connected to the internet through the local router.
    • Run the camera program by double clicking Shortcut to CPS on each.
    • If both computers show two windows with overhead views of one quadrant of the lab, then everything is working properly.
  5. Run Demo:
    • On the autonomous car, run the program in the program directory using the command "./ca2 inter_11points.txt 4 <human car number>"
    • On the human controlled car, run the program in the program directory using the command "./ManualDrive"
    • The demo should now be up and running; proceed until you're done, there is a crash, a car goes off course, or anything else stops the experiment.
    • At the end of the experiment, download the folder "ca_output" from the program directories on both cars, appending -4 and -<human car num> to each folder. You will end up with ca_output-4 and ca_output-<human car num>

New semi-autonomous

This demo will include one autonomous car that does no collision avoidance and one car that will be driven by a human. The human car will let the human have full control until there is danger of a collision, at which point the car will warn the human to either accelerate or brake, depending on what is required to avoid a crash. If the human does not react properly within a given time frame, the car will take control and act to avoid the collision.

Some initial work on this was done using an older version 2-car semi-autonomous demo. The code for the human-driven car is located on car 4 in root/project/semiauto_demo_2.

The current version of the demo is very simplified. The autonomous car simply follows the outer circle without changing speed, heedless of what the human car is doing. The human car, rather than only taking control from the human when a crash is imminent, takes control of the throttle whenever the 2-car semi-autonomous demo that it is based on would normally exert control. So, the human never really gets a chance to avoid the collision on their own.

Note that this is based on an older version of the semi-autonomous demo and may have some collision-avoidance issues.
