WIP.
WIP.
This section contains documentation for all important and/or unusual functions.
Note: This description of main() should encompass all the features that the final main() will have. Any program-specific additions/modifications to main() that will not be in the final version should be documented in that program’s section above.
WIP.
To be added.
To be added.
If current heading = 0°: atan_smart(1, 1, 0) = 45°
If current heading = 340°: atan_smart(1, 1, 340) = 405°
If current heading = -250°: atan_smart(1, 1, -250) = -315°
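The behavior implied by these examples is the usual angle-unwrapping trick: take atan2 and shift it by whole turns of 360° until it lies within 180° of the current heading. A minimal sketch follows; the argument order and degree units are assumptions based on the examples above, not the actual implementation.

#include <cmath>

// Hypothetical atan_smart(): atan2 in degrees, shifted by multiples of
// 360 so the result is as close as possible to the current heading.
double atan_smart(double y, double x, double heading)
{
    double angle = std::atan2(y, x) * 180.0 / 3.14159265358979323846;
    while (angle - heading > 180.0)  angle -= 360.0;  // too far ahead: unwind
    while (angle - heading < -180.0) angle += 360.0;  // too far behind: wind up
    return angle;
}

// Checks against the examples above: atan_smart(1, 1, 0) == 45,
// atan_smart(1, 1, 340) == 405, atan_smart(1, 1, -250) == -315.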
This is the version of maintain_velocity() that must be used on cars 1, 2, and 3, which uses a PID controller to keep the cars moving at a target velocity.
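For reference, the core of such a PID loop looks like the following minimal sketch; the gains, variable names, and signature are placeholders, not the actual maintain_velocity() implementation.

// Hypothetical PID update: returns a throttle correction for one time step.
double pid_step(double target, double measured, double dt,
                double &integral, double &prev_error)
{
    const double Kp = 1.0, Ki = 0.1, Kd = 0.05;     // placeholder gains
    double error = target - measured;               // P term input
    integral += error * dt;                         // I term accumulates error
    double derivative = (error - prev_error) / dt;  // D term damps overshoot
    prev_error = error;
    return Kp * error + Ki * integral + Kd * derivative;
}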
This is the function used on cars 4, 5, and 6 to maintain a target velocity. It uses a simple linear mapping from desired speed to PWM. It takes as arguments a pointer to an integer PWM value and an integer giving the target speed; the PWM integer will be set to the PWM necessary to maintain that speed. This function must be used somewhat differently than the maintain_velocity() function used for cars 1-3: we need to let the .tea code know that we are giving a PWM value directly, rather than some throttle value that needs to be converted to a PWM. We do this by setting the throttle value to -1000 before setting the PWM value. The .tea code in cars 4-6 is written such that a throttle value of -1000 will cause it to read from the PWM slot in the brainstem (slot 2) and give that value directly to the motor (after scaling it by a factor of 3). Thus, to use this function, first set the throttle to -1000 [set(THROTTLE, -1000, stemRef)], then call maintain_velocity_PWM() to find the correct PWM value for the speed you want, and finally write that value into the PWM slot [set(PWM, pwm, stemRef)].
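Putting the three steps together, a minimal usage sketch based on the calls quoted above (exact prototypes may differ in the actual code):

int pwm;
set(THROTTLE, -1000, stemRef);              // tell the .tea code to read the PWM slot directly
maintain_velocity_PWM(&pwm, target_speed);  // compute the PWM for the target speed
set(PWM, pwm, stemRef);                     // write it to slot 2; the .tea code scales it by 3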
Also note that in some of the older code, the setget.c file has the PWM section of the set() function commented out with a note that it can't be set. Simply un-comment the line and make sure that the correct slot (2) is being written to, and you should be able to set PWM without any trouble.
General note: Make sure you follow the constraints of the .tea language.
This is one of the two main programs used to manipulate the Brainstem. It can be accessed from C:\Program Files\brainstem\ or from Start->Programs->Acroname. The green dot blinks (along with the green LED on the Brainstem) if it detects a Brainstem unit connected via the serial interface. Use this to compile and load tea files.
This is the other main program used to manipulate the Brainstem. It can also be accessed from C:\Program Files\brainstem\ or from Start->Programs->Acroname. This program can be used to modify the PWM being sent along two different channels: one to the steering servo and the other to the drive motor. It bypasses all the code on the Brainstem, which means that being able to run the motors via Moto.exe does NOT guarantee that the motors will respond to the tea files and/or C code. Any modifications made to the PWM(s) can be written to EEPROM if so desired. If neither the tea files nor Moto.exe can run the motors, there is either a loose connection somewhere or the Brainstem is in need of a reset.
To compile a .tea file, open console.exe and use the command steep "file_name.tea". Then, to load the resulting .cup file to the Brainstem, connect the serial cable to the PC and attach the Brainstem serial attachment to the other end of the cable. Then disconnect the serial cable from the matching attachment on the car and connect it to the PC one. The green light in console.exe should be flashing. Finally, type load "file_name.cup" 4 slot# to load the file to the Brainstem. For example, steep "final_program.tea" will compile final_program to a .cup file, and load "final_program.cup" 4 11 will load it to slot 11 (the large slot on the Brainstem). Reconnect the serial cable to the car and it should use the updated tea files when run.
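The whole example as a console.exe session (the launch command, described in the debugging notes below, is an optional check that the program actually runs):

steep "final_program.tea"
load "final_program.cup" 4 11
launch 4 11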
Basic information on TEA files (taken from http://www.acroname.com/brainstem/ref/h/Hardware/Moto.html#files): A BrainStem module stores TEA files in an EEPROM. The Moto 1.0 module contains 12 file slots numbered 0-11. File slots 0-10 are 1K in size. File slot 11 is 16K in size. Programs can run in any of 3 "virtual machine" (VM) process slots numbered 0-2. Each process has a private stack space of 112 bytes. These processes can run concurrently. A 32 byte scratchpad RAM buffer may be used for sharing data between processes. There is additional space on the EEPROM dedicated to storing reflexes. The Moto 1.0 stores 128 reflex vectors and 128 reflex commands. For simple tasks, it may be possible to use a reflex instead of a TEA program and conserve process slots and/or file slots.
Each process on the Brainstem has a private stack of 112 bytes, which in our case is shared by final_program and all its libraries. If there are too many variables to fit on the stack, the files will still compile but won't be able to run (it'll launch the process and quit almost immediately).
A way to verify whether the files being loaded will execute is to type launch 4 11 into console.exe. If you see anything unusual, especially a message along the lines of vm exited: 4,2,5,void (for example, final_program.tea should not exit on its own since it has an infinite loop), it may indicate a run-time error. Having too many variables is one such run-time error, so this gives you one way to catch that problem in advance.
To solve this problem, delete any unused variables and optimize your variable types (use char instead of int if a char is sufficient, for example).
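A small illustration of the kind of type shrinking meant here, with hypothetical variables (TEA uses the C-style types listed below, so every int narrowed to a char saves one byte of the 112-byte stack):

int heading;        /* needs the full -32768..32767 range: stays an int (2 bytes) */
char state;         /* only ever holds a few small values: a signed byte suffices */
unsigned char pwm;  /* 0..255 fits an unsigned byte: 1 byte instead of 2 */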
Variable types in TEA (taken from http://www.acroname.com/brainstem/ref/h/TEA/types.html): The TEA language currently supports the following types:
void
The void type represents no data. It is typically used to show explicitly that there is no return value from a routine or no parameters for a routine.
char
The char type represents a signed byte. It has a range of -128 to 127.
unsigned char
The unsigned char type represents an unsigned byte. It has a range of 0 to 255.
int
The int type represents a signed integer comprised of 2 bytes. It has a range of -32768 to 32767.
unsigned int
The unsigned int type represents an unsigned integer comprised of 2 bytes. It has a range of 0 to 65535.
string
The string type is a fixed series of ASCII-encoded text characters.
Warning: The string type is fairly limited and quite expensive in terms of storage in both the program code space as well as stack space. Care should be taken when using this type.
To be added.
Be sure to follow the power up/down procedure posted on the wall (reproduced below):
Power Up – Method 1
Power Up – Method 2
Shut Down
In Moto.exe, the following settings should be used for cars 4, 5, and 6, according to Jeff Lovell:
Setting the slide bar in Moto.exe using the touch programming features of the new motor controller:
Note: The motor controller has 3 different throttle profiles, and we use profile #2 (the one without reverse) in cars 4-6. When the one touch programming is performed, the controller reverts to profile #1, so this must be fixed before the motor can be used. To change motor profiles:
If the car doesn’t behave correctly (e.g. turns off suddenly, wireless goes down, etc.), try shutting it down and then disconnecting the power (make sure it’s not running on battery). Then wait a bit to make sure everything is discharged (a minute should be plenty) before following the power-up procedure again.
When connected to the bench power supply, the cars run on ~16.5V. After connecting the power, you should wait to hear the click of the relay before pressing the main power button. Not waiting for this click could result in a loss of power when transitioning from supply to battery power. If no click is heard, check that the voltage on the supply is around 16.5V since voltages < 16V may not be enough for the relay.
If the motors stop responding, try these options (in order of ease):
(If configuring a brand new Brainstem, skip the resets and go to step 1. below.)
Being able to connect to the Brainstem via serial is quite helpful for doing the reset. If console indicates that the connection is down, try opening console.config in brainstem\aBinary\ and deleting the line that says baudrate = 115200. Reopen console.exe and see if you are able to connect now. Deleting this value resets the console's baudrate, which may help if the Brainstem somehow reset its own baudrate to the factory default.
Perform the hardware reset as described on http://www.acroname.com/brainstem/ref/h/Hardware/physresetmo.html (reproduced below).
Note: there are two ways to do the reset. Try the software option first if you are able to connect to the Brainstem via console.exe.

Software Hardware Reset:
A packet command can issue a hardware reset in place of performing the steps outlined in the physical hardware reset. For example, to issue a hardware reset to a BrainStem Moto 1.0, one would issue a cmdRESET_HARDWARE packet followed by a cmdRESET packet from the Console:
4 16 0 0 255
4 24
When successful, the heartbeat status LED (the green one) will flicker rapidly until power is cycled to the BrainStem.
If you cannot connect via serial, do the physical reset instead (requires some disassembly of the car to access the specified pins on the Brainstem).

Physical Hardware Reset

It is possible to corrupt the system settings in a BrainStem module with erroneous commands. When this occurs, it may no longer be possible to communicate with the module through the serial link, or through the IIC link if it is in a network. To regain control of the BrainStem module, it is necessary to perform a hardware reset by connecting a jumper resistor between two pins on the Moto 1.0 board and turning the power on and off. The steps for doing this are as follows:
1. Turn off power to the module.
2. Disconnect the serial cable from the module (if attached). Disconnect the module from the IIC bus (if attached).
3. Connect a 1K-10K Ohm resistor between the SRX pin of the module's serial port and the SDA pin of the module's IIC port. (Check the Acroname website for the pinout diagram: http://www.acroname.com/brainstem/ref/h/Hardware/Moto.html)
4. Reapply power. The green LED on the module will blink rapidly for a moment, turn off briefly (almost unnoticeable), then continue to blink rapidly. This indicates a successful hardware reset.
5. Turn off power to the module.
After performing either of these resets, follow these steps to reconfigure the Brainstem. This is also where to begin if configuring a new Brainstem unit.
If all else fails, it may indicate that the Brainstem has gone bad (unlikely, but it is a possibility). See if a different Brainstem unit works in its place, and if so, the original unit has probably gone bad.
The old blue car batteries can be charged using the 14.8V DC Tenergy chargers. These batteries can also be charged using the Thunder Power chargers. When using a Thunder Power charger for one of the blue batteries it should be set to 4 cells, 1 amp.
As of this writing, cars 5 and 6 (as well as all the old cars) are using the blue batteries. According to Jeff Lovell, when these batteries are switched out in the new cars, they should be replaced with the new 7.4V Thunder Power batteries. Since the Thunder Power batteries have only half the voltage, they should be installed in series to provide a total of 14.8V (the old blue batteries are 14.8V and are installed in parallel). The old cars may continue to use the blue batteries.
Currently runs on ~11.6V, but must be between 7V and 16V. On the ceiling, all the rows are in parallel and all the units in a given row are in series. So there is a voltage drop down each row. 10-12V should be enough to maintain a reasonably strong signal at the end. Increasing the voltage also increases the strength of reflections, which can cause more interference. This is less of an issue with the foam pads on the ceiling, but still should not be ignored.
The positioning system can be damaged if the voltage being supplied to it drops below 7V for even a fraction of a second. To prevent this, we are using alligator clips to provide essentially instantaneous changes between 0V and the operating voltage. The procedure is as follows:
Power-on:
Shut Down:
If the positioning system doesn’t seem to be working well, try reconnecting power to it before you try anything else. If you hear a buzzing sound from any of the units, this should resolve that as well.
Check/set the configuration of positioning units by connecting the laptop to the unit via a serial cable and running config232. Press "Find Devices" and wait until the ID shows up (this may take 20 seconds or so), then "Read EEPROM". Make the necessary changes to the configuration and "Write EEPROM" to save. It's a good idea to double-check the changes by reading the EEPROM again before disconnecting. Configuration details are as follows (in config232):

                         CtlByte   txDelay     xpDelay     IdOvride
Callers (cars):             3      see below      0           15
Transponders (ceiling):     0        255       see below      33
The txDelay and xpDelay should be changed for each car to prevent conflicts. Currently, we’re using values around 20 and 30 for txDelay. The xpDelay values range from 0 to 5 and are spaced to prevent collisions of transponder signals. The diagram below illustrates the setup:
http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:XpDelay.PNG
To check ceiling units without removing them, connect the laptop to the open slot on the end unit. "Find Devices" should bring up all units in that row, and "Read EEPROM" will get all the configurations at once. The diagram below shows the grid positions corresponding to the device IDs, with IDs in red indicating devices that are prone to error. As of 2/5/09, unit 12180 is out of commission, while the red "X" in the back row is unit 11424.
http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:DeviceID_11032008.jpg
Reconnecting power to the unit(s) may help if they don’t seem to be responding correctly after changing the configuration. I’ve mainly noticed this with the units on the cars after changing txDelay.
The Linux driver can be downloaded by finding it at the D-link website or clicking here (_ftp://ftp.dlink.com/Wireless/wua1340/Drivers/wua1340_drivers_1040.zip_). After downloading it, go via command line to the 'Module' folder contained in the download. Make sure the Configure file is executable; if it isn't, use chmod to make it executable. Type './Configure'. It will ask you for the location of the kernel source directories. This will be something like /usr/src/kernels/2.6.15-1.2054_FC5-i586 (the last directory's name may be slightly different depending on the car, so you may want to check it). After the configure is finished, type 'make all'. Once this is completed, type 'make install'.
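The whole sequence at the command line (the kernel source path is only an example, as noted above):

cd Module
chmod +x Configure
./Configure    # enter the kernel source directory when prompted
make all
make install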
At this point you should be able to go to System->Administration->Network and activate the 'rausb' device. The wireless adapter should now be working. If it isn't working or it can't be enabled, try restarting the car.
If the cars are not connected to the wireless network (LNK LED not lit on the D-link USB wireless adapter), perform the following steps in order.
1. Reset the wireless router by detaching and reattaching the power cable.
2. Access the wireless router configuration at 192.168.1.1 using a web browser and ensure:
In Wireless > Basic Wireless Settings:
<SSID> = eecs4315
<Wireless SSID Broadcast> = Enable
In Wireless > Wireless Security:
<Security Mode> = WEP
<WEP Encryption> = 64 bits 10 hex digits
<Default transmit key> = 1
<Key 1> = FAB9000FAB
3. Run iwconfig on each car and make sure rausb0 has the following settings:
<ESSID> = "eecs4315"
<Mode> = Managed
<Frequency> = <broadcast frequency of wireless router>
<Bit Rate> = 54 Mb/s (Note: This is automatically set by the Network Manager)
<Encryption Key> = FAB9-000F-AB
4. If the settings on the cars have to be manually set after each system restart, then edit the following files (see the fix for error 8 below if a car's IP address is changing):
/etc/rc.d/rc.local - Post-initialization configuration for iwconfig
/etc/sysconfig/ifcfg-rausb0 - Initialization configuration for the D-link wireless adapter
Note: The network adapters can be started/stopped using /etc/rc.d/init.d/network
5. If there is some network delay, try using different channels; it has been demonstrated that some network delay was caused by wireless interference (crowded channels).
Circuitry schematics, etc. will go here.
A list of recurring problems that currently cannot be solved, with possible hypotheses or fixes to be added when discovered.
It often happens that wireless goes down during a run. This has been going on for quite some time now, and troubleshooting is described above. However, the fixes above often do not apply, as the network seemingly goes down for no obvious reason. Personally, I think some experiment in EECS causes the trouble, but that is not easy to verify. 2/5/09
(Updated by Dan 7/31/09): The network seems to have stopped having these failures as of a few weeks ago (I don't know why), but it is experiencing periodic lag spikes rather frequently. These are often long enough to cause path following to fail. At one point I updated the firmware of the router and that seemed to improve things for the rest of that day...but when I came back the next day the lag was back. Re-updating the firmware had no effect.
I have run car 3 for many weeks without any issues, and now it is giving me "Segmentation Fault", "Brainstem Failures", or "ovrd_throttle=-1" upon program execution. I have checked the brainstem and reloaded the TEA files with no errors. Car 2 has the same TEA file as car 3, and car 2 does not have any problems. 2/5/09 Solution: It turned out that a wire had come loose, causing shorts and confusion on board the computer. I re-stripped the wires, put them back in the connector, and crimped the wires together.
Somehow the clock on car 2 got reset. Because of this, a compilation error sometimes appears saying "'file name' had modification time 'time' in the future", because we are uploading files with timestamps from 2009 but the computer thinks it's 2004 or 2005. I have changed the clock on the desktop back to 2009, but for some reason this error will often pop up when a new file is uploaded and compiled. If the error comes up when you try to compile, you must use the 'touch *' command to change the timestamps on the new files. The command 'touch *' affects all the files in the folder you are in, but it doesn't descend into subfolders; this you must do yourself (a recursive alternative is shown below). For example: if you upload UltraSteer and the compilation error comes up, go into the UltraSteer folder and use 'touch *', then descend into all the folders in UltraSteer and use 'touch *' in each. Then it should compile fine. I suspect the reason the problem still persists is that there is another clock on the computer that is still set a few years back. If anybody has any experience with this, please help.
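For reference, a standard shell idiom (my addition, not from the original author) that touches every file in the current directory and all subfolders in one shot:

find . -type f -exec touch {} \;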
To fix the underlying clock problem, use the date command in the SSH window on the car to change the date. The car does, in fact, have a different date than the desktop. If you don't know how to change this, you can search "linux changing date" online and it will tell you the syntax. I have not gotten these errors since.
The fuse on the H-bridge blew on car 3, making the motor not receive a voltage. I fixed this by replacing the fuse.
Using the ca2.c code is now causing a delay in the system in excess of 1 second. This is undesirable for human control. A possible short-term fix is commenting out create_string(*) as used in ca2.c. However, this is NOT a permanent fix, as create_string from vehicle_communication.c is essential for our final demos: it is used for setting up stored_data in ca2.c and allows the vehicles to receive position and speed information about each other.
I was using UltraSteer on one of the cars and I wanted it to output the speed. I used the same function that outputs speed in PathPlan, "speed = get(SPEED, stemRef);", but it didn't work in UltraSteer and would give me an error (SPEED variable not defined) every time I tried to compile it. It turned out that the setget.h file in the PathPlan source folder had the variable SPEED, but the setget.h file in the UltraSteer source folder had the variable THETA in the same place. So in UltraSteer, use "speed = get(THETA, stemRef);" in the program to output speed data.
One of the new wireless cards (used on cars 4, 5, 6) stopped working. I plugged it into my computer, downloaded the drivers, and it reset itself automatically. It now works again on the car; apparently there was some reset that the car couldn't perform.
Car 6’s IP address was switching between its own and that of car 2, causing communication problems. This was fixed by changing /etc/rc.d/rc.local on car 6. The following should work as a correct file (note that I have ignored the comments that appear at the top of the file; you should leave those alone). Each bullet point should be a separate line in the file, and these are the only lines that should appear in the file other than any comments:
When I turned car 5 on, the wheels would spin for a quick second, then the car would just stop and the small green light on the front of the MiniITX would blink off and on. It turns out that the moto settings just needed to be reset; refer to section 2.1.2, New car motor settings (cars 4,5,6).
The new servos (GWS S03N STD) have to have the wires switched before they will work with the brainstem. Check with car 1, 4, 5, or 6 to see what order the wires must be attached in as they connect to the brainstem. DO NOT use car 2 or 3 as a reference because they have completely different servos.
Estimator gives false and/or very jumpy speed data. When used with Moto.exe, estimator may simply report a speed of 0 when the wheels are moving slowly.
Solution: The estimator may need to be repositioned. The glue is brittle and the estimator can easily be snapped off the car. After doing this, make sure the estimator and the wheel are clean and dust-free. Now, while using Moto.exe to slowly spin the wheels of the car, move the estimator around near where it was before until you find the spot where Moto.exe picks up a consistent velocity measurement. Use some epoxy glue to glue it into this place and either hold it or clamp it until the glue sets sufficiently (all while using Moto.exe to make sure that you've still got it in the right spot).
Car 1 slowly comes to a halt after several minutes despite: trying to output a PWM of 250 (the max), normal on-board computer operation, normal battery voltage reading, and normal voltage going to the H-bridge. The brainstem voltage written to the text file however dips down periodically during the run and eventually reads 0, at which time the car starts slowing down gradually.
Solution: The wire going to the analog 4 pin on the brainstem (where the brainstem gets its voltage reading from) was loose in the blue clamp that connects the wire to the power bus. Note that this is the clamp that connects the wire to the bus, not the clamp after that between the bus and the brainstem. I just took off the clamp and re-clamped the wires.
Car 1: Working.
Car 2: Working.
Car 3: Working.
Car 4: Working.
Car 5: Seems to work well, but the period cannot be changed in moto.exe. If you figure out a solution, please post it.
Car 6: Working.
To use the vision system you will first need to tape a target to the top of the car(s) you want to use. Each car has its own unique target, with a number written on the back of it that matches the car's. Try to make the target as flat as possible when you attach it, and make sure that the orientation stripe is straight if you are running a program that gets orientation from the camera system.
In order to work properly, the camera program must be running on all three computers simultaneously. Thus, to start, you will need to run the executable CPS.exe on all of them. There should be a desktop shortcut for this on all computers. The programs on the different computers do not have to be started in any particular order or with any particular timing. Once the programs are running, they will take a few seconds to try to locate any cars that are on the track. Once they have finished, they will begin tracking and sending data to the cars. You should be able to see the boxes around the targets on the cars. If there are other stray tracking boxes on the screen (or possibly trailing your car), don't worry about it--they are for the other cars that you are not using, and they won't interfere.
In order to reset the tracking system (re-scan the entire area for targets and re-position the tracking boxes appropriately), simply press 'r' on any of the computers while a camera window is selected (doing so while the command prompt window is selected will not work). A reset can also be performed by running reset.exe from another computer. Note that a reset will not work if the patterns are moving. A reset is usually needed after a car is picked up and moved or the camera's view of it is obstructed at some point. In general, if a car isn't moving where you think it should, check the video window and make sure you didn't forget to reset.
If you close and then re-open CPS.exe on one computer while leaving it running on the others, you will need to hit reset on the ones that were left running for them all to start working properly. Avoid running two copies of CPS.exe at once on the same computer--this tends to make the cameras angry (see Troubleshooting #4).
I've done quite a bit of experimentation with switching the estimator's calculated heading for the heading given by the camera system, and all I've been able to conclude is that it usually makes little difference which one you use. Graphical comparison shows that the data from both are fairly similar. Sometimes one seems to work slightly better than the other (the car will follow the path with slightly less weaving in and out), but which one works better is not consistent from program to program. At the moment (7/30/09) most of the programs on the cars are still using the heading from the estimator, and that is not really a problem--however, if a car/program is having trouble following a path, or if you are sick of having to wait for init_heading to run, then you can go to ca2.c and switch the 'z_zeta[est_count]' argument in run_controller to 'angle' to use the camera system's heading. You can also choose to use the positioning values straight from the camera instead of waiting until they have been filtered by run_estimator. To do so, switch 'xx[est_count]' to 'xc' and 'yy[est_count]' to 'yc' in the arguments to run_controller. In the tests I ran, doing so had no apparent effect on path following. If you switch over all three of these arguments, you can safely comment out init_heading. If you do so, make sure that you set zeta_set to 1 somewhere, or the program will get stuck trying to initialize.
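In code, the swap described above looks roughly like this; the argument names are the ones quoted from ca2.c, and the trailing arguments are elided since the full signature is not given here:

/* estimator-based arguments (current): */
run_controller(xx[est_count], yy[est_count], z_zeta[est_count] /* , ... */);
/* camera-based arguments (the switch described above): */
run_controller(xc, yc, angle /* , ... */);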
This is a list of possible problems with the vision system and/or camera computers, along with their (usually very simple) solutions.
1. Problem: The position data the car is receiving seems to be wrong / the car isn't following the path.
Solution: Check to see if the system needs a reset. It's easy to forget, but it needs to be done whenever the car is picked up and moved or the camera's view of it is blocked. If that doesn't fix the problem, make sure you are using the correct pattern; the number on the pattern should match the car number. Another thing that can prevent path following from working is if the car's IP address is switching, as in Problem #8.
2. Problem: The copies of CPS.exe on the three computers are not talking to each other.
Solution: First check that COMP_NUM in CPS.h is correct on all computers. If it is, then check the IP addresses of the computers. They have been known to change for no observable reason, and this will keep CPS from communicating properly. You will need to go to CPS.h (on all three camera computers), update the values of COMP0_IP_ADDRESS, COMP1_IP_ADDRESS, and/or COMP2_IP_ADDRESS, and recompile the program (see Problem #3).
3. Problem: After changing something in the header file, an attempt to compile gives a bunch of errors in the code.
Solution: In Dev-C++, a normal compilation only rebuilds the files you have just edited, which can be a problem if the change to your header file affects things in other files. You will need to do a full compilation by hitting Ctrl+F11.
4. Problem: When trying to start the program, you get a message that there was an error setting up the sockets. This will most likely happen after accidentally trying to run two camera programs at once on the same computer.
Solution: If you look in the task manager, it will probably show that a copy of CPS.exe is running even though none is visible. This 'ghost' copy of CPS.exe can't seem to be stopped, and it prevents the program from being started normally. This must be fixed by restarting the computer.
5. Problem: The tracking/pattern location isn't working very well.
Solution: Make sure that all the lights in the lab are on and the blinds are closed--these were the conditions when the reference pictures were taken, so having half of the lights off can keep the tracking from working properly. Also, make sure that the patterns on the cars are as flat as possible.
6. Problem: Tracking fails when a car crosses from one camera frame to another.
Solution: This can happen if a camera is bumped or shifted at all. The problem can be fixed by re-calculating the mapping between frames for the border causing problems.
The overhead vision system consists of six cameras mounted on the ceiling around the track. These cameras are connected via FireWire to three desktop PCs, which implement a tracking algorithm to determine the position of the cars. The positions are converted to a global coordinate system and transmitted to the cars via the lab network. Each camera runs on its own PCI card because this allows us to run two cameras per computer without any loss in frame rate.
In the tracking program the computers are numbered from 0-2, from right to left in terms of their positions on the desk.
Rather than having a central computer that oversees all of the tracking, we simply run the tracking independently on each computer. Each computer knows at all times which cars it is responsible for tracking and sending data to. When a car is about to move outside of a computer's frame of vision, that computer sends a message via the lab network to another camera computer, which then takes over responsibility for that car.
The tracking is performed by doing a pixel-by-pixel comparison of the images captured by the camera with images of the patterns taken beforehand. Only central strips of the patterns are compared, in hopes of minimizing the effects of camera distortion.
This is a small program that can be used to reset the vision system from another computer. It can be found in the camera_programs folder on any of the camera computers. When you run it from any Windows machine on the lab network, the camera system will reset as if you had hit 'r' on one of the vision system computers. The program will produce no output. If the IP addresses of the camera computers change, the program will have to be edited to account for this change.
The header file CPS.h defines many important constants for the Camera Positioning System. Many of these are explained elsewhere, but some miscellaneous (yet important) ones will be clarified here. With the exception of COMP_NUM, all of these constants should be the same for the program on all computers. If you are having trouble compiling after changing something in CPS.h, see Troubleshooting #3.
COMP_NUM: This value tells the code which computer it is running on. This needs to be changed whenever the program is copied from one computer to another.
CAR_IP_ADDRESS_# where '#' is a target number 1-6: These values set the IP addresses that will receive position data corresponding to each target number (written on the back of each target). As of this writing, the target numbers match the car numbers. If this ever needs to be changed, or if a car needs to be prevented from receiving data for some reason, this can be done by changing these values. However, make sure that no car will be receiving positioning data from two targets at once.
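A hypothetical excerpt of CPS.h showing the constants described above; the addresses are placeholders, not the lab's actual IPs:

#define COMP_NUM 0                        /* which camera computer this copy runs on (0-2) */
#define COMP0_IP_ADDRESS "192.168.1.50"   /* placeholder addresses; use the real ones */
#define COMP1_IP_ADDRESS "192.168.1.51"
#define COMP2_IP_ADDRESS "192.168.1.52"
#define CAR_IP_ADDRESS_1 "192.168.1.101"  /* car receiving position data for target 1 */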
Note: At this point I have only set this up for Dev-C++, so those are the instructions that I am going to give here. The process to set it up in Visual Studio is probably similar.
The vision system C++ code utilizes a couple of libraries that need to be set up. First is the OpenCV library, used here for pixel-by-pixel image analysis. This can be found at SourceForge (_http://sourceforge.net/projects/opencvlibrary/_). Download and install it.
You will also need the software development kit from Point Grey; this is used for actually grabbing the images from the cameras. You can get this at the Point Grey website (_http://www.ptgrey.com/index.asp_). You will need to either create an account (you may need a camera serial number, which can be found on one of the camera boxes in the cabinet by the chalkboard), or you can log in using an account I created; I used my other email, thedanclark@gmail.com, with the same password as for the cars in the lab. Download and install the "FlyCapture v1.X release XX" (we used v1.7, but if there's a newer version it will probably be fine).
Now you will need to set up Dev-C++ to include these libraries. Open Dev-C++ and go to Tools->Compiler Options. Check the box where it says "Add these commands to the linker command line" and then add the following commands: -lhighgui -lcv -lcxcore -lcvaux -lpgrflycapture -lwsock32 -lws2_32 (the first 4 are for OpenCV, the next is for the FlyCapture library, and the last two are for the libraries used to set up the sockets used to send data to the cars).
Now click on the 'Directories' tab, and under the 'Binaries' sub-tab, add:
C:\Program Files\Point Grey Research\PGR FlyCapture\bin
Then click on the 'Libraries' sub-tab, and add:
C:\Program Files\OpenCV\lib
C:\Program Files\Point Grey Research\PGR FlyCapture\lib
In case you ever want to use OpenCV in C code, go to the 'C includes' sub-tab and add:
C:\Program Files\OpenCV\cxcore\include
C:\Program Files\OpenCV\cv\include
C:\Program Files\OpenCV\otherlibs\highgui
C:\Program Files\OpenCV\cvaux\include
Finally, go to the 'C++ includes' sub-tab and add the four OpenCV ones just listed, as well as:
C:\Program Files\Point Grey Research\PGR FlyCapture\lib
C:\Program Files\Point Grey Research\PGR FlyCapture\include
C:\Program Files\Point Grey Research\PGR FlyCapture\src
In order to get your code to compile, you may also need to add OpenCV to the System Path. Do this by going to 'My Computer->View System Information', click on the 'Advanced' tab, click on 'Environment Variables' and add C:\Program Files\OpenCV\bin to 'Path' under 'System Variables'.
On the second computer I set up for this, when trying to compile code using the FlyCapture Library it complained that '-lpgrflycapture could not be found' or something like that, though the OpenCV libraries were working just fine. I tried about a million things without success until I finally just copied the FlyCapture lib files over to the OpenCV lib folder. A messy solution, but it should work if you run into this problem and can't find a better way to deal with it.
If the computer you're on doesn't have Visual Studio on it (or maybe even if it does), then you're probably getting an error that sounds something like 'Failure to initialize' with error code 0xc01500002, when you try to run your compiled program. To fix this, download and install the Microsoft Visual C++ 2005 Redistributable Package (_http://www.microsoft.com/downloads/details.aspx?familyid=32bc1bee-a3f9-4c13-9c99-220b62a191ee&displaylang=en_). This installs necessary runtime components of some Visual C++ libraries. If you do this and still get the error, then the SP1 version (_http://www.microsoft.com/downloads/details.aspx?FamilyID=200b2fd9-ae1a-4a14-984d-389c36f85647&displaylang=en_) might work instead.
If you do all this and still get runtime errors, then try copying all the .dll files in C:\Program Files\OpenCV\bin straight into the System32 folder (C:\WINDOWS\system32). I didn't have to do this for the first computer I set this up on, but I did for the second, so this step may or may not be necessary. [Make sure you don't modify anything else in this folder].
Now you should be able to compile and run code for the Overhead Vision System. As previously mentioned, I had to do some things for the second computer that I didn't have to do for the first. So, if you have completed all of the above steps and it still isn't working, then I suggest doing some internet research and just playing around with it until you get a setup that works.
Before any meaningful position data can be obtained from the cameras, they must be calibrated both intrinsically and extrinsically. There is a lot of software to do this, but the best documented and apparently most reliable is the Caltech Camera Calibration Toolbox for Matlab (_http://www.vision.caltech.edu/bouguetj/calib_doc/_). Also necessary for calibration is a large (about 2ft by 2ft) checkerboard pattern--there should be one in the lab somewhere. The square size on that checkerboard is 34.925mm on each side. You should use the middle 15 by 15 squares for calibration.
For background information about camera calibration and parameters, refer to the "Multiple View Geometry" textbook in the lab. Particularly of interest is the information on radial distortion on pg. 189-193.
To do intrinsic calibration of a camera, a series of about 15-20 calibration images needs to be obtained from that camera. This can be done using the FlyCap.exe software that came with the cameras. Alternatively, you can use a program in the camera_programs folder called pictureTaker, which can take a series of images with a customizable time delay between each shot (this time delay can be changed in the program's header file). Each image should be primarily taken up by the checkerboard. It is important that the checkerboard be rotated and held at many different angles in order to get a good calibration. Examples can be found in the previous calib folders. After obtaining these images, follow the instructions in the first calibration example on the Caltech page to obtain the calibration parameters. The calibration program will put the parameters into a file called 'Calib_Results.m'. These new parameters can be used in the tracking program by renaming the file appropriately ('Calib_Results_CamX.m') and placing it in the program's directory on the camera computers.
This extrinsic calibration only applies if all the tracked objects are of the same height, i.e., you have a common ground plane.
To perform extrinsic calibration for a camera:
Additional miscellany about extrinsic calibration:
In calib.cpp, we implement pixel2position.m as two functions called normalize() and convert(). You also read in all the calibration parameters through the getCalib() function before doing any calculation.
The structure struct calibData contains all the calibration data. fc, cc, kc, and alpha_c are intrinsic calibration parameters, while R_inv and T are from the extrinsic calibration. R_inv is the inverse of the rotation matrix and T is the translation vector.
normalize() is simply the C++ version of the MATLAB script normalize.m, and convert() is the rest of the math in pixel2position.m. Note that you may need to add/subtract the position of your checkerboard in the convert() function.
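For orientation, here is a sketch of what struct calibData might look like, assuming the field sizes the Caltech toolbox produces (fc and cc are 2-vectors, kc a 5-vector); the real definition in calib.cpp may differ:

struct calibData {
    double fc[2];        /* focal lengths (intrinsic) */
    double cc[2];        /* principal point (intrinsic) */
    double kc[5];        /* distortion coefficients (intrinsic) */
    double alpha_c;      /* skew coefficient (intrinsic) */
    double R_inv[3][3];  /* inverse of the extrinsic rotation matrix */
    double T[3];         /* extrinsic translation vector */
};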
Before you do anything about searching or tracking, you need to generate the patterns that can be searched for or tracked. What I have been using for the project are "target"-like patterns. There are currently six different patterns. The backgrounds are black or white, and the targets are black and white rings. Below is the list of all the patterns.
The capital letter for each pattern indicates the background color for each pattern. Generally the black background is slightly better than the white one since the floor is so bright that at some places it looks very much like the white paper [Note: this may change once the black flooring mats have been placed]. The six letters after the capital letter are the sequence of the color of the rings, from outer to inner. There is a template of the target ring (as well as pre-made images of the 6 patterns listed above) in the CPS folder. You can edit the template and then print it out to make new patterns. The patterns above were designed to minimize the intersymbol interference as much as possible. So far these patterns have been working successfully.
The total area of the lab covered by the vision system is divided into 36 "sections". The divisions between these sections are shown by the horizontal and vertical black lines that can be seen in the camera windows when running CPS. Each pattern has a calibration image for every section. When a car moves into a particular section, the image from that section is used for the tracking.
Theoretically we only need one picture for each pattern, so why are we taking 36 pictures per pattern? Because the brightness in the lab is not uniform and the camera images are distorted by the lenses. It would be very computationally expensive to undistort each frame, so we decided to take pictures of the distorted images and let the computer track those. However, the distortion is different at different locations in the images, and the brightness changes from place to place. So we have six sections for each camera, which results in 36 sections in total. How the sections are partitioned is related to the performance of the tracking algorithm. The boundaries are set so that the computer will still believe it's the same object when it travels from section to section. In other words, the sections make the object "look" the same in the computer's view.
The diagram below shows the section layout:
http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:New_section_layout_small.png
If a pattern stops working in a certain section or if one of the patterns is changed, you may need to re-take the picture of that pattern in the affected sections. The positioning system code contains a function called recordObjectData which allows you to take pictures of the patterns for the computer to use in its searches. To take the pictures:
Note that running the system in RECORD_OBJECT_DATA mode is also helpful when checking the accuracy of an extrinsic calibration, as the program will output the position of the upper-left corner of the 'box', in both pixel coordinates and the global coordinate system in millimeters.
Once you have all the pictures you need, you can start searching and tracking. There is one very important parameter that is used throughout the program: BOX_SIZE. BOX_SIZE is the size (in pixels) of the square box that contains your tracking pattern. In the previous version of our algorithm, we compared everything in the box to the picture taken beforehand and then found the best match. The problem is that, even with so many sections, we still had trouble finding a good match because of the distortion. The updated algorithm only compares the middle stripe of the box, since the middle part suffers the smallest effects of distortion. Since we now compare only the middle one fifth of the entire box, this reduces the complexity while improving performance.
The search algorithm is implemented in the function void findObject(IplImage *img[], int objData[][NUM_CARS][NUM_FILES], Position loc[], socketData &sendSocket, socketData &receiveSocket). You need to pass in the images, the object data that you recorded, the locations of all your objects, and the socket data used for communication between the computers. In this function, initial determines where the search starts, and r_x, r_y determine how far it will look. Once those parameters are specified, it will start searching for the objects. The algorithm searches the whole region defined by initial and (r_x, r_y): it slides the box (of size BOX_SIZE) pixel by pixel, compares the image in the box with those taken previously, and finds the best match. Again, it only compares the middle stripe of the box. After the search for all the objects, two arrays will contain the locations of the patterns and the differences between the best matches and the pictures taken earlier. These will be passed to the searchExchange function, which communicates with the other camera computers to determine the best overall match.
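The heart of that search is a sliding-window loop like the following sketch; boxDiff() stands in for the middle-stripe comparison helper described later, and all names except BOX_SIZE are placeholders:

#include <climits>  /* for INT_MAX */

int bestX = 0, bestY = 0, bestDiff = INT_MAX;
for (int y = init_y; y < init_y + r_y; ++y) {
    for (int x = init_x; x < init_x + r_x; ++x) {
        /* difference between the recorded pattern and the middle
           stripe of the BOX_SIZE box whose corner is at (x, y) */
        int d = boxDiff(img, objData, BOX_SIZE, x, y);
        if (d < bestDiff) { bestDiff = d; bestX = x; bestY = y; }
    }
}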
The searchExchange function is used to exchange search data between the computers and find out which computer has the best match. This function behaves differently depending on whether COMP_NUM is 0 or not.
The exchange of data works as follows:
For more detail, see the code of the searchExchange function in the search.cpp file.
Tracking is very similar to searching; it's a local-area search. The size of the local area is defined by the global variable ITERS. The value of ITERS should be determined by how fast your computer is and how fast the object you are trying to track moves. I wrote a MATLAB script to calculate the value for ITERS. The MATLAB function is defined as follows:
function iters = iter_calculator(pixel_size, max_speed, fps, delay)
overall_fps = 1000 / (1000/fps + delay);
speed_per_frame = max_speed / overall_fps;
iters = speed_per_frame / pixel_size;
The inputs to this function are how big one pixel is in millimeters, the maximum speed of the object in millimeters/second, the frame rate in frames/second, and the delay in milliseconds. The delay refers to unusual behavior of the computer, network, etc.; it should be 0 for normal cases.
After you specify ITERS, the program will do a search in the same way as the searching function, but it will only search in the area of a box with side length 2*ITERS+BOX_SIZE centered at the object's previous location. There needs to be a balance between ITERS and the frame rate. If you increase ITERS, you will lower your frame rate, since you are doing more calculation for each frame. If your frame rate is low, then you need to increase ITERS, because otherwise you may not be able to track the object. So the overall goal is to maximize the product of the frame rate and ITERS. This can be done experimentally.
At the end of the tracking function, we run the search for the orientation. After you make the symmetric target patterns, you will need to put a "dot" on each one somewhere outside the outer circle. The algorithm will search in a bigger box, centered at the center of the box that contains the symmetric target, and find the dot. The location of this dot will then be used to determine the orientation of the pattern.
To improve the program's ability to find the "dot", a stripe pointing away from the center of the pattern can actually be used instead of a dot. The algorithm will still search for a square, and will simply find some arbitrary point along the stripe--but it doesn't matter which part of the stripe is found since the entire stripe is along the same orientation. Note 1: The stripe should be in the opposite color of the background of the pattern. Note 2: The stripe should be pointing straight away from the center of the pattern. Note 3: The stripe should not be too close to the pattern since it may interfere with the tracking. After the program figures out where the dot/stripe is, it will convert the position of the dot/stripe to orientation and send it to the vehicle (More on this in later sections).
Note: At this point (8/27/2010) none of the demos (that I am aware of) actually use the orientation data given by the camera system.
After the tracking and the search for the dot, we are ready to send data to the vehicle. Note that up to this point, all the pixel positions mentioned correspond to the upper-left corner of the box you see on the screen. We will keep this convention unless otherwise mentioned. However, we send the position of the center of the box, instead of the upper-left corner, to the cars. After we get the center position, we pass it into the convert() function to get its global position in millimeters. You may refer to the Extrinsic section to figure out what this function does.
Then we will send the message, in the form of a character string, using sockets. The string will contain the position of the object in millimeters and also its heading information.
Each camera has some overlap with the other cameras, so that when an object is leaving one camera, it can easily be switched to another camera that has a better view of it. There are two cases: one is a switch within one computer, i.e., switches between cameras 0 and 1, or between cameras 4 and 5. The other case is a switch between computers. The latter case is slightly more complicated, since we need to send a message from one computer to the other, which may involve transmission delay. There are a total of 7 borders between the cameras, and there is a mapping in both directions for each border. We name each of these transition mappings in the following way:
http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:Transition_numbers.png
The data for these mappings can be found in the CPS/calib_data/ folder. There are 14 different files (1 for each transition), named with the format compX_transitionY.txt.
The mapping between two cameras has to be re-calculated whenever the cameras are moved or bumped. If the tracking messes up when a car travels from one camera frame to another, the camera transition mapping is usually at fault and will have to be re-measured. We draw grids on the images shown on the screen, and the lines at the edges between cameras indicate at which point we switch the pattern from one camera to another. The following is an example of how to calculate this mapping:
1. Say you want to recalculate the mapping of the transition from camera 5 to camera 3. From the above diagram, we can see that this is comp 2, transition 3.
2. You need to place 3 reference objects on the floor at the point of the transition: one near each side, and one in the middle. The objects should be as close to the black transition line in the camera window as possible. See the image below for an example of where to place the objects.
http://wiki.eecs.umich.edu/delvecchio/index.php?title=Image:Mapping_measurement_new.png
Note that the objects are placed along the line in the camera frame that the object would be *leaving*. If we wanted to calculate the mapping from camera 3 to camera 5, we would place the reference objects along the transition line at the bottom of camera 3.
3. The objects should now be visible from both cameras involved in the transition. We need to determine their exact pixel positions in both frames in order to calculate the mapping. To do this, re-compile the camera system in RECORD_OBJECT_DATA mode. This will cause the program to output the position of a move-able box in the camera frame, in both pixel coordinates and in the global coordinate system. Move the box over the objects to get their positions. Do not close the terminal without manually recording the box positions, as the 'record data' functionality is for lighting calibration purposes, and not mapping.
4. Using the 6 pixel coordinate locations you have obtained (2 positions for each of the three objects--one from each camera), run the script border_mapping.m located in the CPS\matlab scripts\ folder.
5. After following the script's instructions, it will output the calculated mapping data. Simply copy/paste this data to the proper file in the CPS\calib_data\ folder. In order to ensure that all 3 computers have the same data, you should copy the new mapping data file to the other camera computers as well.
6. Be sure to test the new mapping by driving one of the cars through several points along the border in question.
In the program, the parameters that define when to switch are CAM0_VERT, CAM0_HOR, CAM1_VERT, CAM1_HOR, and CAM1_LOWER_HOR. These parameters are set in CPS.h, and they are different for each of the three camera computers. They are chosen so that if the image is very distorted on one camera, the object will be switched to the other one, where the distortion is less serious. So, when the object passes these lines, it will be switched to the other camera. But oscillation between the two cameras might occur if you don't choose the parameters carefully. Also note that even if you calculate the mapping for one direction, you will still need to do the inverse mapping separately. Unfortunately, you cannot simply reverse the linear functions to get the inverse map. You will need to follow the procedure described above again for each switch.
At this point, you have everything that is needed for switching between the two cameras that are on the same computer. Switching between computers is slightly different, because you will need to send a message to the other computer. We use sockets to send the message. The message is a character string that has the following format: " Switch object_number camera_number projected_x_coordinate projected_y_coordinate " (note the whitespace at the beginning and end of the string). In this string, the first component will always be Switch, which is an enumerator and has the value of 1. Next is the object number. The camera number is sent as a negative number: you need to increment it by one and then negate it to get the new camera number. The last two components are the projected pixel positions in the new camera. Each coordinate has two parts, the mapping part and the velocity projection part. The mapping part is as described above, and the projection part is based on the object's current velocity in the previous camera. Since there will be transmission delay and processing delay, the projection takes this delay into account when computing the new position. Note that if there are multiple objects switching between computers, one message will be sent for each of them. So we may have to send up to six messages in a single frame, but this won't cause any problems, since sending a message through a socket is very fast.
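On the receiving side, the decoding rule just described is a one-liner (variable names are mine, not from the actual code):

int new_camera = -(received_camera + 1);  /* e.g. a received value of -2 decodes to camera 1 */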
This is the function you should call at the beginning of the program. data is an array of calibData type. This function reads in all the intrinsic and extrinsic calibration data for all four cameras. However, before you run it, you need to change the directory, since all of that data is stored in the CPS/calib_data folder. This change can be done by simply calling the changeDirectory() function.
This function is used in the searching and tracking algorithms. You need to pass in the image (in OpenCV format), the picture data that was recorded and read in, the size of the box that contains the object (usually BOX_SIZE), the object's position structure, the object index k, and which section it's in. The function returns the difference between the image in the box at loc and the recorded image.
This function sets up the connection between the Fly Capture images and OpenCV images for the first time. It also sets up the windows that are used for displaying images on your screen. Since we are using two cameras on each computer, we also display two windows on each computer; that's why we pass an array of OpenCV images and Fly Capture images to the function. The camera sends images to the computer at a constant rate of 60 frames per second. This frame rate is set in the setup function, but it can also be modified manually with the FlyCapture software, which can be accessed from "C:\Program Files\Point Grey Research\PGR FlyCapture\bin\FlyCap.exe". However, due to limited computation power and the complexity of the algorithm, the program usually iterates at a slower speed. The frame rate of the program (usually somewhere between 30 and 58 FPS) varies with the value of ITERS and the number of objects it's currently tracking. So, every time the program finishes processing the previous image, it grabs a new image from the camera. The computer simply drops the images that are not grabbed by the CPS program.
It's best to understand what this function does graphically. When you run the program, you will see several boxes drawn around each object being tracked. The inner box tells you where the object is, while the outer box marks the boundary of the range in which we search for the dot/stripe around the pattern. The tiny box is the location of the orientation dot/stripe. This function searches in units of the small box: it slides this small window through all possible positions inside the outer box but outside the inner box. It is essentially searching along the four sides of the big box, which is why you see four for-loops there. Because the dots have different colors for different patterns, we need to distinguish them: for patterns 1, 3, 4, and 5 the dot is white, while for patterns 2 and 6 it is black. So we add a bias when we search for the dot. The function used to do the searching is int boxAvg(IplImage *img, int x, int y, int box), which does nothing but tell you the average pixel brightness in the box located at (x, y). For the different dot colors we use different bias values. After searching the entire area, dotSearch(...) draws the dot at the position that looks most like our orientation dot/stripe.
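A minimal sketch of what boxAvg() computes, assuming an 8-bit single-channel IplImage (only the signature is from the source; the implementation details here are an assumption):

    #include <opencv/cv.h>

    // Average pixel brightness in the box*box region whose top-left
    // corner is at (x, y).
    int boxAvg(IplImage *img, int x, int y, int box)
    {
        int sum = 0;
        for (int row = 0; row < box; row++) {
            unsigned char *p = (unsigned char *)img->imageData
                             + (y + row) * img->widthStep + x;
            for (int col = 0; col < box; col++)
                sum += p[col];
        }
        return sum / (box * box);
    }

For a white dot you would keep the window that maximizes boxAvg(); for a black dot, the window that minimizes it. Adding a sign or bias to the score, as described above, folds both cases into one comparison.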
On the desktop of the camera computers, the folder CPS contains all the files for the program.
Linear filtering is nothing but multiplication and addition. Go to Matlab and type help filter; it will show you the algorithm for filtering. Once you have the input data, you just multiply the data by the corresponding coefficients of the filter, then add the products up to get the output. Where do you get the coefficients? Go to Matlab and type help firpm. This is a long help page, but you only need to pay attention to the example it shows at the bottom. Since our data is not changing very fast, we may want to use a lowpass filter to reject the noise at high frequencies.
% Example of a length-31 lowpass filter:
h = firpm(30, [0 .1 .2 .5]*2, [1 1 0 0]);
The first input to this function is the length of the filter minus 1. The longer the filter, the less it distorts your signal, but the longer the delay. In our project, the generally acceptable delay is ~100 ms. For the filter implemented on the computer, you can use a filter of length 10. For the filter on the car, it is impossible to get a useful filter within that delay: the delay of a lowpass filter can be roughly calculated as the length of the filter divided by two, times your sample duration, and the sample duration for the program on the car is already 100 ms, so no useful filter there can have a delay of less than 100 ms. The second and third inputs to the Matlab function correspond to each other. The example above says that the gain is 1 over the interval [0 .1]*2*pi and 0 over [.2 .5]*2*pi; you don't care about the transition band between the two intervals. If you know the specific frequency content of your true data, you can shrink the interval that has gain 1; otherwise, you may just keep that interval as is. Note that if you use a very small gain-1 interval while also keeping the delay short, the filtered signal will usually be distorted. Once you have figured out what kind of linear filter you want, just plug the coefficients you got from the firpm function into your filter. It's just one line of many multiply-and-add operations in C++.
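For example, applying the coefficients in C++ might look like this minimal sketch (the history buffer and the firFilter() name are assumptions; the coefficients h[] are the firpm output copied from Matlab):

    #define FILTER_LEN 10          // filter length chosen for the computer-side filter

    double h[FILTER_LEN];          // coefficients copied from Matlab's firpm output
    double history[FILTER_LEN];    // most recent inputs, history[0] = newest

    // Shift in a new sample and return the filtered output:
    // y[n] = sum over k of h[k] * x[n-k]
    double firFilter(double x)
    {
        for (int k = FILTER_LEN - 1; k > 0; k--)
            history[k] = history[k - 1];
        history[0] = x;

        double y = 0.0;
        for (int k = 0; k < FILTER_LEN; k++)
            y += h[k] * history[k];
        return y;
    }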
The only nonlinear filter I have been using is the median filter. A median filter applies a window of size N (N >= 3) to your input data: it picks the median value in the current window as the output, then slides the window by 1. So the delay of the median filter is N-1. It can eliminate salt-and-pepper noise in your data. We implemented a median filter for the velocity calculation on the computer, and it is working fine. In the program, we just sort the data in the velocity history plus the newly calculated value, find the median of those, and set it as the current value. Note that every entry in the history is a newly calculated value, not a filtered one.
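A minimal sketch of this median step (the names and the window size are assumptions; the key point, matching the description above, is that the history holds raw values, never filtered ones):

    #include <algorithm>

    #define N 5                       // window size, N >= 3

    double velHistory[N];             // raw (unfiltered) velocity values

    // Shift the newest raw velocity into the history, then return the
    // median of the window as the filtered value.
    double medianFilter(double newVel)
    {
        for (int i = N - 1; i > 0; i--)
            velHistory[i] = velHistory[i - 1];
        velHistory[0] = newVel;       // the history stores raw values only

        double sorted[N];
        std::copy(velHistory, velHistory + N, sorted);
        std::sort(sorted, sorted + N);
        return sorted[N / 2];         // the median
    }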
The following links are the matlab files I used to come up with the motor map for car 4:
car 4 motor map data (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car4motormapdata.m_)
car 4 motor map data analysis (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car4motormapdataanalysis.m_)
(These same files can be used for cars 5 and 6; you just need to change the variables from "car4_PWM_#t#" to "car5_PWM#t#" or "car6_PWM#_t#" in both the data and data-analysis files.)
1. Make sure the .tea code on the car takes an input and assigns it to the PWM.
2. Run the car at a series of constant PWM values (10, 20, 30, ...).
3. Record and analyze the data.
Note: be sure of the units in all the programs and calculations.
The following links are the matlab files that I used to solve for the dynamic parameters for car 2:
car 2 parameters data (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car2_parameters_data.m_)
car 2 parameters data analysis (_https://mfile.umich.edu/download/?path=/afs/umich.edu/user/a/a/aaboutal/Public/car2_parameters_analysis.m_)
(These same files can be used for cars 1 and 3; you just need to change the variables from "car2_trq##" to "car1_trq##" or "car3_trq#_#" in both the data and data-analysis files.)
Note: "a" should be a positive number and "b" should be a negative number.
The maintain_velocity_PWM() function for the new cars (cars 4-6) uses a linear mapping of desired speed to PWM. The parameters of this linear function will likely have to be re-measured after certain hardware changes to the cars. The process to find the parameters is fairly simple and can use data acquired from the motor map trials.
1. It is necessary to find the relationship between PWM and resultant speed for various speeds/PWMs of the car. The data used to calculate motor maps can be used for this. Use the data to estimate the speed that the cars level off to for each constant PWM value given to them in the trials (10, 20, 30...).
2. Now use Matlab to calculate a linear fit to your data. Begin by entering the data into Matlab. Your X data will be the speeds reached by the cars, and your Y data will be the corresponding PWM values:
Xdata = [speed1 speed2 speed3 ...];
Ydata = [PWM1 PWM2 PWM3 ...];
3. Now find a linear fit to the data: p = polyfit(Xdata, Ydata, 1)
The parameters given by Matlab here are the ones that can be used in maintain_velocity_PWM() for that car (see the sketch after these steps).
4. If you want to check these parameters graphically (recommended), this can be done in the following way:
Xeval = linspace(0, 2500, 2501);
Yeval = polyval(p, Xeval);
plot(Xdata, Ydata, '.')
hold on
plot(Xeval, Yeval, 'r')
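As a hedged illustration of how the fitted parameters map a target speed to a PWM value (the variable names a and b, the speedToPWM() name, and the rounding are assumptions; p(1) and p(2) come from the polyfit call above):

    // Fitted parameters from Matlab: p = polyfit(Xdata, Ydata, 1),
    // where p(1) is the slope and p(2) the intercept.
    const double a = /* p(1) */ 0.0;   // slope  (PWM per unit of speed)
    const double b = /* p(2) */ 0.0;   // intercept

    // Linear mapping of desired speed to PWM, as used by
    // maintain_velocity_PWM() for cars 4-6.
    int speedToPWM(int targetSpeed)
    {
        return (int)(a * targetSpeed + b + 0.5);  // round to nearest integer
    }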
Updated summer 2009
Note: the units for these equations are mm/s^2.
Description: Three cars travel on different printed lab circles imitating traffic-roundabout geometry, using the collision-avoidance and autonomous cruise-control algorithms. Currently the demo is set up to run with car 1 on the outermost circle, car 2 on the smallest circle, and car 3 on the intermediate circle.
Steps to run:
Steps to Run Program:
This demo will include one autonomous car that does no collision avoidance and one car driven by a human. The human-driven car will give the human full control until there is danger of a collision, at which point the car will warn the human to either accelerate or brake, depending on what is required to avoid a crash. If the human does not react properly within a given time frame, the car will take control and act to avoid the collision.
Some initial work on this was done using an older version of the 2-car semi-autonomous demo. The code for the human-driven car is located on car 4 in root/project/semiauto_demo_2.
The current version of the demo is very simplified. The autonomous car simply follows the outer circle without changing speed, heedless of what the human car is doing. The human car, rather than taking control from the human only when a crash is imminent, takes control of the throttle whenever the 2-car semi-autonomous demo it is based on would normally exert control. So the human never really gets a chance to avoid the collision on their own.
Note that this is based on an older version of the semi-autonomous demo and may have some collision-avoidance issues.