Now that we have the basic auto shooting working you might think we are done, but no!  What we have now still requires us to position the robot so that it is roughly aligned with the tower.  The further the robot is off center, the more likely the shot will fail.  What would really be cool is if we could place the robot at any position on the field, and have it automatically figure out how to properly align itself with the tower and make the shot.

To accomplish this feat, we are going to need some kind of navigation system which will allow the robot to keep track of its absolute position on the field in real time.

The robot can track its position using a gyro, which tells it what direction it is heading, and the wheel encoders, which tell it how far it has moved in that direction.  There are two problems with this method, however.  The first is that the robot will need to know its starting point.  The second is that computing its location in this manner results in small errors which accumulate over time until the error becomes significant and the robot no longer knows where it is.

To solve these problems, we need a way for the robot to find its absolute position.  Fortunately we have a way to do that.  By using the gyro and looking at the target, it is possible to compute the absolute position of the robot on the field to within a few inches.

In principle it should be possible to use trigonometry to take the image of the target and the current orientation of the robot (as measured by the gyro) and compute its location.  This would require knowing the exact position and angle of the camera, the exact position of the target, and some details about how the image is captured, all of which might be difficult to obtain.  Even if we could get the required information, the calculations would be very complex.

Fortunately there is an easier way.  What we will do is drive the robot around, gather data about its distance and angle, and derive equations that will relate those measurements to its position on the field.

The first step is to find the relationship between the position of the top of the target image from the top of the frame and the distance of the robot from the tower.  To this end, we will create a new command called CalibrateDistanceCommand.  Go ahead and create the framework for this new command.  When you are done, your CalibrateDistanceCommand.java file should look like:
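The exact package name and imports depend on your project layout, but a minimal skeleton, assuming the pre-2020 WPILib command-based framework (consistent with the `whenPressed`/`whileHeld` calls this tutorial uses), might look like:

```java
package frc.robot.commands; // adjust to match your project's package

import edu.wpi.first.wpilibj.command.Command;

public class CalibrateDistanceCommand extends Command {
    public CalibrateDistanceCommand() {
    }

    @Override
    protected void initialize() {
    }

    @Override
    protected void execute() {
    }

    @Override
    protected boolean isFinished() {
        return false;
    }

    @Override
    protected void end() {
    }
}
```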

Now what we are going to do is start our robot 55 inches from the tower and have it drive forward slowly for 8000 encoder units.  While we do this we will record both the encoder values and the position of the top of the target with respect to the top of the image.  We will then try to determine the relationship.

Since we are going to be using the drive motors, we must declare that this command uses the DriveSubsystem:
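One way to express this, assuming your DriveSubsystem instance is reachable as `Robot.driveSubsystem` (an assumption about your project's structure), is to call `requires()` in the constructor:

```java
public CalibrateDistanceCommand() {
    // Reserve the drive subsystem so no other command can use the
    // drive motors while this calibration command runs.
    requires(Robot.driveSubsystem); // Robot.driveSubsystem is an assumed accessor
}
```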

In the initialize() function we need to reset the encoders to zero and then start the robot moving forward. We also want to start the log file:
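A sketch of initialize(), assuming the DriveSubsystem exposes `resetEncoders()` and `setPower()` helpers (hypothetical names; use whatever your subsystem provides) and that the log is a simple CSV written with `java.io.PrintWriter`:

```java
private PrintWriter m_log; // field on the command

@Override
protected void initialize() {
    Robot.driveSubsystem.resetEncoders();    // assumed helper: zero both encoders
    Robot.driveSubsystem.setPower(0.2, 0.2); // assumed helper: drive forward slowly
    try {
        // Assumed log path on the roboRIO; adjust for your setup
        m_log = new PrintWriter("/home/lvuser/calibrate.csv");
        m_log.println("LeftEncoder,RightEncoder,Y1,Y2");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
```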

Normally we would define the speed as a constant rather than using the 0.2 directly, but since this command is for testing purposes only, we really don’t need to bother with that.  Don’t get in the habit of not using constants when they are appropriate, however.

In the execute() function we need to log the current encoder values, and the position of the top left and right corners of the target:
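A sketch of execute(), assuming hypothetical accessors `getLeftEncoder()`/`getRightEncoder()` on the drive subsystem and `getTopLeftY()`/`getTopRightY()` on the vision code for the y coordinates of the target's top corners:

```java
@Override
protected void execute() {
    if (m_log != null) {
        // One CSV row per loop: encoder counts plus the y positions
        // of the target's top-left and top-right corners
        m_log.println(Robot.driveSubsystem.getLeftEncoder() + ","
                    + Robot.driveSubsystem.getRightEncoder() + ","
                    + Robot.vision.getTopLeftY() + ","      // assumed vision accessors
                    + Robot.vision.getTopRightY());
    }
}
```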

In the isFinished() function we want to return true when one of the encoders exceeds 8000 units:
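Using the same assumed encoder accessors as above:

```java
@Override
protected boolean isFinished() {
    // Stop once either side has traveled 8000 encoder units
    return Robot.driveSubsystem.getLeftEncoder() >= 8000
        || Robot.driveSubsystem.getRightEncoder() >= 8000;
}
```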

Then in the end() function we need to stop the robot and close the log file:
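A sketch, again assuming the hypothetical `setPower()` helper:

```java
@Override
protected void end() {
    Robot.driveSubsystem.setPower(0, 0); // stop the robot
    if (m_log != null) {
        m_log.close(); // flush and close the CSV log
    }
}
```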

Your CalibrateDistanceCommand.java file should now look like:
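Putting the pieces together, a complete sketch might look like the following.  The subsystem helpers (`resetEncoders`, `setPower`, the encoder and vision accessors) and the log path are assumptions about your project, not prescribed names:

```java
package frc.robot.commands; // adjust to match your project's package

import java.io.IOException;
import java.io.PrintWriter;

import edu.wpi.first.wpilibj.command.Command;
import frc.robot.Robot;

public class CalibrateDistanceCommand extends Command {
    private PrintWriter m_log;

    public CalibrateDistanceCommand() {
        requires(Robot.driveSubsystem); // assumed accessor
    }

    @Override
    protected void initialize() {
        Robot.driveSubsystem.resetEncoders();    // assumed helper
        Robot.driveSubsystem.setPower(0.2, 0.2); // assumed helper
        try {
            m_log = new PrintWriter("/home/lvuser/calibrate.csv"); // assumed path
            m_log.println("LeftEncoder,RightEncoder,Y1,Y2");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    protected void execute() {
        if (m_log != null) {
            m_log.println(Robot.driveSubsystem.getLeftEncoder() + ","
                        + Robot.driveSubsystem.getRightEncoder() + ","
                        + Robot.vision.getTopLeftY() + ","   // assumed vision accessors
                        + Robot.vision.getTopRightY());
        }
    }

    @Override
    protected boolean isFinished() {
        return Robot.driveSubsystem.getLeftEncoder() >= 8000
            || Robot.driveSubsystem.getRightEncoder() >= 8000;
    }

    @Override
    protected void end() {
        Robot.driveSubsystem.setPower(0, 0);
        if (m_log != null) {
            m_log.close();
        }
    }
}
```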

Finally, connect this new command to our test button 12 in the OI class.  Your OI.java file should look like:
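A sketch of the binding, assuming the joystick is on port 0 and using the pre-2020 button classes:

```java
package frc.robot; // adjust to your project

import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.buttons.JoystickButton;
import frc.robot.commands.CalibrateDistanceCommand;

public class OI {
    private final Joystick m_joystick = new Joystick(0); // assumed joystick port
    private final JoystickButton m_testButton = new JoystickButton(m_joystick, 12);

    public OI() {
        // whenPressed starts the command once and lets it run to completion
        m_testButton.whenPressed(new CalibrateDistanceCommand());
    }
}
```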

Note that in this case we are using the whenPressed function rather than the whileHeld function. This is because we want the command to run to completion when we press the button.

Place your robot 55 inches from the tower, run the program, and press button 12 on the joystick.  Now measure and record the distance, in inches, that the robot traveled.  In my case it moved 38.5 inches.  Since the robot traveled 38.5 inches while moving 8000 encoder units, we can convert encoder units to inches by multiplying by 38.5 and dividing by 8000.
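This conversion is simple enough to capture in a small helper.  A sketch using the values measured above (38.5 inches per 8000 encoder units; your robot's numbers will differ):

```java
public class EncoderConversion {
    // Measured during calibration: the robot moved 38.5 inches in 8000 encoder units.
    static final double INCHES_PER_RUN = 38.5;
    static final double UNITS_PER_RUN = 8000.0;

    public static double encoderUnitsToInches(double units) {
        return units * INCHES_PER_RUN / UNITS_PER_RUN;
    }

    public static void main(String[] args) {
        System.out.println(encoderUnitsToInches(8000)); // 38.5
        System.out.println(encoderUnitsToInches(4000)); // 19.25
    }
}
```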

Retrieve the log file from the robot and load it into Excel.  You should see something like:  Now we want to insert a column which will display the distance the robot is from the tower.  We will average the left and right encoders and multiply them by the scale factor above.  We know that when the encoders read zero, the distance is 55 inches, and as the encoders increase, the distance decreases.  Thus the formula we need is:
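In words: the distance starts at 55 inches and decreases by the average encoder travel converted to inches.  Assuming the left and right encoder values landed in columns B and C (adjust to match your spreadsheet), the formula for the first data row would be something like:

```
=55 - ((B2 + C2) / 2) * 38.5 / 8000
```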

We also want to insert a column which records the average of Y1 and Y2.

After we insert these new columns we should see:  Now plot the Distance vs. the average Y and you should see something like:

As you can see, the curve is decidedly non-linear.  However, using Excel we can fit it quite nicely to a quadratic equation as follows:  The formula that you see is the actual formula which will convert the top position of the target into inches on the field.

Unfortunately, Excel is not giving us the precision we need, particularly for the x^2 term, so we will multiply the Distance by 100 and plot it again:  From this we can see that the equation to convert from the top position to inches is (remember we have to divide by 100):

Unfortunately, this is not the complete story.  This formula will only work if we are facing the tower head on.  If the robot is rotated away from the tower a bit, this formula will not work correctly.  To fix that we want to run a test where we rotate the robot and record the change in the top position as a function of the angle.  This will enable us to compute a correction value based on the angle the robot is facing.

While we are doing this, we also want to figure out the relationship between the target's offset from the image center line and how many degrees the robot is away from being properly aligned with the tower.