Back to the question: How can we determine where a tennis ball strikes a large surface? What if we use a couple of cameras to triangulate the position of the tennis ball? If necessary, we have cameras that stream as fast as 240 fps, and we can monitor a wide region in front of the wall rather than the narrow plane we were monitoring with the AirScan. That means the ball can’t sneak through our detection area undetected!
I wrote a quick openFrameworks app to watch the input from two PS3 Eye cameras. All it does is run a simple difference calculation between frames; wherever the difference surpasses a threshold, it identifies the tennis ball in motion. A bit of optics and trigonometry and we can get the distance of the ball from the wall, and its x/y coordinates on the wall.
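The actual app isn’t shown here, but a minimal openFrameworks sketch of that differencing step might look something like the following. Note this is a single-camera illustration, not the real two-camera app, and the resolution, `threshold`, and `minPixels` values are placeholders I’ve picked for the example:

```cpp
// ofApp.h -- a minimal sketch of the frame-differencing step.
#include "ofMain.h"
#include <cmath>

class ofApp : public ofBaseApp {
public:
    void setup() override {
        cam.setup(320, 240);                // one PS3 Eye, for brevity
        prev.allocate(320, 240, OF_PIXELS_GRAY);
        prev.setColor(ofColor::black);      // avoid garbage on the first frame
    }

    void update() override {
        cam.update();
        if (!cam.isFrameNew()) return;

        ofPixels gray = cam.getPixels();
        gray.setImageType(OF_IMAGE_GRAYSCALE);

        // Centroid of every pixel whose inter-frame difference exceeds the
        // threshold -- with nothing else moving, that's the tennis ball.
        long sumX = 0, sumY = 0, count = 0;
        for (size_t y = 0; y < gray.getHeight(); ++y) {
            for (size_t x = 0; x < gray.getWidth(); ++x) {
                float d = std::abs(gray.getColor(x, y).getBrightness()
                                 - prev.getColor(x, y).getBrightness());
                if (d > threshold) { sumX += x; sumY += y; ++count; }
            }
        }
        if (count > minPixels) {
            blob = glm::vec2(sumX / float(count), sumY / float(count));
        }
        prev = gray;
    }

    ofVideoGrabber cam;
    ofPixels prev;
    glm::vec2 blob;       // last detected ball position, in pixels
    int threshold = 40;   // per-pixel brightness delta
    int minPixels = 10;   // reject single-pixel noise
};
```

Run that per camera and you have a blob position in each image; the trigonometry turns the pair of blob positions into a point on the wall.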
After a bit of experimentation with USB extenders (not all created equal when it comes to webcams, by the way), I got everything plugged in and roughly calibrated with the projector.
The cameras are in the bottom left and right corners of the wall. Currently, the calibration methodology is comically rudimentary:
- Measure the FOV of each camera using a ruler.
- Aim each camera up along the wall so that the lower edge of its field of view passes through the lens of the opposing camera.
- The x/y coordinates that result from the calculations are scaled from x=0 (at the left camera lens) to x=1 (at the right camera lens), so we scale these into screen space.
That’s it. Fancy, right? For a production system, we’ll need to rework the math to allow for some actual calibration.
But this gets the idea across nicely.
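For the curious, here’s a rough sketch of the trigonometry under that crude calibration. The function name and structure are mine, and it assumes both lenses sit exactly on a shared baseline of normalized length 1, with each camera’s lower field edge lying along it (so a blob’s row fraction in the image maps directly to an elevation angle):

```cpp
#include <cmath>

struct WallPoint { float x; float y; };  // x: 0 at left lens, 1 at right lens

// thetaL, thetaR: elevation angles (radians) of the ball above the baseline,
// as seen from the left and right cameras -- under the crude calibration,
// just blobRowFraction * cameraFovRadians for each camera.
WallPoint triangulate(float thetaL, float thetaR) {
    float tL = std::tan(thetaL);
    float tR = std::tan(thetaR);
    // With the ball at (x, y): tan(thetaL) = y / x and tan(thetaR) = y / (1 - x).
    // Solving the pair for x and y gives:
    float x = tR / (tL + tR);   // no guard against degenerate angles -- it's a sketch
    float y = x * tL;
    return {x, y};  // multiply by the distance between the lenses for wall units
}
```

Scaling that result into the projector’s screen space is the last step from the list above.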