I’m looking for someone who knows how to breadboard some stuff (I tried, but it seems I’ve forgotten everything from my Electronics GCSE 15 years ago :)) to get an output from my simple Raspberry Pi based stereo-webcam project. I’m very much a software guy, and struggle with hardware at the best of times :)
Input into the Pi is a Minoru stereo webcam (it’s super cheap), which OpenCV (in Python) processes quickly into a simple depth map (about 12-15fps at the moment) to generate a linear array, currently output via a PiFace to the few LEDs on the board.
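To make the “linear array” step concrete, here’s roughly what the reduction looks like: take one row of the depth map and average it into a handful of bins, one per output. This is a sketch of the idea, not the actual repo code, and the row width and bin count are just example values (the PiFace has 8 LED outputs):

```python
# Sketch only: collapse one row of a depth map into n values
# (one per LED/vibrator) by averaging equal-width bins.
def row_to_array(row, n):
    """Average a row of grey values (0-255) into n bins."""
    bin_width = len(row) // n
    return [sum(row[i * bin_width:(i + 1) * bin_width]) // bin_width
            for i in range(n)]

# e.g. a 320-pixel-wide row reduced to the PiFace's 8 LED outputs
middle_row = [min(x, 255) for x in range(320)]
print(row_to_array(middle_row, 8))
```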
Some sample code for the image processor can be found here (it’s built with the Pi and Minoru in mind, but should work with minimal tweaks on any Linux setup with two webcams, though if you’re not using a stereo cam you’ll have to fudge the calibration a bit to get some idea how it works). I have some improvements specific to this usage (outputting the middle row, scaled to the LEDs on the PiFace) on my Pi, but I’m out on the laptop at the moment so can’t pull them down to commit.
What I’d like to be able to do is output some kind of haptic feedback from the GPIO, ‘displaying’ potential obstacles/objects via vibration against the skin (I’m thinking hat-band or wrist-band) to blind people (with potential later development to include OCR for reading signs and/or text). As with the LED output, it only needs to be a one-dimensional array. The depth map is not perfect, but my LED output suggests it’s enough for basic environment awareness (it wouldn’t be useful to a PiBot, but should be useful to a human).
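The per-cell mapping I have in mind is simple: nearer objects show up brighter in the depth map, so a grey value translates more or less directly into vibration strength, with a noise floor below which nothing fires. A sketch (the threshold value is an arbitrary example, to be tuned):

```python
def vibration_strength(grey, threshold=40):
    """Map a depth-map grey value (0-255, nearer = brighter) to a
    vibration strength 0.0-1.0. Values below `threshold` count as
    'nothing near enough to matter' and give no vibration."""
    if grey < threshold:
        return 0.0
    return (grey - threshold) / (255 - threshold)

print(vibration_strength(255))  # closest object -> full strength
print(vibration_strength(0))    # far/empty -> off
```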
My intent is to keep the project open and simple, and not to expand it into something like the Kinect4Blind or AuxDeco, which imho are needlessly complex and impractical. I just want to publish some simple code and a circuit diagram, so blind people can get themselves a Raspberry Pi and a stereo webcam and build their own ‘guide’ tool that runs off a single battery for at least a few hours.
I originally planned to use damped piezo transducers, so I have a bunch of those lying around, but found that generating an alternating tone is trickier than I thought, and I don’t know enough about timers/crystals to get them doing their thing. The backup plan is small cellphone vibration motors connected in parallel, but I can’t generate a reliable enough tone at a low enough voltage to run them off the GPIO, so I think it needs some kind of power handler (not really sure) to run them off the battery while controlling the power they consume based on the depth-map grey value.
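In case it helps make the motor side concrete: the usual trick for varying motor power from a digital pin is PWM (RPi.GPIO even ships a software PWM class), where the duty cycle sets the average power, with the pin switching a transistor per motor rather than driving the motor directly (GPIO pins can’t source motor current). The timing arithmetic behind it is just this (frequency is an example value):

```python
def pwm_times(duty, freq_hz=100):
    """On/off times (seconds) per PWM cycle for a duty cycle 0.0-1.0.
    Average power delivered to the motor scales with the duty cycle."""
    period = 1.0 / freq_hz
    on = period * duty
    return on, period - on

# quarter power at 100Hz: 2.5ms on, 7.5ms off per cycle
on, off = pwm_times(0.25)
print(on, off)
```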
I’m liking the Pi for this project because of its very low power consumption, the ability to run Python on the command line (which improves OpenCV processing speed a little), its general portability, accessibility and price, and the ease of expanding with additional features later. I’m not looking to make a ‘product’, just to get the ball rolling on a free/open hardware/software project.
All input welcome. I’m on twitter @bobbigmac or email me: anything at this domain.com :)
Need a simple module connecting to the GPIO so the Python code on the Raspberry Pi can push a vibration strength to each of a 1D array of (n) vibration motors (ideally 15+ vibrators/piezos).
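For 15+ independent channels the Pi’s own pins run out of hardware PWM, so my guess (an untested assumption on my part, this is exactly the hardware advice I’m after) is an external I2C PWM driver chip, something like the 16-channel, 12-bit PCA9685, with the Python side just writing one duty value per motor each frame. The Python-side shape would be roughly:

```python
def strengths_to_duties(strengths, resolution=4096):
    """Convert 0.0-1.0 vibration strengths to 12-bit duty values,
    one per channel of a 16-channel PWM driver (e.g. a PCA9685 -
    an assumption, not hardware I've actually wired up)."""
    return [min(int(s * resolution), resolution - 1) for s in strengths]

# one value per vibrator, handed to the driver library over I2C
print(strengths_to_duties([0.0, 0.5, 1.0]))
```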