
dynamic duo

Spring 2019 - Flex
Purpose

To independently design & build a robot greeting team for the front lobby to guide guests through the sign-in process and provide a novel first-impression experience.


Flex's Silicon Valley Innovation Center is a small early-stage product consulting group within Flex focused on sensor technology applications and proof-of-concept product prototyping. The group frequently hosts potential clients, who take a guided tour of a demo area showcasing the group's past projects. Clients come in through the lobby and need to sign in on an existing tablet before their host comes out to greet them.

design elements
  • HP Multi Jet Fusion-printed Nylon PA12 housings

  • OpenMV Cam M7 for facial detection & tracking

  • Pixy2 Cam for object recognition

  • Servo-driven head turning & arm waving

  • Robot-to-robot communication via color-changing LED ring 'eyes'

  • Text-to-speech audio files stored on SD card, played through chest-mounted speakers

  • An Arduino Uno operating each robot

design narrative

This project started off very open-ended, as a prompt for selecting an independent co-op project: "a team of robots accomplishing a task." Within the 4-month scope of my co-op, I took the minimum team size to be two robots and saw an opportunity in the lobby sign-in experience to create something fun and personally challenging.


I started off with 10 form factor sketches and surveyed the team to narrow them down to two final form factors, which I fabricated in foam. This was done in parallel with defining design constraints and key functions for the two-robot team. I wrote a storyboard and script for the entire interaction and block-diagrammed the electrical components of both robots. I planned & managed my project timeline in MS Project.

Foam Prototypes in Lobby
Block Diagram for Robot 1

Robot 1, "0.1134," can detect human faces with the OpenMV Cam M7 and track a detected face in one degree of freedom (neck rotation about the vertical axis). It speaks through its chest-mounted speaker and changes the color of its LED eyes to send conversation cues to Robot 2.
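The tracking behavior amounts to a simple proportional controller: the camera reports where the face sits in the frame, and the neck servo is nudged toward it each loop. A minimal sketch of that logic in Python (the frame width, gain, and angle limits here are illustrative assumptions, not the project's actual values):

```python
def next_neck_angle(current_angle, face_center_x, frame_width=320, gain=0.05):
    """Nudge the neck servo toward the detected face.

    face_center_x: x-coordinate of the face bounding-box center (pixels).
    Returns a new servo angle clamped to a 0-180 degree range.
    """
    error = face_center_x - frame_width / 2   # pixels off frame center
    angle = current_angle + gain * error      # proportional correction
    return max(0.0, min(180.0, angle))        # respect servo travel limits

# Face dead ahead: no correction needed.
assert next_neck_angle(90, 160) == 90
# Face to the right of frame center: turn toward it.
assert next_neck_angle(90, 260) == 95
# Corrections never exceed the servo's travel.
assert next_neck_angle(178, 320) == 180
```

Calling this once per detection frame gives smooth tracking without overshooting, since each step is proportional to how far off-center the face is.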


Robot 2, "14," detects the color change of Robot 1's eyes through a side-mounted Pixy2 Cam using colored-object recognition and, in response to each cue, plays conversational audio files through its chest-mounted speaker. It changes its eye color to match Robot 1's and waves its right arm at the approaching guest.
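The cue protocol between the two robots boils down to a small lookup: each eye color Robot 1 can display maps to an audio clip and a matching eye color on Robot 2. A sketch of that dispatch logic (the color names and file names are hypothetical, not the project's actual script):

```python
# Hypothetical cue table: LED color seen by the Pixy2 -> Robot 2's response.
CUE_TABLE = {
    "blue":  ("greeting.wav", "blue"),
    "green": ("signin.wav",   "green"),
    "red":   ("farewell.wav", "red"),
}

def respond_to_cue(detected_color):
    """Return (audio file to play, eye color to mirror) for a detected cue,
    or None if the detected color is not a known cue."""
    return CUE_TABLE.get(detected_color)

assert respond_to_cue("green") == ("signin.wav", "green")
assert respond_to_cue("purple") is None  # unknown colors are ignored
```

Keeping the mapping in one table makes the conversation script easy to extend: adding a new exchange is a new row, not new control flow.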

Subassembly prototyping

I identified the critical-to-function subassemblies of the two final robots and prototyped them first to prove out each function before committing to final design decisions. These were: facial tracking (the OpenMV-Arduino-servo communication path), Pixy2 recognition of LED color changes, and the audio path (SD card reader to Arduino to amplifier to speaker).


For the eye interface, I vacuum-formed & trimmed my own part using a mold that I modeled and subsequently had 3D-printed next door.

Facial Tracking Prototype w/ OpenMV Cam M7
Neck-Turning Interface w/ Servo
Vacuum-Forming Mold
3D-Printed Eye Interface
Curved Eye Interface w/ LED Rings and Vacuum-Formed Part Applied
Final modeling, printing, and assembly

Both robots were CAD modeled entirely in SolidWorks using both solid-modeling and surfacing tools, with careful attention to design for assembly. A few selected views of the final models are shown below, with the housings rendered transparent so you can see the guts.

Full Body Views of Both Robots
Angled Vent Hole for Servo Screw Access
Set Screw to Secure Power Cable Running from Arduino to Back of Robot

Here is a video of the entire interaction from start to finish. The short robot, idle with blue eyes, activates when it sees a human face and then cues the tall robot, which starts waving its arm. You can see the short robot tracking the face as the camera operator moves from side to side.

challenges & acknowledgements

This project was the most challenging I'd worked on to date, and it was not without speed bumps. But it was also not without smart people I could turn to for help with said speed bumps.


I vacuum-formed the face plate for the tall robot ("14") in house using a mold that I modeled and rushed to have 3D-printed next door out of heat-resistant Nylon PA12. However, the surfacing technique I used to derive the mold geometry from the curved part was wrong, and I printed a mold that didn't fit the face of my robot. As a quick fix, I 3D-printed a PLA mold on our team's lab printer, which ended up working fine; but the first print was a waste of resources, and it taught me the importance of close geometry review, and then reviewing again.


For the tall robot, the Pixy2 camera communicated with the Arduino as an SPI slave. However, the SD card reader that held the audio speech files also communicated over the same SPI bus, and through oscilloscope testing with Glenn (Principal EE) I found that even when I told the Pixy2's slave select to go low, it always remained high, interfering with the audio data I wanted to read from the SD card reader. This was an unanticipated electrical challenge in the last two days of my project that I could not fix: the Arduino-to-Pixy2 Cam link was not functioning as intended. I turned to my brilliant co-op friends Anthony and Ian to fix the problem at the software level, and they helped me move the tall robot's speech functions entirely to a Raspberry Pi, restoring the robot's functionality but departing from the idea that each robot could run on only an Arduino Uno.
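On a shared SPI bus, only one slave may drive the MISO line at a time; if one device's chip-select misbehaves while another device is addressed, the data lines are contended and reads come back corrupted, which is exactly the symptom above. A small model of the chip-select discipline the bus requires (pin numbers are illustrative, not the project's wiring):

```python
# Active-low chip-select lines for two SPI slaves (illustrative pin numbers).
PIXY_SS, SD_SS = 9, 10

def select_slave(pins, target):
    """Deselect every slave (drive its SS high), then assert only the target.
    'pins' maps SS pin -> level (True = high/deselected, False = low/selected)."""
    for pin in pins:
        pins[pin] = True       # deselect all slaves first
    pins[target] = False       # then select exactly one
    return pins

def bus_is_safe(pins):
    """At most one slave may be selected, or MISO is contended."""
    return sum(1 for level in pins.values() if not level) <= 1

pins = {PIXY_SS: False, SD_SS: False}   # the failure mode: both selected at once
assert not bus_is_safe(pins)
pins = select_slave(pins, SD_SS)        # SD card gets the bus to itself
assert bus_is_safe(pins) and pins[PIXY_SS] is True
```

The model only captures the invariant, not the hardware: here the Pixy2's select line could not actually be driven as commanded, which is why a software-side workaround (moving audio to a separate controller) was needed.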


Many thanks to Glenn for electrical counsel, Brendan for the vacuum-forming tutorial, Anthony and Ian for lots of programming counsel and the last-minute Raspberry Pi save, my desk neighbor Mukul for mechanical design feedback, and to my supervisor and mentor Charles for guiding the project from the beginning.
