We want to capture all the best moments in our lives, the things that make us happy and smile. But some of the best are so fleeting that by the time you pull your phone out, they’re gone.  Fussing with a camera can also get in the way of properly enjoying a moment. It’s why you hire a wedding photographer to take care of that for you… but you can’t (and probably don’t want to) have one following you around all the time.

This is where intelligent automation comes in. With the help of the Emotiv EEG headset, we can know when you’re having a good experience and automatically capture a photo, with no human intervention required, no distraction, and no delay.

(Photo: the Emotiv EPOC headset.)

What is happiness?
We’re not philosophers, we’re engineers, so we built a system that takes the headset’s EEG signals, then filters and processes them against our metrics to produce a happiness index. Emotiv unfortunately doesn’t offer a built-in “happiness” channel, and other emotions, while potentially giving a more nuanced picture of the experience, are much trickier to distill into a happiness factor and fold into the metric. For our proof of concept, we decided to focus on the more traditional indicators of happiness: facial expressions. Here is our data from two consecutive happy moments.

(Plot: raw and smoothed happiness index over two consecutive happy moments.)

As you can see, this Raw Happiness Index is very jittery, so we ran a simple 5-element boxcar filter over it to smooth it out.  There are more elegant solutions, I’m sure, but this worked well enough for a weekend hackathon.  Improvements are planned for the next iteration. :)
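For the curious, the whole filtering step is only a few lines. Here’s a minimal Python sketch of both pieces: a hypothetical weighting of the headset’s facial-expression metrics into a raw index (the channel names and weights are purely illustrative, not lifted from our code) and the 5-element boxcar average used to smooth it.

    from collections import deque

    def raw_happiness(smile, laugh):
        # Illustrative only: fold facial-expression metrics (assumed to be
        # floats in [0, 1]) into a single raw happiness value.
        return min(1.0, 0.6 * smile + 0.6 * laugh)

    def boxcar(samples, width=5):
        # Smooth a stream of raw index values with a moving average over the
        # last `width` samples -- the 5-element boxcar filter described above.
        window = deque(maxlen=width)
        for s in samples:
            window.append(s)
            yield sum(window) / len(window)

Running the raw stream through boxcar() gives the smoothed index that gets thresholded in the next step.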

When the happiness index rises above the trigger threshold, a signal is sent to the camera to snap a photo. Bouncing is often a concern with real-world triggers, even with filtering, and you can see that the orange line still crosses the 80% trigger threshold multiple times per instance. To solve this, we added a delay and a rearming threshold at 20%. Once the delay has passed and the index has returned to baseline levels, the system is ready to accept another trigger event. Here’s how the components fit together:

(Diagram: how the system components fit together.)
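To make the debouncing concrete, here’s a sketch of that arm/fire/rearm logic. The 80% and 20% thresholds are the ones from the plot above; the length of the cooldown delay is a placeholder, and the class itself is illustrative rather than copied from our code.

    class PhotoTrigger:
        def __init__(self, fire_at=0.80, rearm_at=0.20, cooldown_s=5.0):
            self.fire_at = fire_at        # trigger threshold
            self.rearm_at = rearm_at      # rearming threshold
            self.cooldown_s = cooldown_s  # placeholder delay, not our exact value
            self.armed = True
            self.last_fired = float("-inf")

        def update(self, value, now):
            """Return True exactly once per happy moment.

            Fires when the smoothed index crosses fire_at, then stays
            disarmed until the index drops back below rearm_at *and* the
            cooldown has elapsed, so one long laugh doesn't become a burst
            of near-identical photos."""
            if self.armed and value >= self.fire_at:
                self.armed = False
                self.last_fired = now
                return True
            if (not self.armed
                    and value <= self.rearm_at
                    and now - self.last_fired >= self.cooldown_s):
                self.armed = True
            return False

Each smoothed sample goes to update() along with a timestamp (e.g. time.monotonic()); whenever it returns True, the camera gets the signal to snap.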

Where do the photos go?
For the purposes of the hackathon demo, we configured the system for minimum latency, with all photos automatically uploaded to Twitter.  They can be seen here on Nick’s account, created just for this hackathon: https://twitter.com/wallsrsolid.
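The posting step itself is small. If you were rebuilding it with the tweepy library (a reasonable choice, though not necessarily what our repo uses), it would look roughly like this, with placeholder credentials:

    import tweepy

    def post_photo(path, caption="Caught a good moment"):
        # Placeholder credentials; a real app would read these from config.
        auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
        auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
        api = tweepy.API(auth)

        media = api.media_upload(path)  # upload the saved webcam frame
        api.update_status(status=caption, media_ids=[media.media_id])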

Of course, you don’t want everything in your life automatically posted and shared with the world, so we’ve included a non-posting setting for more personal moments or offline use. Holding photos back also means you can add your own tags and captions before posting them from your approval queue. The system can also be customized to use other services, public or private.

Why a webcam?
The EEG headset we were given doesn’t have Bluetooth and requires a USB connection, so we were tethered to a laptop anyway. That made a webcam the most wearable, most easily controlled, and cheapest peripheral we had lying around, and since it’s cross-platform, getting the system working on both Mac and Windows was much easier.
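Grabbing the frame is equally unglamorous. With a cross-platform library like OpenCV, capturing and saving a single photo looks roughly like this (an illustrative sketch, not necessarily the exact code we shipped):

    import cv2

    def snap_photo(path="moment.jpg", device=0):
        """Grab one frame from the default webcam and write it to disk."""
        cam = cv2.VideoCapture(device)
        ok, frame = cam.read()
        cam.release()
        if not ok:
            raise RuntimeError("Could not read a frame from the webcam")
        cv2.imwrite(path, frame)
        return path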

We knew that getting a good, clear photo can be difficult, especially when laughing, so we went with a chest mount instead of a head mount. This way it won’t interfere with the headset, and the greater stability of the torso allows a slower shutter speed and better performance in low-light conditions. Have a special date in a dimly lit restaurant? We’ve got you covered there too.

Where are we going from here?

  • The Emotiv Insight, slated to be released this summer, is an ideal candidate for pairing with our ubiquitous mobile devices with built-in cameras.  Thus, the next iteration will not require any clunky extra hardware, making the system far more user-friendly.   Phones can be placed in a front shirt pocket, where they often go already.
  • The Muse headband has Bluetooth and doesn’t require saline, but it doesn’t offer the facial-movement preprocessing we needed for a quick hackathon project.  I suspect it would still be usable for this type of project in the future, but at present it would require us to develop our own algorithms to do what Emotiv already does out of the box.
  • We can refine our metric and expand functionality to include more channels from the headset for more nuanced output, like different hashtags that reflect your mood.  We discussed output differentiation during the hackathon but decided to revisit the idea after we take another look at the input options and algorithms that make up the “magic”.
  • We can also build a community website that like-minded people can opt into to share their best moments and see the best moments of others’ lives.  Maybe an interactive “Best Of” that incorporates data from community reactions.
  • We didn’t use a GoPro, but we could integrate one for short videos. Continuous feeds have their place, but we’re focusing on the best moments.

For more information and the source code, check out the repo on GitHub.  And remember, the best camera is the one that’s with you.

Many thanks to all the hackathon sponsors for the shiny prizes, venue, and food.
