Introducing QromaScan & The Metadata Trifecta


Earlier in the week, I talked about the notion of ‘On-Demand & On Command’ as a way to understand how voice will be at the heart of our future technology usage.

I’m sold on the belief that improvements in voice recognition will significantly reduce the relentless tapping, flicking and swiping of our screens. What’s more, advances in natural language recognition will lead the way. Without a doubt, you will soon be able to say something like “Show me all the home runs in last night’s Dodger game,” and it will pop up on a screen of your choice.

What’s less likely to happen is for you to ask some device to show you all the photos of you and your dad in New York five years ago. It isn’t that technology loves the Dodger game more than your photos. It’s just that without metadata (data that describes other data), your phone doesn’t know one photo from the next.

So how do you get to any one of your thousands of photos as precisely as you got to that ball leaving Dodger Stadium? Funny you should ask….

The Trifecta

Let’s do a quick experiment. Close your eyes and think of one of your photos; any photo. Do you see it? What you might be seeing is one or more of three things: a time, a place or some people. Think about it: you can get to almost any one of your thousands of photos if you combine these three elements in your search. Date, location, and people: that’s the photo metadata trifecta.

The problem is that your photos don’t have all three, because it’s tedious work to tag photos. Really tedious. That iPhone in your pocket will give you two out of the three tags when you take a picture. The photo gets embedded with the exact date and location it was taken, but what it’s missing is critical: people. Trying to find a photo without using people as a search term kills the trifecta. It has to be all three, or you’ll get a lot of fly balls mixed in with the home runs.
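For the technically curious, here’s a rough Python sketch (using the Pillow library, not anything from QromaScan itself) of what a phone photo already carries: the date and GPS location live in the EXIF block, while there is simply no standard EXIF field for people.

```python
# A minimal sketch, assuming a JPEG with standard EXIF data, of reading the
# two "free" trifecta tags a phone embeds at capture time.
from PIL import Image, ExifTags

def read_trifecta(path):
    exif = Image.open(path).getexif()

    # Date: tag 0x0132 (DateTime) sits in the main image directory.
    date = exif.get(0x0132)

    # Location: GPS data lives in its own directory, tag 0x8825 (GPSInfo).
    gps_ifd = exif.get_ifd(0x8825)
    location = {ExifTags.GPSTAGS.get(k, k): v for k, v in gps_ifd.items()}

    # People: there is no standard EXIF field for this. The third leg of the
    # trifecta has to be added by hand -- or, with QromaScan, by voice.
    people = None

    return {"date": date, "location": location or None, "people": people}

if __name__ == "__main__":
    print(read_trifecta("IMG_0001.jpg"))  # hypothetical file name
```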

Enter QromaScan

About a couple of years ago, I realized that the same advances in voice recognition that fetch our content on demand could also be used to tag our own data. Seeing a big box of unscanned photos at my friend’s house gave me an idea. What if I could scan and trifecta-tag my photos in one step? And so, QromaScan was born.

This week, I launched QromaScan on Kickstarter. It’s a book-shaped device that opens up into a photo lightbox. It has 12 LED lights inside that illuminate the green scanning surface below. Placing an iPhone running our app on top of the box creates the perfect conditions for scanning photos. You can tell our app the date, location and people that are in the photo, and our voice recognition engine will tag the image as it scans. Presto, trifecta.
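To make the idea concrete, here’s a toy sketch of turning one spoken sentence into all three tags. This is an illustration made up for this post, not our actual voice recognition engine, and the phrase format it expects is just an assumption.

```python
# A toy parser for captions shaped like
# "December 1998 in Brooklyn, New York with Mom and Dad".
import re

def parse_caption(caption):
    date_match = re.match(r"^(.*?)\s+in\s+", caption)
    loc_match = re.search(r"\s+in\s+(.*?)\s+with\s+", caption)
    people_match = re.search(r"\s+with\s+(.*)$", caption)

    people = []
    if people_match:
        # Split "Mom and Dad" or "Mom, Dad and Grandma" into individual names.
        people = [p.strip() for p in re.split(r",| and ", people_match.group(1)) if p.strip()]

    return {
        "date": date_match.group(1) if date_match else None,
        "location": loc_match.group(1) if loc_match else None,
        "people": people,
    }

print(parse_caption("December 1998 in Brooklyn, New York with Mom and Dad"))
# {'date': 'December 1998', 'location': 'Brooklyn, New York', 'people': ['Mom', 'Dad']}
```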


The images we create can be moved to computers, mobile devices or the web with their tags intact and searchable. This means that any photo that was in your box can soon be summoned quickly, either on your phone or in the cloud.
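Here’s a small illustration of why that matters. The data model below is invented for the example, not our file format, but it shows how a precise question becomes a simple filter once all three tags ride along with the image.

```python
# A minimal sketch of trifecta search over tagged photos (hypothetical data).
photos = [
    {"file": "IMG_0001.jpg", "date": "1998-12-24", "location": "Brooklyn, New York", "people": ["Dad", "Me"]},
    {"file": "IMG_0002.jpg", "date": "2010-07-04", "location": "Los Angeles", "people": ["Mom"]},
    {"file": "IMG_0003.jpg", "date": "1998-12-25", "location": "Brooklyn, New York", "people": ["Dad"]},
]

def find(photos, year=None, location=None, person=None):
    """Return photos matching every trifecta term that was supplied."""
    results = []
    for p in photos:
        if year and not p["date"].startswith(str(year)):
            continue
        if location and location.lower() not in p["location"].lower():
            continue
        if person and person not in p["people"]:
            continue
        results.append(p)
    return results

# "Show me the photos of me and my dad in New York in 1998."
for p in find(photos, year=1998, location="New York", person="Dad"):
    print(p["file"])
```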

Back QromaScan

Find out more about QromaScan through our crowdfunding project. If you’ve been eyeing that box of photos and wondering how you were going to move it into the digital world, we might be able to help you as you help us.
