Alex Karpenko hands me a camera and tells me to run. We’re standing on a pier in San Francisco, and the device in Karpenko’s hand is an unreleased prototype of a new, software-driven video camera called Rylo. Karpenko wants me to see what he and co-founder Chris Cunningham show recruits and investors when they ask why they should get involved. Karpenko says I don’t have to worry about where to point the camera, or try to hold it still. Just go. So I grab the camera—a small, oblong 360-degree shooter with a lens on either side—and start running. Cunningham runs too, a few steps ahead of me.
After an embarrassingly tiring 30 yards or so, we stop. I hand Karpenko the camera, which he quickly plugs into his iPhone. He opens the Rylo app, imports the video, and shows it to me. The footage looks fantastic. It’s stable despite my heavy foot-pounding, level even with my total lack of attention, and trained perfectly on Cunningham’s back. Watching me watch the video, Cunningham smiles. “You asked what convinces people to work with us? That’s it. It’s always the video.”
For the last two years, Cunningham and Karpenko have been quietly working on a new kind of camera. The former Instagram employees—Cunningham built software, Karpenko created the Hyperlapse app—saw that every time they made it easier for people to make great stuff, people made more stuff. But while filters, lenses, and basic editing tools can spruce up most photos, video presents a bigger challenge. Even before you get to the content, Karpenko says, you have to get three hard things right: Your video needs to be stable, it needs to be level, and it needs to be looking at the right thing. Rylo's job is to solve all three with software.
When you shoot with the $500 Rylo, you can control almost everything about it after the fact. The two cameras each capture a 195-degree field of view, which Rylo stitches together into a single sphere. But you’re not really meant to use the sphere. Instead, you can pull out the exact frame you want, and share that as a normal video. Or you can pick two spots in the sphere, and have the shot pan from one to the other. You can split the shot, and see your subject and photographer simultaneously. (You can also capture stills, which Rylo horizon-levels automatically.) All you do at first is press record; the artistic decisions come later. And whatever you choose comes out stable, level, and clear.
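To get a sense of what "pull out the exact frame you want" involves, here is a minimal sketch of the reprojection any 360-degree camera has to do: mapping a pixel in a flat output frame back to a point on the captured sphere. This assumes an equirectangular source and a pinhole virtual camera; the function name, parameters, and coordinate conventions are illustrative, and Rylo's actual pipeline is not public.

```python
import math

def frame_to_sphere(u, v, yaw, pitch, fov, width, height):
    """Map an output-frame pixel (u, v) to spherical coordinates
    (longitude, latitude), given a virtual camera orientation
    (yaw, pitch) and horizontal field of view, all in radians."""
    # Focal length in pixels for the requested field of view.
    f = (width / 2) / math.tan(fov / 2)
    x = u - width / 2
    y = v - height / 2
    # Ray direction in camera space (z forward, x right, y down).
    norm = math.sqrt(x * x + y * y + f * f)
    dx, dy, dz = x / norm, y / norm, f / norm
    # Rotate the ray by pitch (around x), then yaw (around y).
    dy, dz = (dy * math.cos(pitch) - dz * math.sin(pitch),
              dy * math.sin(pitch) + dz * math.cos(pitch))
    dx, dz = (dx * math.cos(yaw) + dz * math.sin(yaw),
              -dx * math.sin(yaw) + dz * math.cos(yaw))
    # Convert the direction to longitude/latitude on the sphere,
    # which indexes directly into an equirectangular source image.
    lon = math.atan2(dx, dz)
    lat = math.asin(-dy)
    return lon, lat

# The center pixel of a frame looking straight ahead maps to (0, 0).
lon, lat = frame_to_sphere(320, 240, 0.0, 0.0, math.radians(90), 640, 480)
```

Run once per output pixel (with interpolation in the source image), this turns a stored sphere into an ordinary flat video—which is why the yaw and pitch can be chosen, and changed, long after recording.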
At first, the Rylo team hoped to make all that possible with only software. “We looked for cameras that existed, to see if we could build on top of those,” Cunningham says. They quickly realized they needed more control over the optics of the camera in order to correct for things like lens distortion. Cunningham scoured Alibaba, buying camera parts for a prototype 360-degree rig while Karpenko hacked away at the algorithm. Even early on, with a camera held together by hot glue, the software worked impressively well.
That’s because Rylo’s camera optics aren’t really the point. They’re never really the point anymore, as we enter the era of computational photography. Google’s Pixel 2 gets depth perception out of a single camera because Google trained an algorithm to recognize the human head; Apple took similar steps to enable the Portrait Lighting feature in the new iPhone cameras. The megapixel race is over, replaced by an arms race in computer vision and machine learning.
Rylo’s also focused on a less futuristic but maybe more important problem: sharing. (These are ex-Instagrammers, after all.) Rather than making users wait interminably for videos to transfer wirelessly, or forcing them to manage SD cards and lug around a laptop, Rylo does everything over a short cable that connects to an iPhone (Android support is coming soon). You can edit, render, and share a video in the course of about ten seconds, all on your phone’s screen.
Just before I leave Rylo’s office, which used to be a test kitchen for a fancy San Francisco eatery, Karpenko shows me the best demo yet. It starts as an unremarkable video of the Golden Gate Bridge, shot from a hand held out the sunroof. It’s stable, sure, and level, yes, but it’s just a video of a car driving toward a bridge. But then Karpenko taps on the side of the bridge, and tells the video to track there as the car drives. A few seconds later, he tells it to pan up, so the shot points vertically right as the car passes underneath the bridge. Once the car is through, he has it look back at the center. Suddenly I’m watching something professional, like an outtake of the Full House credits or an establishing shot for San Francisco in a movie. It’s one camera, one take, and a million different possibilities.
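That tap-to-pan trick—pointing the virtual camera at one spot, then sweeping smoothly to another—is, under the hood, an interpolation between two look directions on a sphere. One standard way to do it is spherical linear interpolation (slerp); this is a generic sketch of the technique, not Rylo's actual code, and the function and variable names are illustrative.

```python
import math

def slerp(a, b, t):
    """Spherically interpolate between unit vectors a and b, 0 <= t <= 1.
    Sweeps at a constant angular rate along the great circle joining them."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    theta = math.acos(dot)        # angle between the two look directions
    if theta < 1e-6:              # nearly identical directions: no pan needed
        return a
    s = math.sin(theta)
    wa = math.sin((1 - t) * theta) / s
    wb = math.sin(t * theta) / s
    return tuple(wa * x + wb * y for x, y in zip(a, b))

# Pan from "forward" toward "straight up," as in the under-the-bridge shot.
forward, up = (0.0, 0.0, 1.0), (0.0, 1.0, 0.0)
mid = slerp(forward, up, 0.5)    # halfway: 45 degrees between the two
```

Evaluating this for each frame of the clip, with t running from 0 to 1 (typically through an easing curve), yields exactly the kind of smooth, deliberate pan that makes the footage look professionally operated.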