Hacker News
Sources: Google is buying Lytro for about $40M (techcrunch.com)
289 points by lxm on March 21, 2018 | 103 comments


Lytro had really cool technology but never really found a good use case for consumers. I owned both the 1st and 2nd generation cameras, and as much as I hated the UI of the software and the usability of the hardware, I could clearly see the team tried really hard to make it usable. They tried building an editing app, a community, a community 2.0, and multiple hardware revisions. I wonder why they didn't stop sooner, or at what point they felt this direction was done. They never really listened to the photographer community during the process, or answered support requests sufficiently; they did pretty much the exact opposite of user-first or user-driven product development in my opinion, and that still leaves me confused today. Burning a few million dollars a month, it really took a series of very bad decisions to lead to this financial outcome.


I discovered Lytro at the Maker Faire when they had their "borrow a Lytro" demo. It was a great bit of marketing: take a Lytro around the Faire for an hour (where there are always plenty of interesting subjects) and afterward they put all your shots in an online gallery for you. I got a fun shot or two that day and promptly bought one of their first cameras.

But it wasn't a very good camera, and they completely missed the boat on making it a storytelling tool.

I took my Lytro with me when I walked with Team Torani (the Italian syrup people) in the Columbus Day parade that year, and did not get even one good shot. I wished I'd brought my little point-and-shoot instead, or just about any other camera, because I would have gotten dozens of good shots.

The pitch at the time was "you can put your photos on lytro.com, and people who view your gallery can click anywhere to refocus!" But who wants to do that? If you're looking at a gallery of photos, do you want to go clicking around to bring different things in and out of focus? I don't.

But for a photographer, this had some potential as a storytelling tool. Imagine having a photo that would do its own "pull focus" [1] when you view it, like in the movies. The photographer could first focus on one subject and then have it slowly change focus to something else to reveal something new to the person viewing the photo. Kind of a Ken Burns effect [2] but in the focus plane instead of - or perhaps in addition to - panning and zooming.

I remember one photo I got at the Menlo Park street fair that would have lent itself to this. I had the camera down around knee height, and a dog came up with his nose inches from the lens. In the background was his owner, an attractive young lady. It was a nice shot, and I imagined having it start with the dog's nose in focus and then gradually pulling the focus back to the owner.

Granted, this wasn't much of a story, but it was something. I couldn't even do that. All I could do on lytro.com was set the initial focus, and then leave it to any viewers to go clicking around to fiddle with the focus themselves. But nobody wants to do that.

It seemed like an opportunity lost, because they were so focused (pun intended) on the amazing technology and not on how to use it to tell stories.

[1] https://vimeo.com/233143268

[2] https://en.wikipedia.org/wiki/Ken_Burns_effect


I've got a Lytro, and haven't used it for years, for many of the reasons you articulate.

To my mind their mistake was not having an open source library that could work with their capture format, including basic support for focusing and producing an output image for display.

Pretty much every new photography tech has had a community of amateurs and professionals spring up around it and (hopefully) innovate.

They missed out on that; all we got was the fun-for-a-few-minutes clicking around thing, which gets old pretty fast.


> To my mind their mistake was not having an open source library that could work with their capture format, including basic support for focusing and producing an output image for display.

Oh, now you are making me wistful for what might have been.

If they'd had any kind of library or API, I could have played around with my Ken Burns idea by writing a few lines of code!

I've seen this same mistake over and over again going back to the earliest days of GUI apps: designing the user experience with hardcoded interactions instead of building the UI on top of a public API that enables others to create and experiment with their own interactions.
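
To make that concrete: here's roughly what my Ken Burns-in-the-focus-plane experiment could have looked like with even a minimal Python API. This is just a sketch; the "lightfield" module, its load() function, and the render(focus_depth=...) call are all hypothetical stand-ins for the kind of library Lytro never shipped.

    import numpy as np
    import imageio.v2 as imageio

    import lightfield  # hypothetical: the decoding library that never existed

    capture = lightfield.load("dog_and_owner.lfp")   # hypothetical loader for a raw capture

    near, far = 0.2, 3.0        # metres: start on the dog's nose, end on the owner
    frames = []
    for t in np.linspace(0.0, 1.0, 60):              # a 60-frame, 2-second pull at 30 fps
        depth = near + (far - near) * t ** 2         # ease in so the pull starts slowly
        frames.append(capture.render(focus_depth=depth))  # hypothetical render call

    imageio.mimsave("focus_pull.mp4", frames, fps=30)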


Reading your parent comment and the replies has me kind of dumbfounded at the wasted potential. Did they ever address these ideas directly? Would opening up the format in even the most limited ways (like focus) have exposed the IP?


I think there’s a different (and to me more interesting) potential, which is to expose the layering to allow creative effects in post. For example you could realistically stick fog into any picture (or remove some amount of fog/haze from pictures that had it to begin with), turn the background monochromatic, add motion blur selectively at particular depths, create focus blur with any bokeh you want, take multiple pictures with the same person in different places and composite them together, etc. etc.

With careful and laborious use of Photoshop it’s possible to do most anything today, but with a light-field image there are a lot of creative controls that could be made easier, faster, and more reliable. (Of course, doing this properly would take hiring some signal processing gurus, color scientists, professional photo retouchers, and interface designers, and spending years on semi-open-ended research.)
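
As a sketch of how simple one of these effects becomes once you have per-pixel depth: synthetic fog is a few lines of NumPy. This assumes you've already extracted an RGB image and a depth map from the capture; the arrays below are just stand-ins.

    import numpy as np

    def add_fog(rgb, depth, fog_color=(0.8, 0.82, 0.85), density=0.4):
        """Blend each pixel toward fog_color according to its depth (metres)."""
        transmission = np.exp(-density * depth)[..., None]   # farther pixels keep less of the original
        fog = np.asarray(fog_color, dtype=rgb.dtype)
        return rgb * transmission + fog * (1.0 - transmission)

    # Stand-in data in place of a real light-field extraction:
    rgb = np.random.rand(480, 640, 3)
    depth = np.random.uniform(0.5, 10.0, size=(480, 640))
    foggy = add_fog(rgb, depth)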


There is a lot of interesting stuff in your review. As a more classic photographer myself, I had some different criticisms, but I also found the idea of letting others click around to interact with the focus off-putting.

The tech was really interesting to photographers, and I absolutely think it could have been adopted if built into a reasonable camera design. It's very normal for photographers to carry many cameras, and I knew people who carried Lytros sometimes, but I think part of the problem was that the company had two different users it was designing for. That isn't usually an issue with cameras, but this one was so different and novel that it mattered. I really think if they had just made super nerdy products without trying to satisfy the cell phone crowd, they could have built a niche following among photographers.


The Vimeo link above goes to an astoundingly effective video essay on focus pulling, focus racks, etc. I'm not in any way doing video related work, but it was fascinating and I just wanted to recommend it.


I was of the opinion that Lytro was just a gimmick (we have light fields, what can we do with them?), but your post is really insightful. What you describe sounds like a useful product.


Do you still have the photo? I'd be curious to see it!


Now I'm curious to see it again too! But it seems to be long gone. I just checked the old machine I had the Lytro desktop software installed on, and then I remembered that I deleted the entire Lytro photo directory to free up space a long time ago and forgot to save the 2-3 interesting photos.


> The pitch at the time was "you can put your photos on lytro.com, and people who view your gallery can click anywhere to refocus!" But who wants to do that?

JFK assassination researchers.


1st gen owner here.

They absolutely tried hard, but I feel like they missed out on doing the one thing that might have really let the product take off: Releasing APIs (or at least protocol specs) for interacting with the camera over USB and wifi, and releasing a spec for their file format.

That might have given them a chance to really take off in the DIY and hobbyist market. After unboxing mine, I pretty much immediately wanted to use it to do the sorts of things that Google seems to want to do with it. But without an easy way to dig into it, I lost interest pretty quickly.

Taking off my hacker hat and putting on my consumer hat, it just wasn't going to take off. My cell phone is even more portable, takes higher resolution images, and makes sharing them much easier. Getting to refocus after the fact wasn't that great a trick, because my cell phone camera already does such a good job at autofocusing.

Taking that one off and putting on my hobby photographer hat, same thing. Decades of taking pictures has left me with enough of a sense for where I want to put the focal point and what I want to do with the bokeh that I'd just as soon do it when I'm taking the picture. There was some initial fun of playing with a new toy, but ultimately I realized that doing it after the fact took what has historically been one of my favorite parts of photography and turned it into just another editing chore. That, and, even when I'm not 100% sure what I want to do with the focus at the time I'm taking the photo, my mirrorless's burst mode can accomplish essentially the same thing.

But if I could have strapped that sucker onto a robot and played with using it for 3D scene reconstruction or something. . . that would have been fun.


How well did it work for action?

That's where I find myself always hating autofocus (even on a fancy camera with focus tracking and motion detection and such).


How recent is your experience with "traditional" phase detect/contrast AF? There have been some ridiculously good improvements even in the past 12 months in cameras like the Nikon D850/Sony A7 XXX, to the point where fancy "lightfield"-style setting of focus in post, like the Lytro, is way lower on my personal wishlist than it once was. The eye-detect AF in the Sonys is frankly superb when working with extremely shallow DoF. This stuff typically filters down range to cheaper models really quickly as well.


It's a recent camera but not top of the line (Fuji X-T2).

My problem is more that with animals, it's hard to make sure it's picked up the head vs the shoulders, say, and with shallow DOF that can ruin the shot. But then, I don't have eye detection, so maybe that would do the trick. :)


The v1, at least, wasn't great for action. It has 10 focal planes, but they cover a fairly small range of distances, so it still needed an autofocus to get that range centered on the subject. And that autofocus was pretty slow, even slower than what I'm used to on an SLR.

I'd assume the Illum (which I've never had a chance to play with) would be way better on that front.


But they did stop. They wound down the consumer products years ago and moved into enterprise consulting and selling huge cameras to movie sets.


How about the potential as a robotics camera, say to compete with the Kinect or with laser scanners for self-driving cars, etc..?


My guess is they're buying them to make dash cams for self-driving cars...

That tech in dash-cam video would be priceless.


It's likely for VR; the 'reverse' of the Lytro allows projection to the eye at multiple focal points.


I don't know about "likely", but that is an awesome idea.



Wow, that looks awesome! For something at the demonstration-prototype stage in 2013 I'd have hoped they would have something at least in development by now.


They have a fully working version of it: https://vr.lytro.com/within/hallelujah


Looks awesome, but only an "experience" rather than something you can buy? I guess it would need an ultra-beefy system to run it, and is probably extremely expensive and not supported by much software (yet).


They run it for you - it is a capture service. https://www.lytro.com/immerge#immergeDetails


We're talking about two different things. Hallelujah is interesting because it does light field videography, but what I linked was a light field display.


My top candidate for AR in the longer term is a scanning pico-watt laser which projects the image directly onto your retina.


Why would it be priceless?

With dash-cam video you can just choose a fixed hyperfocal distance and have everything in good focus.

You certainly don't need creative DoF and bokeh...
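
For anyone who wants the numbers: the hyperfocal distance is f^2 / (N * c) + f, and for a dash-cam-style wide-angle lens it comes out to well under a metre, so a fixed focus really does cover everything. The figures below are just illustrative.

    def hyperfocal_mm(focal_length_mm, f_number, circle_of_confusion_mm=0.005):
        """Focus here and everything from half this distance to infinity is acceptably sharp."""
        return focal_length_mm ** 2 / (f_number * circle_of_confusion_mm) + focal_length_mm

    # A typical tiny dash-cam lens: ~3 mm focal length at f/2 on a small sensor.
    print(hyperfocal_mm(3.0, 2.0) / 1000.0, "m")   # ~0.9 m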


Just spit-balling, but could it be used for cheap depth perception, like a LIDAR alternative?


Probably, but so can two normal cameras, like the ones humans have in our heads.


Stereo cameras only give two viewpoints, and it requires a lot of processing power to re-create the 3D scene. A light field gives you much more information, making reconstruction much simpler.

You can also see past occlusions, something not possible with stereo.


How can you see past occlusions?


A stereo camera array can see past occlusions in the sense that some things are occluded to one camera but not to another.

With a larger and more distributed camera array, cameras could be pointed everywhere so that every point in the field has a certain level of coverage.


The GP said you can see past occlusions with a Lytro and not with stereo, and that sounds exactly the other way around to me (as to you).


You can do it with both, actually. Light field data allows you to simulate different viewing angles of the same scene. Depending on the sensor quality you can use this to "see around" objects which occlude the scene from a specific angle (like if you view the image from the center of the field of view). Some limitations, of course, similar to stereoscopic imagery.

Using stereoscopic as an example, you can synthesize a 2d image with the center being between the two sensors. The further apart you place the sensors, up to some limit, the more you can "rotate" the scene. But you lose a lot of data when the viewing angles become too extreme. A light field image can be used for the same effect, but because it's one sensor your ability to rotate the scene will be constrained (as if you had two sensors very close together).

Apparently the examples I saw years ago on Lytro's site aren't available anymore. It was a neat effect with their initial camera, I imagine higher quality cameras could do much more.

A neat thing with light field sensors versus stereoscopic is that you have far more freedom in how you move around the scene. With two sensors you can only move "left and right" or "up and down". With light field data and one sensor you can move in any direction.
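
A toy way to picture this: a decoded light field is a 4D array L[u, v, s, t], where (u, v) index the angular samples behind each microlens and (s, t) are the spatial pixels. Fixing (u, v) gives one sub-aperture view, so stepping through u and v moves the virtual camera in any direction, but only within the width of the lens aperture. A rough sketch with stand-in data:

    import numpy as np

    U, V, S, T = 9, 9, 128, 128            # toy dimensions for a decoded capture
    lf = np.random.rand(U, V, S, T, 3)     # stand-in for real light-field data

    def viewpoint(lf, u, v):
        """Sub-aperture image seen from angular position (u, v)."""
        return lf[u, v]

    left = viewpoint(lf, U // 2, 0)        # virtual camera at the left edge of the aperture
    right = viewpoint(lf, U // 2, V - 1)   # ... and at the right edge
    # The baseline between these two views is only an aperture wide, which is
    # why the rotation you get from a single sensor is so constrained.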


Ah, I see what you mean. You're constrained to viewpoints on the sensor, though, no? So usually a small square.


A few Super Bowls ago Intel demoed a supercomputer which could take in live video streams from hundreds of cameras, run a batch job for about 30 seconds, and then synthesize video from arbitrary viewpoints above the crowd.

I think it worked by making a point cloud that fits the camera observations.

Obstruction is not a big issue in that use case, but if there was obstruction, the system could choose to ignore pixels that were obstructed when constructing the image.


Right. And with the initial cameras, which were neat but not breathtaking in their capabilities, you really only got a small ability to move around the scene. A larger sensor or a pair of these would offer greater capabilities for this application. I believe using an array of sensors was how their video camera worked, but I'm not 100% sure.

(I spent a lot of time reading about light field stuffs back when Lytro was first announced, I was also tangentially connected to a synthetic aperture radar project so it piqued a lot of my curiosity at the time. I haven't kept up with the state-of-the-art since then, though.)


Sounds plausible to me.


Do you really want “cheap” depth perception for a self driving car?


Perhaps read that as "inexpensive" rather than "cheap" if you take cheap to imply low quality.

Light field sensors could greatly aid self-driving vehicles as a significant improvement over standard cameras and stereoscopic configurations for object detection. It's not as good as Lidar in some ways, but it's much less expensive and offers some of the same kinds of data (which means greater portability of algorithms or techniques between the two sensor types). And since cars need to see the same visual cues humans do, we need cameras anyway (for reading signs and road directions, lane detection, etc.), so using a light field sensor could offer a twofer: improved object detection plus the visual detection we already need.


Yes? Why would you not want it to be cheap? Current self driving LIDAR units costing $75,000 seems like a pretty major barrier to mass producing the things.


Yeah dash cams are super wide angle. I don't see the point since everything will always be in focus.


The reason I was initially thinking of was that a dash cam with multiple focal distances could provide a way to see a dash video from multiple perspectives, basically making a single dash cam "seem" like many individual cams, where you can get focal clarity at distance from more than one perspective. Or else my understanding of the Lytro tech is wrong.


A light field allows you to generate dense 3D point clouds, even from a static capture.

Segmentation of foreground/background becomes much simpler.
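
Sketch of why the segmentation part gets so much easier: with a per-pixel depth map it's literally one threshold, versus training a model to guess foreground from a flat RGB frame. The arrays below are stand-ins for data derived from a light-field capture.

    import numpy as np

    depth = np.random.uniform(0.5, 30.0, size=(480, 640))   # metres, e.g. derived from a light field
    foreground = depth < 5.0                                 # boolean mask: everything closer than 5 m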


I think it's more that the tech is for the Pixel 2 Super XL edition or Pixel 3. That would let you build a 3D photo and post it to Google Photos. Or maybe they could partner up with a 3D printing company, the way you can print photo albums.


If you have a VR headset, I highly suggest trying out the "Welcome to Light Fields" demo that Google released on Steam last week (it's free): http://store.steampowered.com/app/771310/Welcome_to_Light_Fi...

It's absolutely incredible and seems like really promising technology. Even with the relatively low resolution of current headsets, the image quality is amazing and you can see some slight lighting changes as you move your head, which makes it feel far more immersive than a static photo. There should be a lot of potential to use this for extremely realistic virtual tours.

Note: I had some slight issues with it "hitching" at the start of each scene in the guided tour (which made me feel a bit of motion-sickness), but that was probably because I didn't install it on an SSD (or my PC is just not quite fast enough overall). There weren't any issues when viewing the scenes individually afterwards.


As someone who has been working with stereo 180 videos in VR for the past few months, this technology makes such a huge difference. While stereo photography looks broken from most angles, Google's lightfields demo just felt natural. Honestly, one of the best VR experiences I've tried in a while.


Huh, that'd certainly explain why Google wanted to pay $40MM for it. Thanks!


Well that was the logical outcome. Their technology was interesting but it clearly wasn't viable as a stand-alone compact camera. Everyone expected it would eventually be integrated into smartphones.


It could be smartphones; I also think the technology would be interesting for Street View.


I thought they had switched from consumer cameras to large 3D light field cameras for shooting movies.


Or, as is often said in the Valley: "Is it a product, or a feature?"


They tried to build the whole ecosystem around light field photography themselves. To do that, they would need to raise Magic Leap levels of capital, which they decided not to do.


This should not be viewed as a failure.

This is the nature of startups: even if you do every single thing right, work hard, manage the cash flow, innovate, etc., there is a more than 90% chance you won't get that billion-dollar-plus exit.

The only unique thing here is that Lytro got a lot of hype and raised a lot of money, so people naturally expected a unicorn.

What I am saying is these guys don't necessarily have anything to be ashamed of. In the worst case they gained a lot of experience that they can apply to their next idea.

That said, I am curious: Will the founders get ANY money out of it? A comment claimed that they raised $200 million at $360 million valuation, so theoretically they may end up with nothing.


My estimate is that the founders took money off the table during those rounds. It's doubtful they'll see anything from the sale. I wouldn't worry too much about them; I'm sure they're sitting fine.


Can someone explain to me the reasoning behind the deal? It looks to me more like some kind of Silicon Valley brotherhood deal. Sure, $40M is pocket change to Google, but it is a failed company and they could duplicate the tech for far less.


Maybe it is the 59 patents...


The fact that many of the employees seem to have been given severance would imply that to me.


Are they not even keeping the computational photography engineers? I can see jettisoning the rest, but I presumed this might be an acquihire of some great minds in computational photography to beef up the Pixel Camera team?


I tried to click around on Lytro's website, but it left me pretty baffled. What the f... is it?


Lytro makes light field cameras. A single sensor can be used to capture enough information to do a lot of computational imagery work like the somewhat gimmicky example of changing focus. But you also have enough data to do things like accurately apply filters to parts of an image based on depth information. A good simulation of viewing the image through caustics or other distortion effects. And, to me the most interesting, reconstruct 3d information from a single sensor.

Downsides: a 16-megapixel sensor only got something like 4 megapixels of image resolution because of how much data has to be captured for the light field. So consumer-grade sensors resulted in (by contemporary standards) very low resolution images. And all those neat effects weren't really available, except maybe to their commercial partners (later products and services).

https://en.wikipedia.org/wiki/Light-field_camera
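
If it helps, the "focus after the fact" trick is conceptually just shift-and-sum over the sub-aperture views: shift each view in proportion to its angular offset and average, where the shift amount picks the synthetic focal plane. A rough NumPy sketch (real decoders also handle calibration, demosaicing, and sub-pixel shifts):

    import numpy as np

    def refocus(lf, alpha):
        """lf: [U, V, S, T, 3] light field; alpha picks the synthetic focal plane."""
        U, V = lf.shape[:2]
        out = np.zeros(lf.shape[2:], dtype=np.float64)
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))   # shift grows with the view's
                dv = int(round(alpha * (v - V // 2)))   # angular offset from centre
                out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
        return out / (U * V)

    lf = np.random.rand(9, 9, 128, 128, 3)              # stand-in data
    near_focus = refocus(lf, alpha=2.0)
    far_focus = refocus(lf, alpha=-1.0)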


Thanks!


A camera that allows you to 'focus after the fact'. Actually pretty sweet tech; though it never really caught on.


I thought the future of light field cameras would be to replace bulky DSLRs and their lenses.

If you can do that, then autofocus systems go away completely. What you're left with is the shutter and zoom, and the computer's ability to make beautiful bokeh from any image.


The problem is the sensor cost. You are sacrificing pixels for information/depth, meaning you need more pixels for the same result. Most people aren't willing to go back to a 2MP camera when much better image quality cameras are on the market.


Maybe not the home user, but people are willing to pay lots of money for photography gear if it gets the results they want.

Broncolor, Hasselblad, Leica and Zeiss, all come to mind as specialty companies that make money from selling expensive gear.


Awesome. After playing with Google's GoPro light field images, it's really hard to go back and enjoy regular 360 photos. The quality and 6DoF sense of presence were amazing. Hopefully the addition of Lytro's tech makes it even better.


These are the mentioned light field images: https://www.blog.google/products/google-vr/experimenting-lig...

They can only be experienced properly with a positionally tracked VR headset, but they are extremely cool if you have one.


HTC Vive, Oculus Rift, and Windows Mixed Reality headsets... and it seems like Windows Mixed Reality headsets are available for around $200 on Amazon. Wow, we live in the future. I guess the main cost here is the PC attached to them... probably costs about $500 minimum if I'd have to guess for something with the specs required.

[edit] wow, you can get an Asus pre-built desktop for $440, damn! i5-7400, 12gb DDR4, 2TB HDD, graphics good enough for VR... wow. Anyone know if it's possible to build your own for a competitive price, or if that'd drive up the cost? I remember the days when it was cheaper to build your own...


What GPU is in that prebuilt? I think you'd probably want a 1070, especially for Windows MR headsets.

Right now, with GPU and memory prices inflated due to crypto mining, prebuilts are very competitively priced. If you want to build your own, definitely go with a motherboard/GPU bundle, otherwise you'll be paying $$$ for a GPU.


Light field cameras have always fascinated me because they are the literal opposite of evolutionary biological inspiration (this is not an insult, by the way). The first biological 'eyes' were probably light-sensitive spots on skin with no lenses or shape, which then became cavities for directional sensing, and then lenses and retinas developed to enable image recognition.


The compound eyes of some insects are very similar to how a lightfield camera is optically structured.


This tells me the tech isn't too useful, judging from the price, the buyer, and the amount of time it took to get a sale at that price.

That's beside the fact that it does this selective-focus thing, which should be automatic anyway or selected during the shot.


Along with selective focus (which struck me as gimmicky) you do get 3d images from a single sensor with better detail than a stereoscopic setup (2x standard cameras). This is useful for computational work as you can get better spatial data from one source or if you add multiple sensors you can get even more improvement.

I always figured this would be used in a fashion similar to synthetic aperture radar, but using image data, and applied to autonomous vehicles and similar applications. It's better than edge detection from a standard single sensor. And you have more to train your algorithms on since they can recognize what is an actual object more easily versus what is an image of an object.


For AR & VR this makes perfect sense. In VR space you do not want to be stuck with a single predetermined focus point, but have the freedom to move around the light field. Lytro's patents alone are probably a steal at $40M.


Hasn't Photoshop had this tech for a couple years now? I remember seeing on HN a video of Adobe demonstrating it at a conference.


Lytro captures true depth information. Sure, you can blur things in Photoshop, but it simply doesn't look like the real, depth-of-field-based blur from real lenses, or the calculated one from light-field cameras. Re-focusing in Photoshop is just a gimmick to fake the real thing (and if you're experienced in photography you can always tell).


Which tech are you referring to?


I'll be waiting for you OASIS, but not so soon.


When I saw Google's spherical-photo capture bot -- basically an array of Lytro cameras -- I figured this would happen soon.


Can you link me to this? I only know that Google is using GoPros to do light field capture: https://petapixel.com/2018/03/15/google-built-arc-16-gopros-...


No, they were regular GoPros frame-synced and spun around to act as even more cameras.

You can, in fact, do this with a single regular camera (with a wide-field lens) if you're patient enough and have a multi-axis motorized (and encoder-equipped) mount.


I remember Ben Horowitz singing praises for this company. Must be a good pay day for A16Z.


From the article, the sale price is estimated at $40 million, whilst the company had raised $200 million, most recently at a $360 million valuation.

Most likely not a good pay day, definitely a significant haircut for some of the investors to say the least.


Agreed, and given the standard "liquidity preference", the founders are likely to net $0, except for the Google bonuses for the engineers (the Jack Barker-like Rosenthal is not likely to stick around).

It's sad, really. It's one of the really innovative companies, but it's not easy to sell tech ahead of its time. Yet another incident that will encourage picking copycats over real innovation.

I guess Google is no longer as generous with its acquisitions.


> It's sad, really. It's one of the really innovative companies, but it's not easy to sell tech ahead of its time.

This glorifies failure quite a bit. One major objective of tech companies is to produce products that customers value and like. Lytro failed to do this. Just because the underlying science is complicated doesn't make the tech "ahead of its time"; maybe the tech, as implemented, just isn't what people want, now or in the future!

Having complicated science underlying a product certainly doesn't excuse a company from producing products that people want.


> Having complicated science underlying a product certainly doesn't excuse a company from producing products that people want.

That's not the biggest obstacle, actually. The biggest obstacle is that the sales strategy has no case studies to work with.

What market does the new tech appeal to? Should it be B2C or B2B? (In their case, they tried both IIRC.)

You may hold the collective entity responsible, if you wish, but in this case, they required both sharp and inventive techies (which they had), and a business leader with a vision (which they didn't have). It's not easy to put together, especially if the investors make the call for the CEO's replacement.

An analogy from a different area: Siri could have been another dead curio if Steve Jobs hadn't decided to make it a centerpiece of the new iPhone.


>Lytro failed to do this. Just because the underlying science is complicated doesn't make the tech "ahead of its time" due to the fact that maybe the tech, as implemented, just isn't what people want, now or in the future!

Are we speaking abstractly? About what theoretically could have been if we didn't know anything about the tech?

Because otherwise this specific tech is very much an "ahead of its time" technology, whether consumers adopted it or not.

Besides, consumer adoption is a BS test for a technology being ahead of its time.

Failure in the market can simply be because the implementation was not good, or the marketing wasn't, or the support was lacking, or the price too high, or 200 other reasons, that don't depend on the technology not being ahead of its time.


The high price was primarily due to them trying to offset the high R&D cost versus the actual hardware cost. Google might be able to sell it cheaper if they can get ROI from other applications.

Update: hardware cost about $400; interesting analysis of their strategy here: http://hc25.web.rice.edu/files/projects/LytroMarketingPlan.p...


Google never really was generous with its acquisitions: many of its most successful ones like Blogger, KeyHole (Earth), Where2 (Maps), Writely (Docs), Zenter (Presentations), Urchin (Analytics) and Android were for tiny dollar amounts, sub-$100M. And the really big ones - YouTube and DoubleClick, and to a lesser extent Metaweb - turned out to be worth way more than Google paid for them. The only big duds I can think of where Google paid a "generous" amount for something that didn't really make them a whole lot were Motorola, Andy Rubin's robot collection (Boston Dynamics etc.), and Skybox, and in all cases they managed to sell them off for decent amounts.


Google got to keep Motorola Mobility's patent portfolio and license them back to Lenovo I believe.


And I think they dodged a bullet by failing to acquire Groupon (for $6bn).


Yes, it's a shame for the founders in this situation, particularly if they wished to continue whilst their investors wished to exit and recoup any losses they may have had.

I'm reminded every day that the adoption curve can be brutal for those at the sharp end of the stick.


Good points. Light field tech will be common one day, but that's probably at least 5 years from now.


Can you explain what you mean by "liquidity preference"? My understanding is that you get money in proportion to your vested shares at the time of sale.


Apologies in advance for the oversimplification. It depends on the actual contract you signed but in general it goes like this:

VC puts in 1 dollar, you sell at 1.2, the VC will take the first dollar and MAY take the next 20 cents. That could mean a liquidity preference of 1.2x. If you sold at 1.4, it COULD mean the VC takes 1.2 and you split the next 20 cents according to your share split with the VC.

It may be easier to think of a VC as a bank that doesn't ask you to pay back a loan every month, BUT if a liquidity event occurs (i.e. someone buys your company), they absolutely want all their money back first (i.e. they are "senior" to the equity holders, meaning you, the way debt is senior to equity) before you get to dip your hands in.


Not all shares are the same. Corporations have different classes of stock: as a simple example there may be common stock and preferred stock.

In a "liquidity event", and especially in a "down round" where the company is bought at a lower price per share than previous investors paid, those shares are not treated the same. Preferred shares may get something from the new investment round (perhaps less than they invested), while common shares may have their value wiped out to zero.

(Source: I have been a "commoner" in a company that was bought in a down round where my stock was zeroed but the preferred shares were still worth something.)



Search for "liquidation preference" on your favorite search engine


Sometimes you feel like asking trusted strangers online for a synthesized summary as it pertains to the topic at hand.



