I believe the OP means time and dedication in their investments, not simply money. The type of things that startups grind through to become successful. Appropriate amounts of money do need to be injected - but at the right times, not necessarily from the beginning.
Bigger companies have a bad habit of comparing the early results of experimental internal businesses against the ROI of their mainline business, and cutting the cord before the product/market was actually ready to be flooded with cash to scale up.
It's funny how the business classic "Innovator's Dilemma" was written largely about companies like Intel who exist in traditional technology markets with predictable evolutionary modes... yet they still suffer from the same lack of 'intrapreneurship' mentality by treating internal startups with immature markets as if they were mature product lines.
They should probably either stick to acquisitions of real startups in the growth stage, or actually stick it out for the long run with the markets they invest in, rather than demanding high-growth opportunities within a short timeframe or nothing.
Burning early adopters is never a wise choice if that market turns out to have legs.
Very interesting that you should mention "Innovator's Dilemma". I remember when Andy Grove made that required reading for all of the executive staff, and divisions had to incorporate the thought process into their plans.
You are right that this is showing classic signs of not giving innovative business units enough runway to find their product-market fit. That's really hard to do given Intel's culture.
Craig Barrett did a lot of damage. One Intel mid-level manager described him as a pinata: "Whoever hits him the hardest gets the most candy." So managers reported synthetic disasters in order to get more funding. Paul Otellini was great, I have huge respect for him, but his tenure was too short-lived. Otellini understood how to build a market.
That's an interesting backstory; I wasn't aware that Andy Grove made it required reading. It really should be for the management of every tech company, regardless of its age.
The farther a company gets from its early roots, the harder it gets to recreate the 'early days' — unless they get a shake-up in management. But the typical people who are good at startup culture would get killed pretty quickly in bigco corporate culture.
Exactly, but how many of those were good decisions?
Altera -- Not actually a bad decision, but it seems to have done nothing but bolster Xilinx's position.
Mobileye -- A great way to burn ~$15 billion. The fact that Intel felt it had to purchase a company with really no defensible advantage other than its maps highlights the huge challenges facing Intel if they're unable to cultivate an internal SDC team.
Movidius -- Legitimately, I have no idea why they paid so much for a glorified DSP. But then again, so are most "Deep Learning Processors" right now.
It's really hard to compare Movidius with anything else because there are so few published benchmarks.
A few notes though:
Movidius is inference only. That might be useful if you have a specific requirement that needs low power, high speed inference and also somehow has to run alongside an x86 CPU.
If you want high speed, low power inference and don't need x86, then the NVidia Jetson wipes the floor with it.
If you want low speed (~2 inferences/second), low power, then the RPi is a good option.
Thank you for the great answer Nick, that's the best summary I could've hoped for!
So in summary:
⇒ low power, high speed inference on x86 CPU 🡺 Intel Movidius
⇒ low power, high speed inference non x86 🡺 NV Jetson
⇒ low power, low speed inference 🡺 RPi and similar
⇒ high power, high speed inference 🡺 GPU(s)
NVidia Jetson is the obvious winner if there's no viable alternative; however, Jetson is quite expensive, so I can't go that route. I want to speed up both training and inference. eGPUs are more or less affordable, but low power training and inference at medium or low cost would strike me as a clear winner.
Sorry for the late answer. Even though I didn't find an alternative DSP offering advantages similar to Movidius's, I'm still grateful for your insightful comment.
The Movidius Stick supports the Raspberry Pi now as a deployment (not development) option, so you can get low power and high speed inference on a single RPi. I have one of these on my desk and they're neat but I haven't done much with it yet. I did grab one because I had heard it would support RPis, though.
Granted, it's 2x the price of an RPi 3. All together that's about $100 USD. And NVidia just announced the TX1 SE Devkit at $200 USD. I have a TX2, but the TX1 will definitely do better, at a higher power/size profile.
The MCS only supports Caffe as well, while the TX1/TX2 support a wider array of DL frameworks (as well as FP16, since they're Tegras).
Also, what's your use case? That matters significantly, because IIRC the Movidius chip only supports inference, so if you want to train models too, then it's a non-starter.