Incidentally, I was reading an old HN thread today on Kdb+ for time series, where people who use that commercial DB said it's amazingly fast (I only ever played around with it, nothing speedy). People in that discussion said the ~300k writes/s quoted for Kdb+ is nothing compared to the millions of writes/s IoT solutions need. So I'm wondering what kind of performance you can get from a time series database on a single machine. I understand that with clustering, systems like Cassandra can do 100 million writes/s and more (someone mentioned billions, but not which DB), but benchmarks that just throw hardware at the problem only make sense to me if we also know single-node performance, single-node multicore performance, and then the scaling graph as nodes are added.
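To make "single-node writes/s" concrete, here's a minimal, hypothetical micro-benchmark sketch (my own, not any DB's actual API or method): it just appends 16-byte points (8-byte nanosecond timestamp + 8-byte value) to an in-memory buffer on one core. That's the ceiling for the trivial part of ingest only; a real TSDB also pays for indexing, durability (WAL/fsync), and compression per write.

    package main

    // Hypothetical single-core ingest micro-benchmark (not any real
    // DB's API): append n points of (8-byte ns timestamp + 8-byte
    // value) to a preallocated buffer and report raw throughput.

    import (
        "encoding/binary"
        "fmt"
        "time"
    )

    func main() {
        const n = 10_000_000 // 10M points ~= 160 MB
        buf := make([]byte, 0, n*16)
        ts := uint64(time.Now().UnixNano())

        start := time.Now()
        for i := 0; i < n; i++ {
            var rec [16]byte
            binary.LittleEndian.PutUint64(rec[0:8], ts+uint64(i)) // timestamp
            binary.LittleEndian.PutUint64(rec[8:16], uint64(i))   // stand-in value
            buf = append(buf, rec[:]...)
        }
        elapsed := time.Since(start)

        fmt.Printf("%d points in %v: %.1fM points/s (%.0f MB/s)\n",
            n, elapsed,
            float64(n)/elapsed.Seconds()/1e6,
            float64(n)*16/elapsed.Seconds()/1e6)
    }

A loop like this will typically report tens of millions of points per second on one core, which is exactly why headline writes/s figures are so hard to compare: the interesting number is how much of that ceiling survives once the engine does real work per point.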
I'm just interested in this kind of thing; I used to work with time series at a German telco (DB2 on heavy IBM metal, for massive amounts of money) and on financial work for a startup. I wonder what the state of the art here is.
Another comment here mentioned a recent FAST paper on BtrDB, which can do ~16 million writes/s (nanosecond timestamps, 8-byte values) on a single node, with near-linear speedup when clustering: https://blog.acolyer.org/2016/05/04/btrdb-optimizing-storage...
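As a sanity check on that figure (my arithmetic, not from the paper): 16M points/s at 16 bytes/point is only ~256 MB/s of raw data, comfortably within the sequential write bandwidth of a single modern SSD, so the hard part at that rate is maintaining the tree/index and queryability, not moving the bytes.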