I’m just catching up on my development RSS feeds and ran across yet another insightful technical article by Mike Ash. The timing amuses me, as I just gave a presentation at the Rendezvous de la Virtualisation 2013, hosted by Infralys (soon to be integrated into Ackacia!), discussing the impact of SSD and flash storage arriving in the storage stack. Here are the slides for those interested.
In my presentation the coolest, most far-out SSD storage technology is Diablo’s Memory Channel Storage, which puts NAND chips onto modules that plug into the RDIMM slots of your server. The goal is consistent (and extremely low) latency between the CPU and the storage: no hopping across the PCI bus and traversing various other components and protocols to reach your data; it sits right there on the memory bus.
And here is Mike explaining, from the developer’s perspective, “Why Registers Are Fast and RAM Is Slow”.
It’s always good to be reminded that every layer of the stack can be optimized, and that latency is a matter of perspective. The multi-millisecond wait to fetch data from a physical device across multiple networks is an eternity for a modern CPU.
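A quick back-of-the-envelope calculation makes that point concrete. The latency figures below are commonly cited ballpark numbers (assumptions for illustration, not measurements from any particular machine); the sketch rescales them so that one CPU cycle feels like one second:

```python
# Rough, commonly cited latency figures -- assumed for illustration,
# not measured on any specific hardware.
CYCLE_NS = 0.3  # one cycle of a ~3 GHz CPU, in nanoseconds

LATENCIES_NS = {
    "register access": 0.3,
    "L1 cache hit": 1,
    "main memory (RAM)": 100,
    "SSD random read": 100_000,
    "networked storage fetch": 5_000_000,  # ~5 ms across the wire
}

def human_scale_days(latency_ns, cycle_ns=CYCLE_NS):
    """If one CPU cycle felt like one second, how many days would this wait feel like?"""
    perceived_seconds = latency_ns / cycle_ns
    return perceived_seconds / 86_400  # seconds per day

for name, ns in LATENCIES_NS.items():
    print(f"{name:>24}: feels like {human_scale_days(ns) * 86_400:>12.1f} seconds"
          f" (~{human_scale_days(ns):.1f} days)")
```

At that scale a RAM fetch is a coffee break of a few minutes, and a 5 ms trip to networked storage stretches past half a year of perceived time.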
Thought experiment of the day: what if we configured our servers to behave like resource-constrained devices, disabling swap and killing processes that stepped out of bounds? We’ve been taking the easy route, throwing memory and hardware at problems that might have software-optimization answers…
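One way to try this experiment on a single process, sketched here under assumed numbers (the 1 GiB cap and 2 GiB allocation are arbitrary illustrative values): cap a child’s address space with `RLIMIT_AS` so an oversized allocation fails outright instead of spilling into swap.

```python
import resource  # noqa: F401  (used inside the child)
import subprocess
import sys

# Hypothetical cap: 1 GiB of address space, mimicking a constrained device.
LIMIT_BYTES = 1 << 30

# The child caps itself, then tries to allocate 2 GiB -- well over the cap.
child_code = """
import resource
resource.setrlimit(resource.RLIMIT_AS, ({limit}, {limit}))
data = bytearray(2 * 1024 * 1024 * 1024)
print("allocated")
""".format(limit=LIMIT_BYTES)

proc = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True,
)

# Under the cap, the allocation raises MemoryError and the child dies
# with a nonzero exit code instead of quietly pushing the box into swap.
allocation_succeeded = "allocated" in proc.stdout
print("child exit code:", proc.returncode)
print("allocation succeeded:", allocation_succeeded)
```

On a whole-server scale the same idea is what cgroup memory limits and a disabled swap partition give you: processes that overstep get a hard failure immediately, not a slow death by paging.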