Last year Dell acquired EMC. EMC was rebranded to “Dell EMC” and moved under the Dell Technologies umbrella. Over the last couple of months we could see the websites, mailing lists and Twitter accounts change from EMC to Dell EMC. So it was inevitable that the EMC Elect program was going to change as well. Meanwhile, Dell also had an influencer program of its own: the Dell Tech Centre Rockstars. Going forward, these two programs will merge and, as of today, be known as the Dell EMC Elect!
I’m excited to announce I’ll be attending Storage Field Day 12! During the event we’ll talk storage technology for three days, starting on March 8th. There’s an impressive line-up of companies and delegates gathering in Silicon Valley and of course we’ll live stream the presentations for the folks back home, who can pitch in over Twitter. Did I mention the line-up of companies already? Oh boy!
Last week we migrated several Oracle databases to a new DBaaS platform. The company I’m working for is in the midst of a datacenter migration to a new cloud provider. Since the Oracle databases were located on old and very expensive Oracle machines, we looked for opportunities to optimize and reduce costs. After much debate, we decided to move all databases to a shared Oracle Exadata platform. Much faster, and much cheaper: the hardware is more expensive, but you earn it back through lower licensing costs (fewer sockets in use).
All the Oracle database migrations went pretty well: stop app, export database, transfer to new DC, import & start database. The app teams updated their connection strings and tested the apps. Pretty painless! However, there were also some scripts working alongside the databases, mainly for data loads. Server names changed, and some scripts had to be moved from the old database servers to the application servers.
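Those hostname changes in the data-load scripts are easy to automate rather than fix by hand. Here’s a minimal sketch in Python; the hostnames and the `.sh` file pattern are hypothetical examples, not the actual servers from the migration:

```python
import re
from pathlib import Path

# Hypothetical mapping of old database server names to their new-DC equivalents.
HOST_MAP = {
    "olddb01.example.com": "newdb01.example.com",
    "olddb02.example.com": "newdb02.example.com",
}

def update_hostnames(script_text: str, host_map: dict) -> str:
    """Replace every old server name in a script with its new counterpart."""
    for old, new in host_map.items():
        # Word boundaries avoid rewriting names that merely contain an old host
        # as a substring (e.g. 'olddb01-backup').
        script_text = re.sub(rf"\b{re.escape(old)}\b", new, script_text)
    return script_text

def migrate_scripts(script_dir: Path, host_map: dict) -> None:
    """Rewrite all shell scripts in a directory in place."""
    for script in script_dir.glob("*.sh"):
        script.write_text(update_hostnames(script.read_text(), host_map))
```

Run it once against a copy of the script directory, diff the result, and only then move the updated scripts over to the application servers.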
There’s no denying that off-premises cloud services are growing. Just look at the year-to-year growth of big public cloud providers. There’s big potential if you focus on two aspects of cloud. The first is speeding up access to data that is potentially not located in the same city or even geographic area. The second is supporting new protocols and storage methodologies that are suited for cloud native applications. One player in this area of IT is Avere, which aims to connect on-premises storage and compute to their siblings in the cloud.
When the news about the Dell and EMC merger became public last year, I was somewhat skeptical. I’ve had some really sketchy experiences with Dell servers and storage products, so it didn’t feel like a step forward. At the same time there was the organizational and support aspect. Mergers usually result in confusion: for the sales processes, and for those of us in the field having to glue all the products together. Not something I was looking forward to.
Lo and behold: I got an invite from the EMC Elect program to attend DellEMCWorld in Austin, Texas! This was my chance to fly over there and experience the merger announcements firsthand, plus ask questions. So I did! And I have to say: I was impressed.
I can’t recall the last storage system installation that didn’t have some amount of solid state drives in its configuration. And for good reason: we’re rapidly getting used to the performance benefits of SSD technology. Faster applications usually result in real business value. The doctor treating patients can get his paperwork done faster and thus has time for more patients in a day. Or the batch job processing customer mailing lists or CGI renderings completes sooner, giving you a faster time to market.
To reduce application wait times even further, solid state drives need to achieve even lower latencies. Just reducing the media latency won’t cut it anymore: the software component in the chain needs to catch up. Intel is doing just that with the Storage Performance Development Kit (SPDK).
A long, long time ago when public cloud IaaS (Infrastructure as a Service) was still relatively new I was doing some contract work for a big international company. One of the tasks for the department was an IaaS proof of concept: does offloading servers to the public cloud result in cost savings? Long story short: the PoC was halted after several months because the AWS IaaS offering was prohibitively expensive and inflexible. We had enough systems and data to keep a team of qualified engineers busy and to get good purchasing discounts. An additional problem was the very rigid service catalog: there weren’t that many flavors of machines available back then and custom machines were either not possible or even more expensive.
Fast forward to 2016 and I’m looking into public/private/on-premises IaaS again for a different company. Prices have dropped, but there are still some things to keep in mind when considering a move to the public cloud IaaS models.
Several years ago cloud computing became the hottest thing in IT. You couldn’t read an article or product description without being smothered with ‘cloud’ every other sentence. In fact, many organizations renamed their product and service offerings to include cloud. If you didn’t have ‘cloud products and services’, you were going to lose business.
I’ve noticed that there’s still a surprising number of IT people who don’t fully comprehend what cloud computing means. Is it a location or a concept? Are there different types and layers of cloud computing? Let’s see if we can explain a few of the more common options…
A Brocade firmware upgrade once in a while is highly recommended: new releases usually squash bugs and add new features, which helps with a stable and efficient SAN infrastructure. The upgrade process itself is relatively straightforward if you keep in mind that you can only non-disruptively upgrade one major release at a time. With that knowledge you only need an FTP server (like FileZilla), an SSH client (like PuTTY), the upgrade packages and some patience.
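Since you can only hop one release at a time non-disruptively, it pays to plan the intermediate steps before you start downloading packages. A minimal sketch of that planning step (the release list below is purely illustrative; always check Brocade’s release notes for the real supported upgrade matrix):

```python
# Illustrative, ordered list of release trains -- NOT an official FOS support
# matrix; consult the release notes for the actual supported paths.
RELEASES = ["7.2", "7.3", "7.4", "8.0", "8.1", "8.2"]

def upgrade_path(current: str, target: str) -> list:
    """Return the releases to install, in order, one non-disruptive hop each."""
    i, j = RELEASES.index(current), RELEASES.index(target)
    if j < i:
        raise ValueError("downgrades follow a different (disruptive) procedure")
    # Everything strictly after the current release, up to and including target.
    return RELEASES[i + 1 : j + 1]
```

For example, `upgrade_path("7.3", "8.1")` yields `["7.4", "8.0", "8.1"]`: three separate upgrade rounds, each with its own package on the FTP server and its own round of patience.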
According to Tintri, the rise of server virtualization broke the traditional storage system. Initially we had relatively simple environments where one server talks to a number of LUNs on a storage system. Sometimes we’d have a small cluster of servers accessing those volumes. Still relatively simple.
Fast forward to now: large clusters of hypervisor hosts are the norm, collectively accessing an even larger number of volumes. Each hypervisor in turn hosts a large number of virtual machines. When performance problems hit, how are you ever going to figure out the root cause and which other systems are affected?