Feed aggregator

Systems and Storage News Roundup (1/15-1/22)

Analyst News - Thu, 01/25/2018 - 10:35

Hybrid Cloud: 4 Top Use Cases

Analyst News - Wed, 01/17/2018 - 12:19

Interop ITX expert Camberley Bates explains which hybrid cloud deployments are most likely to succeed.

In the early days of cloud computing, experts talked a lot about the relative merits of public and private clouds and which would be the better choice for enterprises. These days, most enterprises aren’t deciding between public or private clouds; they have both. Hybrid and multi-cloud environments have become the norm.

However, setting up a true hybrid cloud, with integration between a public cloud and private cloud environment, can be very challenging.

“If the end user does not have specific applications in mind about what they are building [a hybrid cloud] for and what they are doing, we find that they typically fail,” Camberley Bates, managing director and analyst at Evaluator Group, told me in an interview.

So which use cases are best suited to the hybrid cloud? Bates highlighted three scenarios where organizations are experiencing the greatest success with their hybrid cloud initiatives, and one use case that’s popular but more challenging.

1. Disaster recovery and business continuity

Setting up an independent environment for disaster recovery (DR) or business continuity purposes can be a very costly proposition. Using a hybrid cloud setup, where the on-premises data center fails over to a public cloud service in the case of an emergency, is much more affordable. Plus, it can give enterprises access to IT resources in a geographic location far enough away from their primary site that they are unlikely to be affected by the same disaster events.

Bates noted that costs are usually a big driver for choosing hybrid cloud over other DR options. With hybrid cloud, “I have a flexible environment where I’m not paying for all of that infrastructure all the time constantly,” she said. “I have the ability to expand very rapidly if I need to. I have a low-cost environment. So if I combine those pieces, suddenly disaster recovery as an insurance policy environment is cost effective.”
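
To picture the moving parts, here is a minimal, vendor-neutral sketch of the failover pattern described above: a monitor probes the on-premises primary and, after several consecutive failures, repoints a DNS record at a standby environment in a public cloud. All endpoints, the hosted zone ID, and the thresholds are hypothetical placeholders, and Route 53 stands in as just one example of a programmable DNS service.

```python
# Minimal sketch of the hybrid-cloud DR pattern: probe the on-prem
# primary and repoint DNS at the public cloud standby on sustained
# failure. All endpoints, IDs, and thresholds are hypothetical.
import time
import urllib.request

import boto3  # Route 53 used as one example of programmable DNS

PRIMARY_HEALTH_URL = "https://primary.example.com/health"  # on-prem site
STANDBY_IP = "203.0.113.10"            # public cloud DR environment
HOSTED_ZONE_ID = "Z0EXAMPLE"           # placeholder zone ID
RECORD_NAME = "app.example.com."
FAILURES_BEFORE_FAILOVER = 3

def primary_healthy() -> bool:
    """Return True if the on-premises primary answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def fail_over_to_cloud() -> None:
    """Repoint the application DNS record at the cloud standby."""
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": STANDBY_IP}],
            },
        }]},
    )

failures = 0
while True:
    failures = 0 if primary_healthy() else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        fail_over_to_cloud()
        break
    time.sleep(30)
```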

2. Archive

Using a hybrid cloud for archive data offers benefits very similar to those of disaster recovery, and enterprises often undertake DR and archive hybrid cloud efforts simultaneously.

“There’s somewhat of a belief system that some people have that the cloud is cheaper than on-prem, which is not necessarily true,” cautioned Bates. However, she added, “It is really cheap to put data at rest in a hybrid cloud for long periods of time. So if I have data that is truly at rest and I’m not moving it in and out, it’s very cost effective.”
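
To illustrate the “data at rest” economics Bates describes, the sketch below uses AWS’s boto3 SDK to attach a lifecycle rule that moves objects under a hypothetical archive/ prefix into a cold storage tier after 30 days. The bucket name and prefix are placeholders, and other public clouds offer equivalent tiering policies; retrieval charges are the catch if the data does not actually stay at rest.

```python
# Minimal sketch: move objects under an archive/ prefix to a cold
# storage tier after 30 days. Bucket name and prefix are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-at-rest",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            # Data that stays at rest in the cold tier is cheap to keep;
            # retrieving it again is where the costs come back.
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```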

3. DevOps application development

Another area where enterprises are experiencing a lot of success with hybrid clouds is with application development. As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process.

Bates said, “The DevOps guys are using [public cloud] to set up and do application development.” She explained, “The public cloud is very simple and easy to use. It’s very fast to get going with it.”

But once applications are ready to deploy in production, many enterprises choose to move them back to the on-premises data center, often for data governance or cost reasons, Bates explained. The hybrid cloud model makes it possible for the organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.

4. Cloud bursting

Many organizations are also interested in using a hybrid cloud for “cloud bursting.” That is, they want to run their applications in a private cloud until demand for resources reaches a certain level, at which point they would overflow to a public cloud service.
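
The control loop behind cloud bursting is conceptually simple, which is part of its appeal; a minimal sketch follows, with the genuinely hard parts (data locality, networking, image consistency) hidden behind hypothetical helper functions that stand in for real monitoring and provisioning APIs.

```python
# Minimal sketch of a cloud-bursting control loop. The helper
# functions are hypothetical stand-ins for real monitoring and
# provisioning APIs; the thresholds are illustrative.
import random
import time

BURST_THRESHOLD = 0.85   # burst when the private cloud is >85% utilized
RECALL_THRESHOLD = 0.60  # release cloud capacity once load subsides

def cluster_utilization() -> float:
    """Return current private-cloud utilization in [0, 1].
    Placeholder: a real version would query the monitoring stack."""
    return random.uniform(0.4, 1.0)  # simulated load for the sketch

def provision_cloud_capacity(nodes: int) -> None:
    """Stand up public-cloud workers and join them to the scheduler."""
    print(f"bursting: provisioning {nodes} cloud nodes")  # placeholder

def drain_cloud_capacity() -> None:
    """Migrate work off the cloud workers and tear them down."""
    print("load subsided: draining cloud nodes")  # placeholder

bursting = False
for _ in range(10):  # a real controller would loop indefinitely
    load = cluster_utilization()
    if not bursting and load > BURST_THRESHOLD:
        provision_cloud_capacity(nodes=4)  # sizing is workload-specific
        bursting = True
    elif bursting and load < RECALL_THRESHOLD:
        drain_cloud_capacity()
        bursting = False
    time.sleep(1)
```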

However, Bates said, “Cloud bursting is a desire and a desirable capability, but it is not easy to set up, is what our research found.”

Bates has seen some companies, particularly financial trading companies, succeed with such setups, but this particular use case continues to be very challenging to put into practice.

 

Read the full article on NetworkComputing.com.



Systems and Storage News Roundup (1/8-1/15)

Analyst News - Tue, 01/16/2018 - 11:28

Big Data Meets Data Fabric and Multi-cloud – Forbes blog by John Webster

Analyst News - Sun, 01/14/2018 - 19:06

As cloud computing progresses, an age-old problem for IT is resurfacing: how to integrate and secure data stores that are dispersed geographically and across a growing diversity of applications. Data silos, which have always limited an organization’s ability to extract value from all of its data, become even more isolated. Consider the mainframe, with its stores of critical data going back decades, as the original data silo. Mainframe users still want to leverage this data for other applications such as AI but must overcome accessibility and formatting barriers to do so. It’s a task made easier by software vendors like Syncsort, whose Ironstream moves mainframe data in a way that is easily digestible by Splunk applications, for example.

But as cloud computing progresses, the siloed data issue becomes even more apparent as IT executives try to broaden their reach to encompass and leverage their organization’s data stored in multiple public clouds (AWS, Azure, GCP) along with all that they store on site. The cloud-world solution to this problem is what is now becoming known as the Data Fabric.

Data Fabrics—essentially information networks implemented on a grand scale across physical and virtual boundaries—focus on the data aspect of cloud computing as the unifying factor. To conceive of distributed, multi-cloud computing only in terms of infrastructure would miss a fundamental aspect of all computing technology – data. Data is integral and must be woven into multi-cloud computing architectures. The concept that integrates data with distributed and cloud-based computing is the Data Fabric.

The reason for going the way of the Data Fabric, at least on a conceptual basis, is to break down the data silos that are inherent in isolated computing clusters and on- and off-premises clouds. Fabrics allow data to flow and be shared by applications running in both private and public cloud data centers. They move data to where it’s needed at any given point in time. In the context of IoT, for example, they enable analytics to be performed in real time on data generated by geographically dispersed sensors “on the edge.”

Not surprisingly, the Data Fabric opportunity presents fertile ground for storage vendors. NetApp introduced its conceptual version of it four years ago and has been instantiating various aspects of it ever since. More recently, a number of Cloud Storage Services vendors have put forward their Data Fabric interpretations that are generally based on global file systems. These include Elastifile, Nasuni, and Avere—recently acquired by Microsoft.

Entries are also coming from unexpected sources. One is the ever-evolving Big Data space. MapR’s Converge-X Data Fabric is an exabyte-scale, globally distributed data store for managing files, objects, and containers across multiple edge, on-premises, and public cloud environments.

At least two-thirds of all large enterprise IT organizations now see hybrid clouds—which are really multi-clouds—as their long-term IT future. Operating in this multi-cloud world requires new data management processes and policies and most of all, a new data architecture. Enterprise IT will be increasingly called upon to assimilate a widening range of applications directed toward mobile users, data practitioners and outside business partners. In 2018, a growing and increasingly diverse list of vendors will offer their interpretations of the Data Fabric.



HCI – What Happened in 2017 and What to Watch for in 2018

Analyst News - Sun, 01/14/2018 - 18:53

The Hyperconverged Infrastructure (HCI) segment continued to grow throughout 2017. Evaluator Group research in this area expanded in 2017 as well, adding products from NetApp, Lenovo and IBM. In this blog we’ll review some of the developments that occurred in 2017 and discuss what to look for in 2018.

2017 saw some consolidation in the HCI market plus new hyperconverged solutions from three big infrastructure companies. Early in the year HPE bought SimpliVity and standardized on this software stack for their HCI offering. IBM joined the HCI market with a Nutanix-based product running on their Power processors and Lenovo added the vSAN-based VX Series to their existing Nutanix-based HX Series HCI solution. NetApp released a new HCI solution, using the architecture from their SolidFire family of scale-out, all-flash arrays.

Going Enterprise?

In 2017 HCI vendors were touting their enterprise credentials, listing their Fortune 100 customers and providing some detail on selected use cases. The strong implication is that “enterprise” customers mean “enterprise” use cases and that HCIs are capable of replacing the tier-1 infrastructures that support mission-critical applications. While there are likely some examples of this occurring, our data suggests it isn’t happening to the extent that HCI vendors are suggesting.

Based on contact with end-user customers and on our HCI in the Enterprise studies over the past two years, our information indicates that enterprises are indeed buying HCIs, but not currently replacing tier-1 infrastructure with these systems. They’re using them for new deployments and to consolidate existing tier-2 and tier-3 applications. However, this doesn’t prove that HCIs are incapable of supporting tier-1 use cases.

More Clouds

Companies want the ability to connect to the public cloud as a backup target, to run workloads, and to do cloud-native development as well. Many HCI vendors have responded with “multi-cloud” solutions that run their software stack both on premises and in the cloud, with support for containers and DevOps platforms. Some are including cloud orchestration software that automates backend processes for IT and provides self-service operation for developers and application end users.

NetApp and IBM Release HCI Solutions

In 2017 NetApp released its HCI based on the SolidFire Element OS. The product promises lower TCO, due to the performance of its SolidFire software stack, which was designed for all-flash, and to an architecture that separates storage nodes from compute nodes. This allows NetApp HCI clusters to scale storage and compute resources independently.

2017 also saw IBM announce an HCI product using the Nutanix software stack running on IBM Power servers. The company is positioning this offering as a high-performance HCI solution for big data, analytics, and similar use cases. They’re also touting lower TCO, driven by the ability to support more VMs per node than other leading HCIs, based on IBM testing.

VMware’s vSAN 6.6 release added some features, like software-based encryption and more data protection options for stretched clusters, but not the big jump in functionality we saw in 2016 with vSAN 6.2. For the second year in a row, vSAN came in as the most popular solution in our HCI in the Enterprise study. The Ready Nodes program seems to be a big success, as the vast majority of vSAN licenses are sold with one of more than a dozen OEM hardware options.

Nutanix kept up its pace of introducing new features: AOS 5.0 and 5.1 added over two dozen, and the company announced some interesting cloud capabilities with Xi Cloud Services, Calm, and a new partnership with Google Cloud Services (going up against Microsoft’s Azure and Azure Stack). See the Industry Update for more information.

Nutanix continues to grow their business every quarter but have yet to turn a profit, although they claim to be on the road to becoming profitable. The CEO said they are “getting out of the hardware business,” instead emphasizing their software stack and cloud-based services (as evidenced by the Xi and GCS activities). That is fine, as long as they plan to sell only through others, but to date they continue to sell their own systems. They will continue to sell the NX series of HCI appliances using Supermicro servers but won’t recognize revenue from it and have taken it off the comp plan. Given the consolidation we’re seeing in the HCI market, and the fact that server vendors like HPE and Cisco have their own software, this is probably the right move.

What to watch for in 2018

NetApp – see how their HCI does in the market. Their disaggregated technology is unique and seems to address one of the issues HCI vendors have faced: inefficient scaling. While this is a new HCI product, the SolidFire software stack is not new, and NetApp has a significant presence in the IT market.

HPE SimpliVity – after some reorganization and acquisition pains, this product may be poised to take off. We have always liked SimpliVity’s technology, and adding HPE’s resources and server expertise, plus a captive sales organization and an enterprise data center footprint, may do the trick. That said, HPE has some catching up to do, as Nutanix and the vSAN-based products are dominating the market.

Cisco – the company claims 2,000+ customers in the first 18 months of HyperFlex sales, but a large portion of those deals were most likely sales to existing UCS customers. The Springpath technology continues to mature, but the question is what kind of success Cisco can have in 2018 as they expand beyond their installed base.

IBM – can they make HCI work in the big data and analytics space? Can they be successful with Nutanix running on a new server platform? We shouldn’t underestimate IBM, but this is a new direction for a technology that has been based on industry-standard hardware, and it’s being sold to a new market segment.

Orchestration and Analytics – HCI vendors have been introducing more intelligence into their management platforms, with some providing policy-based storage optimization and data protection, plus monitoring and management of the physical infrastructure. We expect this to continue with the addition of features that leverage data captured from the installed base and analyzed to generate baselines and best practices.

For more information see Evaluator Series Research on Hyperconverged Infrastructures including Comparison Matrix, Product Briefs, Product Analyses and the study “HCI in the Enterprise”.

The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.



Systems and Storage News Roundup (1/1-1/8)

Analyst News - Fri, 01/12/2018 - 08:30


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

 

Actifio

1/8 – Actifio Update: Disruptive, Profitable and Growing…

Cisco

1/8 – Cisco Announces Infinite Broadband Unlocked for cBR-8

HPE

1/8 – HPE Provides Customer Guidance in Response to Industrywide Microprocessor Vulnerability

Intel

1/7 – New 8th Gen Intel Core Processors with Radeon RX Vega M Graphics Offer 3x Boost in Frames Per Second in Devices as Thin as 17mm

Mellanox

1/4 – Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to Leading OEMs and Hyperscale Customers

Micron

1/8 – Micron and Intel Announce Update to NAND Memory Joint Development Program

OCZ

1/8 – Toshiba Unveils Mainstream RC100 NVMe SSD Series at CES 2018

SanDisk

1/8 – Western Digital Unveils New Solutions to Help People Capture, Preserve, Access and Share Their Ever-Growing Collections of Photos and Videos

Seagate

1/8 – Seagate Teams Up With Industry-Leading Partners To Offer New Mobile Data Storage Solutions At CES 2018

SolarWinds

1/8 – SolarWinds Acquires Loggly, Strengthens Portfolio of Cloud Offerings

Supermicro

1/8 – Supermicro Unveils New All-Flash 1U Server That Delivers 300% Better Storage Density with Samsung’s Next Generation Small Form Factor (NGSFF) Storage

