Analyst News

Hybrid Cloud: 4 Top Use Cases

Analyst News - Wed, 01/17/2018 - 12:19

Interop ITX expert Camberley Bates explains which hybrid cloud deployments are most likely to succeed.

In the early days of cloud computing, experts talked a lot about the relative merits of public and private clouds and which would be the better choice for enterprises. These days, most enterprises aren’t deciding between public or private clouds; they have both. Hybrid and multi-cloud environments have become the norm.

However, setting up a true hybrid cloud, with integration between a public cloud and private cloud environment, can be very challenging.

“If the end user does not have specific applications in mind about what they are building [a hybrid cloud] for and what they are doing, we find that they typically fail,” Camberley Bates, managing director and analyst at Evaluator Group, told me in an interview.

So which use cases are best suited to the hybrid cloud? Bates highlighted three scenarios where organizations are experiencing the greatest success with their hybrid cloud initiatives, and one use case that’s popular but more challenging.

1. Disaster recovery and business continuity

Setting up an independent environment for disaster recovery (DR) or business continuity purposes can be a very costly proposition. Using a hybrid cloud setup, where the on-premises data center fails over to a public cloud service in the case of an emergency, is much more affordable. Plus, it can give enterprises access to IT resources in a geographic location far enough away from their primary site that they are unlikely to be affected by the same disaster events.

Bates noted that costs are usually a big driver for choosing hybrid cloud over other DR options. With hybrid cloud, “I have a flexible environment where I’m not paying for all of that infrastructure all the time constantly,” she said. “I have the ability to expand very rapidly if I need to. I have a low-cost environment. So if I combine those pieces, suddenly disaster recovery as an insurance policy environment is cost effective.”
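
Her arithmetic is easy to reproduce. The back-of-the-envelope sketch below uses purely hypothetical figures, not Evaluator Group data, to show why a pay-on-failover model undercuts a dedicated DR site:

```python
# Back-of-the-envelope DR cost comparison. All figures are hypothetical,
# chosen only to illustrate the structure of the argument.

# Option A: dedicated secondary site, paid for whether or not it is ever used.
dedicated_site_monthly = 50_000          # servers, storage, networking, space, power

# Option B: hybrid cloud DR -- small standby charge, full rate only during failover.
cloud_standby_monthly = 4_000            # replicated storage plus "pilot light" compute
cloud_failover_hourly = 500              # full environment, only while failed over
expected_failover_hours_per_year = 48    # DR tests plus the rare real event

annual_dedicated = dedicated_site_monthly * 12
annual_hybrid = (cloud_standby_monthly * 12
                 + cloud_failover_hourly * expected_failover_hours_per_year)

print(f"Dedicated DR site: ${annual_dedicated:,} per year")   # $600,000
print(f"Hybrid cloud DR:   ${annual_hybrid:,} per year")      # $72,000
```

The exact numbers vary widely by provider and workload; the structural point is that the insurance policy only bills in full when it is invoked.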

2. Archive

Using a hybrid cloud for archive data has very similar benefits as disaster recovery, and enterprises often undertake DR and archive hybrid cloud efforts simultaneously.

“There’s somewhat of a belief system that some people have that the cloud is cheaper than on-prem, which is not necessarily true,” cautioned Bates. However, she added, “It is really cheap to put data at rest in a hybrid cloud for long periods of time. So if I have data that is truly at rest and I’m not moving it in and out, it’s very cost effective.”
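
On AWS, for example, the “data at rest” economics are typically realized with a lifecycle rule that tiers aging objects into cold storage. A minimal boto3 sketch, assuming a hypothetical bucket name and prefix:

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Move objects under the archive/ prefix to Glacier after 30 days at rest.
# Bucket name and prefix are hypothetical placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cold-archive",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```

The flip side of Bates’ caveat is retrieval: cold tiers charge to pull data back out, so the math only holds for data that genuinely stays at rest.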

3. DevOps application development

Another area where enterprises are experiencing a lot of success with hybrid clouds is with application development. As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process.

Bates said, “The DevOps guys are using [public cloud] to set up and do application development.” She explained, “The public cloud is very simple and easy to use. It’s very fast to get going with it.”

But once applications are ready to deploy in production, many enterprises choose to move them back to the on-premises data center, often for data governance or cost reasons, Bates explained. The hybrid cloud model makes it possible for the organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.

4. Cloud bursting

Many organizations are also interested in using a hybrid cloud for “cloud bursting.” That is, they want to run their applications in a private cloud until demand for resources reaches a certain level, at which point they would fail over to a public cloud service.
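
Conceptually, bursting is nothing more than a threshold-triggered control loop, which is why it sounds so simple on a whiteboard. A toy Python sketch, with all thresholds and functions as hypothetical placeholders:

```python
import random
import time

BURST_THRESHOLD = 0.85   # burst to public cloud above 85% utilization (hypothetical)
RETURN_THRESHOLD = 0.60  # release cloud capacity once load falls back (hypothetical)

def private_cloud_utilization() -> float:
    # Placeholder: a real controller would query the private cloud's
    # monitoring API; here we simply simulate a fluctuating load.
    return random.random()

def provision_public_capacity():
    print("threshold exceeded: provisioning public cloud capacity")

def release_public_capacity():
    print("load normal: releasing public cloud capacity")

bursted = False
for _ in range(10):  # a real controller would run indefinitely
    utilization = private_cloud_utilization()
    if utilization > BURST_THRESHOLD and not bursted:
        provision_public_capacity()
        bursted = True
    elif utilization < RETURN_THRESHOLD and bursted:
        release_public_capacity()
        bursted = False
    time.sleep(1)
```

On the whiteboard, that is the whole feature; everything difficult (moving data, network paths, identity, and application images to the public side in time) is hidden inside the placeholder functions.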

However, Bates said, “Cloud bursting is a desire and a desirable capability, but it is not easy to set up, is what our research found.”

Bates has seen some companies, particularly financial trading companies, be successful with hybrid cloud setups, but this particular use case continues to be very challenging to put into practice.


Read this article on NetworkComputing.com.

The post Hybrid Cloud: 4 Top Use Cases appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (1/8-1/15)

Analyst News - Tue, 01/16/2018 - 11:28
Categories: Analyst News

Big Data Meets Data Fabric and Multi-cloud – Forbes blog by John Webster

Analyst News - Sun, 01/14/2018 - 19:06

As cloud computing progresses forward, an age-old problem for IT is resurfacing—how to integrate and secure data stores that are dispersed geographically and across a growing diversity of applications. Data silos, which have always limited an organization’s ability to extract value from all of its data, become even more isolated. Consider the mainframe, with its stores of critical data going back decades, as the original data silo. Mainframe users still want to leverage this data for other applications such as AI but must overcome accessibility and formatting barriers in order to do so. It’s a task made easier by software vendors like Syncsort, whose Ironstream moves data from the ’frame in a way that is easily digestible by Splunk applications, for example.

But as cloud computing progresses, the siloed data issue becomes even more apparent as IT executives try to broaden their reach to encompass and leverage their organization’s data stored in multiple public clouds (AWS, Azure, GCP) along with all that they store on site. The cloud-world solution to this problem is what is now becoming known as the Data Fabric.

Data Fabrics—essentially information networks implemented on a grand scale across physical and virtual boundaries—focus on the data aspect of cloud computing as the unifying factor. To conceive of distributed, multi-cloud computing only in terms of infrastructure would miss a fundamental aspect of all computing technology – data. Data is integral and must be woven into multi-cloud computing architectures. The concept that integrates data with distributed and cloud-based computing is the Data Fabric.

The reason for going the way of the Data Fabric, at least on a conceptual basis, is to break down the data silos that are inherent in isolated computing clusters and on- and off-premises clouds. Fabrics allow data to flow and be shared by applications running in both private and public cloud data centers. They move data to where it’s needed at any given point in time. In the context of IoT, for example, they enable analytics to be performed in real time on data being generated by geographically dispersed sensors “on the edge.”
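
As a toy illustration of the unifying idea (one namespace spanning locations, with data migrating toward the application that needs it), consider the sketch below. It is purely conceptual and resembles no vendor’s actual API:

```python
# Toy illustration of the data fabric idea: one namespace, many locations.
# Purely conceptual -- it resembles no vendor's actual API.

class DataFabric:
    def __init__(self):
        self.locations = {}   # location name -> {object key: data}
        self.catalog = {}     # object key -> current location

    def attach(self, location):
        self.locations[location] = {}

    def put(self, key, data, location):
        self.locations[location][key] = data
        self.catalog[key] = location

    def get(self, key, near=None):
        # One namespace: the caller never needs to know where the data lives.
        source = self.catalog[key]
        if near is not None and near != source:
            # Fabrics "move data to where it's needed": migrate, then serve locally.
            self.locations[near][key] = self.locations[source].pop(key)
            self.catalog[key] = near
            source = near
        return self.locations[source][key]

fabric = DataFabric()
for location in ("on-prem", "aws", "azure"):
    fabric.attach(location)

fabric.put("sensor-2018-01-14.csv", b"edge sensor readings", location="on-prem")
# An analytics job running in AWS reads the same key; placement is the fabric's job.
print(fabric.get("sensor-2018-01-14.csv", near="aws"))
```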

Not surprisingly, the Data Fabric opportunity presents fertile ground for storage vendors. NetApp introduced its conceptual version of it four years ago and has been instantiating various aspects of it ever since. More recently, a number of Cloud Storage Services vendors have put forward their Data Fabric interpretations that are generally based on global file systems. These include Elastifile, Nasuni, and Avere—recently acquired by Microsoft.

Entries are also coming from other unexpected sources. One is from the ever-evolving Big Data space. MapR’s Converge-X Data Fabric is an exabyte scale, globally distributed data store for managing files, objects, and containers across multiple edge, on-premises, and public cloud environments.

At least two-thirds of all large enterprise IT organizations now see hybrid clouds—which are really multi-clouds—as their long-term IT future. Operating in this multi-cloud world requires new data management processes and policies and most of all, a new data architecture. Enterprise IT will be increasingly called upon to assimilate a widening range of applications directed toward mobile users, data practitioners and outside business partners. In 2018, a growing and increasingly diverse list of vendors will offer their interpretations of the Data Fabric.

The post Big Data Meets Data Fabric and Multi-cloud – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

HCI – What Happened in 2017 and What to Watch for in 2018

Analyst News - Sun, 01/14/2018 - 18:53

The Hyperconverged Infrastructure (HCI) segment continued to grow throughout 2017. Evaluator Group research in this area expanded in 2017 as well, adding products from NetApp, Lenovo and IBM. In this blog we’ll review some of the developments that occurred in 2017 and discuss what to look for in 2018.

2017 saw some consolidation in the HCI market plus new hyperconverged solutions from three big infrastructure companies. Early in the year HPE bought SimpliVity and standardized on this software stack for their HCI offering. IBM joined the HCI market with a Nutanix-based product running on their Power processors and Lenovo added the vSAN-based VX Series to their existing Nutanix-based HX Series HCI solution. NetApp released a new HCI solution, using the architecture from their SolidFire family of scale-out, all-flash arrays.

Going Enterprise?

In 2017 HCI vendors were touting their enterprise credentials, listing their Fortune 100 customers and providing some detail on selected use cases. The strong implication is that “enterprise” customers mean “enterprise” use cases and that HCIs are capable of replacing the tier-1 infrastructures that support mission-critical applications. While there are likely some examples of this occurring, our data suggests this isn’t happening to the extent that HCI vendors are suggesting.

Drawn from contact with end-user customers and from our HCI in the Enterprise studies over the past two years, our information indicates that enterprises are indeed buying HCIs, but not currently replacing tier-1 infrastructure with these systems. They’re using them for new deployments and to consolidate existing tier-2 and tier-3 applications. However, this doesn’t prove that HCIs are not capable of supporting tier-1 use cases.

More Clouds

Companies want the ability to connect to the public cloud as a backup target, to run workloads, and to do cloud-native development as well. Many HCI vendors have responded with “multi-cloud” solutions that run their software stack on premises and in the cloud, with support for containers and DevOps platforms. Some are including cloud orchestration software that automates backend processes for IT and provides self-service operation for developers and application end users.

NetApp and IBM Release HCI Solutions

In 2017 NetApp released its HCI based on the SolidFire Element OS. This product promises lower TCO, due to the performance of its SolidFire software stack, which was designed for all-flash, and an architecture that separates storage nodes from compute nodes. This allows NetApp HCI clusters to scale storage and compute resources independently.
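
The economics of that separation are easy to illustrate. In the hypothetical sketch below (illustrative capacities and prices, not vendor figures), a storage-heavy requirement forces a coupled design to buy compute it does not need:

```python
import math

# Hypothetical comparison of coupled vs. disaggregated HCI scaling.
# All capacities and prices are illustrative, not vendor figures.

storage_needed_tb = 200
compute_needed_vms = 100

# Coupled HCI node: storage and compute can only be bought together.
coupled_node = {"tb": 10, "vms": 25, "cost": 30_000}

# Disaggregated: storage nodes and compute nodes are purchased independently.
storage_node = {"tb": 25, "cost": 20_000}
compute_node = {"vms": 25, "cost": 15_000}

coupled_count = max(math.ceil(storage_needed_tb / coupled_node["tb"]),
                    math.ceil(compute_needed_vms / coupled_node["vms"]))
coupled_cost = coupled_count * coupled_node["cost"]

disagg_cost = (math.ceil(storage_needed_tb / storage_node["tb"]) * storage_node["cost"]
               + math.ceil(compute_needed_vms / compute_node["vms"]) * compute_node["cost"])

print(f"Coupled: {coupled_count} nodes, ${coupled_cost:,}")  # storage-bound: 20 nodes, $600,000
print(f"Disaggregated: ${disagg_cost:,}")                    # 8 storage + 4 compute, $220,000
```

When storage and compute needs grow at different rates, coupled nodes force buying one resource to get the other; disaggregation avoids that overprovisioning.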

2017 also saw IBM announce an HCI product using the Nutanix software stack running on IBM Power servers. The company is positioning this offering as a high performance HCI solution for big data, analytics and similar use cases. They’re also touting the lower TCO driven by their ability to support more VMs per node than other leading HCIs, based on IBM testing.

VMware’s vSAN 6.6 release added some features like software-based encryption and more data protection options for stretched clusters, but not the big jump in functionality we saw in 2016 with vSAN 6.2. vSAN came in as the most popular solution in our HCI in the Enterprise study, for the second year in a row. The Ready Nodes program seems to be a big success as the vast majority of vSAN licenses are sold with one of more than a dozen OEM hardware options.

Nutanix kept up their pace of introducing new features, with AOS 5.0 and 5.1 adding over two dozen, and announced some interesting cloud capabilities with Xi Cloud Services, Calm and a new partnership with Google Cloud Services (going up against Microsoft’s Azure and Azure Stack). See the Industry Update for more information.

Nutanix continues to grow their business every quarter, but have yet to turn a profit – although they claim they’re on the road to becoming profitable. The CEO said they are “getting out of the hardware business”, instead emphasizing their software stack and cloud-based services (as evidenced by the Xi and GCS activities). That is fine, as long as they plan to only sell through others, but to date they continue to sell their own systems. They will continue to sell the NX series of HCI appliances using Supermicro servers but won’t recognize revenue from it and have taken it off the comp plan. Given the consolidation we’re seeing in the HCI market, and the fact that server vendors like HPE and Cisco have their own software, this is probably the right move.

What to watch for in 2018

NetApp – see how their HCI does in the market. Their disaggregated technology is unique and seems to address one of the issues HCI vendors have faced: inefficient scaling. While this is a new HCI product, the SolidFire software stack is not new, and NetApp has a significant presence in the IT market.

HPE SimpliVity – after some reorganization and acquisition pains, this product may be poised to take off. We have always liked SimpliVity’s technology, and adding HPE’s resources and server expertise, plus a captive sales organization and an enterprise data center footprint, may do the trick. That said, HPE has some catching up to do, as Nutanix and the vSAN-based products are dominating the market.

Cisco – the company claims 2000+ customers in the first 18 months of HyperFlex sales, but a large portion of those deals most likely went to existing UCS customers. The Springpath technology continues to mature, but the question is what kind of success Cisco can have in 2018 as they expand beyond their installed base.

IBM – can they make HCI work in the big data and analytics space? Can they be successful with Nutanix running on a new server platform? We shouldn’t underestimate IBM but this is a new direction for technology that has been based on industry standard hardware and it’s being sold to a new market segment.

Orchestration and Analytics – HCI vendors have been introducing more intelligence into their management platforms, with some providing policy-based storage optimization and data protection, plus monitoring and management of the physical infrastructure. We expect this to continue with the addition of features that leverage data captured from the installed base and analyzed to generate baselines and best practices.

For more information see Evaluator Series Research on Hyperconverged Infrastructures including Comparison Matrix, Product Briefs, Product Analyses and the study “HCI in the Enterprise”.

The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.

The post HCI – What Happened in 2017 and What to Watch for in 2018 appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (1/1-1/8)

Analyst News - Fri, 01/12/2018 - 08:30


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.


Actifio

1/8 – Actifio Update: Disruptive, Profitable and Growing…

Cisco

1/8 – Cisco Announces Infinite Broadband Unlocked for cBR-8

HPE

1/8 – HPE Provides Customer Guidance in Response to Industrywide Microprocessor Vulnerability

Intel

1/7 – New 8th Gen Intel Core Processors with Radeon RX Vega M Graphics Offer 3x Boost in Frames Per Second in Devices as Thin as 17mm

Mellanox

1/4 – Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to Leading OEMs and Hyperscale Customers

Micron

1/8 – Micron and Intel Announce Update to NAND Memory Joint Development Program

OCZ

1/8 – Toshiba Unveils Mainstream RC100 NVMe SSD Series at CES 2018

SanDisk

1/8 – Western Digital Unveils New Solutions to Help People Capture, Preserve, Access and Share Their Ever-Growing Collections of Photos and Videos

Seagate

1/8 – Seagate Teams Up With Industry-Leading Partners To Offer New Mobile Data Storage Solutions At CES 2018

SolarWinds

1/8 – SolarWinds Acquires Loggly, Strengthens Portfolio of Cloud Offerings

Supermicro

1/8 – Supermicro Unveils New All-Flash 1U Server That Delivers 300% Better Storage Density with Samsung’s Next Generation Small Form Factor (NGSFF) Storage

The post Systems and Storage News Roundup (1/1-1/8) appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (12/25-1/1)

Analyst News - Thu, 01/04/2018 - 12:32


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.


CA Technologies

12/28 – Three Ruling Technology Trends


Huawei

12/29 – China Mobile Chooses Huawei’s CloudFabric Solution for its SDN Data Center Network


The post Systems and Storage News Roundup (12/25-1/1) appeared first on Evaluator Group.

Categories: Analyst News

Who Will Be the Haves and Have Nots? – Forbes blog by John Webster

Analyst News - Wed, 01/03/2018 - 13:16

’Tis the season for industry analysts to make stunningly insightful predictions about the coming year. They make interesting reading at best. Rarely do analysts review what they wrote the year before to see if they actually got it right. Years ago, I predicted that a technology called iSCSI would take over the storage networking world. It didn’t.

Today, the second day of the new year, I break with that tradition. Rather than try to peer into the future, I simply want to ask you to do something that I believe will improve the futures of our children.

The rapid advance of technology, an advance that is occurring exponentially, is creating a new gap within American society – between those who can harness the power of technological advancement and those who can’t. This widening gap will have lasting economic consequences.

It disturbs me greatly that local educational systems don’t get that. In my home state of New Hampshire, the cover story of a widely distributed local weekly featured an article enumerating 7 ideas for updating NH schools. The authors talked to local school administrators and other leaders in education about what they think will boost student learning, and they suggested things like more recess. The ability to manipulate and exploit computing technology did not even make the list. Can they be that blind?

It’s no secret known only to technologists that there are approximately half a million jobs available right now that require some level of computational skills – jobs that could be filled today by young women and men with the right skills. But colleges and universities in the US graduated only 46,000 students last year with degrees in computer science. And I would guess that the majority of them were male.

Computer coding is no longer the realm of geeks with advanced math degrees. Programs are now available for kids—girls and boys—starting at the first-grade level. Educators can go to code.org and girlswhocode.com for classroom-based activities, guidance for teachers on how to teach coding, and suggestions for ways to get the local community involved. You can use what you find there to encourage them—as forcefully as you like. Code.org has already reached 25% of US students, and 10M of them are female. Also, check out the Hour of Code program and a TEDx talk by Hadi Partovi, founder and CEO of code.org.

Unfortunately, local school boards move slowly. It may take years to convince them that learning to speak the language of technology is as important as math and the spoken word. To fill the gap, each of us can get involved by starting with a few kids at a time. You don’t even have to know how to code. All you need is a few PCs or laptops (available used), a space for kids to gather such as a church basement or community center, and someone to supervise. Everything else is available online from code.org and girlswhocode.com.

As we go forward in time, I believe that those who have and have not within any culture will be increasingly defined by an ability to manipulate technology. Vast computing warehouses like Amazon Web Services and Google Cloud Platform are available to any one of us for the swipe of a credit card. All one needs to know is how to use them.

The post Who Will Be the Haves and Have Nots? – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (12/11-12/18)

Analyst News - Fri, 12/22/2017 - 11:07


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.


Actifio

12/12 – Moving Beyond Backups to Enterprise-Data-as-a-Service

CA Technologies

12/14 – Predictions 2018: How DevOps, AI Will Impact Security

Catalogic Software

12/18 – Catalogic Software Updates Flagship DPX Data Protection Software

Fujitsu

12/11 – Fujitsu Develops WAN Acceleration Technology Utilizing FPGA Accelerators

IBM

12/13 – IBM and Blue Prism Deliver Digital Workforce Capabilities

NEC

12/12 – NEC Develops Deep Learning Technology to Improve Recognition Accuracy

The post Systems and Storage News Roundup (12/11-12/18) appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (12/4-12/11)

Analyst News - Wed, 12/13/2017 - 11:10
Categories: Analyst News

Data Protection: Research Outlook and Changes in the Market

Analyst News - Fri, 12/08/2017 - 13:07

I’m just a little over thirty days into my role at Evaluator Group covering various aspects of Data Management including Data Protection and Secondary Storage. Even with a good understanding of the technology and what vendors are doing in this space, I’ve been drinking from the proverbial firehose getting caught up on the detailed analysis done by my colleagues on the myriad of Data Protection vendors and solutions we cover. I’m excited to be part of the Evaluator Group team and add my perspectives to the analyses.

There are some important changes happening in the Data Protection market, and I believe this segment is as dynamic as ever. The Data Protection requirements of customers have never been wider, and the number and type of solutions have never been greater. Software vendors continue to enhance their solutions by adding new data management capabilities and visibility into the data they handle as it moves through the backup process. Hardware vendors continue to build on technology advancements to deliver more performance, increased scalability, and enhanced efficiencies. Other vendors are evolving secondary storage systems into scale-out architectures that grow as customers’ needs grow, scale without data migration, and often support integration with public clouds. A few vendors are combining all the hardware and software into hyperconverged secondary storage systems that promise fast deployment and ease of use within an enterprise’s data protection activities.

With the explosion of data and the rapid move towards hybrid IT environments using one or more private or public clouds, we see a number of organizations evaluating their Data Protection strategies and solutions and determining they need to make changes to meet evolving business and legal requirements. To help our customers evaluate what these changes mean for them, we will be making some changes and additions to our coverage of this market segment. We’re starting with Data Protection, updating product analyses as needed, adding an Eval(u)Scale™ assessment to product briefs, and thinking about how we can update the Evaluation Matrices to reflect the changes in the industry while maintaining much of the historic information that our users depend on.

Expanding beyond Data Protection, we are planning additional coverage into multiple aspects of Enterprise Data Management. Data Management is about knowing everything about your data, everywhere it is, and deriving more value from that data while still protecting it and optimizing its storage. Many of the Data Management concepts are not new to IT administrators and Evaluator Group analysts have covered the management of data from a number of perspectives. The opportunity here is to organize all this expertise into a comprehensive, integrated approach to Data Management.

We’ll have more information on what’s driving changes in Data Protection in the next blog and details about Data Management coverage as well as our insights in future blogs.

The post Data Protection: Research Outlook and Changes in the Market appeared first on Evaluator Group.

Categories: Analyst News

How does Hyperconverged Fit in the Enterprise? — findings from Evaluator Group Research

Analyst News - Thu, 11/30/2017 - 17:07

Hyperconverged Infrastructures (HCIs) have enjoyed significant success over the past several years, based partly on their ability to reduce the effort and shorten the timeframe required to deploy new infrastructures. HCI’s early success came largely in small and mid-sized companies, but as this technology has matured, adoption is being seen in more enterprise-level companies and, to a certain extent, in more critical use cases.

Evaluator Group conducted a research study “HCI in the Enterprise” to determine how HCIs fit in enterprise environments by examining where companies are deploying or planning to deploy these products, which applications they are supporting and what benefits they expect to derive from that deployment. The study also explored the expectations of IT personnel involved in the use or evaluation of these products, including where an HCI may not be an appropriate solution. This blog discusses some of the findings from this study, which included an email survey and follow-up interviews with IT professionals from enterprise-level companies (over 1,000 employees).

Seven out of eight enterprises we contacted are using or evaluating HCIs, or plan to in the near future. These companies told us their top IT priorities were to improve efficiency, reduce cost and increase “agility” – the ability to respond quickly to changes in infrastructure driven by their dynamic environments. Most are looking at this technology to help simplify their IT infrastructures as a way to achieve these objectives.

Simplification was a recurring theme, with infrastructure consolidation being the most common use case among the companies surveyed. These organizations are looking to HCIs as a way to reduce the number of systems in their environments and standardize on a single platform for multiple applications. This is the same approach that smaller and mid-market companies have taken for the past several years, but with a major difference.

Where smaller companies have adopted HCIs as a solution for most of their applications, enterprise IT is taking a more measured approach. They’re using HCI, but not for their most mission-critical applications. The rationale seems to be that there are plenty of benefits to be had with this technology in tier-2 and tier-3 applications. When we asked why HCIs were not chosen, the reason most often given was maturity of the technology, with some concern over the fact that HCI vendors are often start-up companies as well.

In the study we asked enterprise IT which products they were either using, evaluating or planning to evaluate. Out of the 14 products listed, the solution most often cited was VMware vSAN, with Cisco HyperFlex and Nutanix Enterprise Cloud Platform coming in second and third, respectively.

Another area we asked about was decision factors; what characteristics were the most important when comparing one HCI solution with another. Hyperconverged infrastructure products are generally “feature rich”. Most HCI vendors have also assembled a variety of models to choose from. But features and models weren’t even among the top criteria enterprise IT folks used in an evaluation. The number one characteristic was actually performance and number two was economics. What’s interesting about this is that these two characteristics are closely related.

Cost as a comparative factor is best defined in terms of ability to do useful work. In a hyperconverged cluster that work is measured by the number of virtual machines it can support. Each VM consumes storage and compute resources, more specifically CPU cycles and storage IOPS. This means HCIs with better storage performance and more CPU cores can usually handle more VMs, on a per-node basis. So HCIs with better performance often end up costing less as well. In these situations, it’s imperative that a testing suite designed for hyperconverged infrastructures, such as IOmark, be used in order to capture accurate performance data.
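
A worked example of that relationship, using hypothetical numbers rather than measured IOmark results:

```python
# Hypothetical worked example of why performance and economics are linked.
# Prices and VM densities are illustrative, not measured IOmark results.

systems = [
    # (name, price per node, VMs per node at acceptable performance)
    ("HCI-A", 40_000, 40),
    ("HCI-B", 55_000, 75),  # pricier node, but faster storage and more cores
]

for name, node_price, vms_per_node in systems:
    print(f"{name}: ${node_price / vms_per_node:,.0f} per VM")

# HCI-A: $1,000 per VM
# HCI-B: $733 per VM -- the more expensive node is the cheaper system per unit of work
```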

This study also contains detailed information on the findings mentioned above, including where and why HCIs are not appropriate, as well as input on networking, hardware platforms, the infrastructure being replaced, etc. More information and report details are available on the Evaluator Group website, or contact us.

The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.

The post How does Hyperconverged Fit in the Enterprise? — findings from Evaluator Group Research appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (11/13-11/20)

Analyst News - Tue, 11/21/2017 - 13:23


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

Actifio

11/20 – Actifio Continues to Pioneer the Future of Copy Data Management

CA Technologies

11/17 – CA Technologies CTO: AI is a force for good and evil in cybersecurity

Cavium

11/13 – HPE Helps Businesses Capitalize on High Performance Computing and Artificial Intelligence Applications with New High-Density Compute and Storage

Cisco

11/16 – NTT East Japan Adopts Cisco NFV Portfolio To Help Small and Medium Enterprises With ICT Cloud Computing

Cloudera

11/15 – Komatsu Helps Improve Mining Performance with Industrial Internet of Things (IIoT) Platform Powered by Cloudera

Cohesity

11/15 – Cohesity Announces Availability of Cohesity DataPlatform Cloud Edition on Amazon Web Service

Cray

11/13 – New Cray Storage Solutions Address Key Customer Needs for Improved Productivity and Unparalleled Data Access

DDN

11/13 – DDN Strengthens its HPC Storage Leadership with New Solutions and Next Generation Monitoring Tools

EMC

11/13 – New Dell EMC Solutions Bring Machine and Deep Learning to Mainstream Enterprises

Excelero

11/13 – SciNet Relies on Excelero for High-Performance, Peta-Scale Storage at New Supercomputing Facility

Huawei

11/20 – Enabling Full Service on Unified, Simplified LTE-based Infrastructure Network for MBB2020

Infinidat

11/14 – The (R)evolution Continues – Infinibox R4 Delivers Higher Performance and New Features

Lenovo

11/14 – Lenovo Accelerates Artificial Intelligence Initiatives to Solve Humanity’s Greatest Challenges

Maxta

11/15 – Hyperconvergence in campus IT – Why taking a software approach is best

Oracle

11/16 – New Oracle Cloud Infrastructure Innovations Deliver Unmatched Performance and Value for the Most Demanding Enterprise, AI and HPC Applications

Veritas

11/14 – Veritas Advances Backup Exec Offering to Help Organizations Protect Critical Data in a Multi-Cloud World

The post Systems and Storage News Roundup (11/13-11/20) appeared first on Evaluator Group.

Categories: Analyst News

Storage Industry News Roundup (11/6-11/13)

Analyst News - Thu, 11/16/2017 - 09:26


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

AWS

11/6 – AWS Announces Availability of C5 Instances for Amazon EC2

Caringo

11/9 – Caringo Drive for Swarm Scale-Out Hybrid Storage

Cavium

11/8 – Cavium ThunderX2 Motherboard Specification for Microsoft’s Project Olympus Contributed to the Open Compute Project

Cisco

11/13 – Technology Innovation Unleashes the Potential of Cloud Computing to Empower the Internet Economy

Commvault

11/7 – Protection as a Service to Simplify Data Recovery on Laptops and Other Devices

Cray

11/13 – New Cray Storage Solutions Address Key Customer Needs For Improved Productivity and Unparalleled Data Access

Data Direct

11/13 – DDN Strengthens Its HPC Storage Leadership With New Solutions And Next Generation Monitoring Tools

Datrium

11/6 – Datrium Storage System Injects Flash Into The Equation

EMC

11/13 – New Dell EMC Solutions Bring Machine and Deep Learning to Mainstream Enterprises

Excelero

11/13 – SciNet Relies on Excelero for High-Performance, Peta-Scale Storage at New Supercomputing Facility

Fujitsu

11/8 – Fujitsu Launches New PRIMEQUEST Series, Boosts Processing Performance by up to 50%

HGST

11/13 – Western Digital Expands Solutions In Life Sciences Through Partnership With Globus To Connect Massive Data To The Researchers Who Need It

HPE

11/13 – HPE Helps Businesses Capitalize on High Performance Computing and Artificial Intelligence Applications with New High Density Compute and Storage

Huawei

11/7 – Huawei Releases Innovative SuperFTTB Solution for Gigabit Access Acceleration

Intel

11/9 – Intel Doubles Capacity of World’s Most Responsive Data Center SSD

Mellanox

11/8 – Mellanox Interconnect Solutions Boost Qualcomm Arm-Based Data Center Platforms with Mellanox ConnectX-5

Micron

11/13 – Micron Advances Persistent Memory with 32GB NVDIMM

Nutanix

11/8 – Nutanix Unveils New Developer-Centric Services and Expands Workload Support to Simplify IT in the Multi-Cloud Era

Supermicro

11/13 – Supermicro Unveils New HPC Solutions at SC17


The post Storage Industry News Roundup (11/6-11/13) appeared first on Evaluator Group.

Categories: Analyst News

Recovering from Digital Transformation – Forbes blog by John Webster

Analyst News - Fri, 11/10/2017 - 17:44

Digital Transformation projects now abound within enterprise IT. Depending on the vertical industry segment, we see IoT, Customer 360, Industry 4.0 and Insurtech, to say nothing of the multi-cloud architectures cropping up across the IT landscape. As the operational techs now ease these systems into production, a critical question looms: Can they be recovered when an unexpected failure occurs with minimal disruption and data loss?

When new Digital Transformation systems are built and tested, it is common to see them added to an existing data protection and disaster recovery infrastructure. Why create another one when we already have the capability that’s been functioning for decades?

Recovery from disasters, however minor, remains one of the most basic requirements of enterprise IT. Data protection technology continues to evolve from recoveries using backup tapes, to the use of disk-based purpose-built backup appliances (PBBAs), and now to the latest advances using storage array-based software solutions—a trend called “self-protecting” storage. However, it is now incumbent on IT operations to not only keep up with but anticipate the recovery of these new systems as they become increasingly critical to the future of the organization. In developing a strategy that encompasses them, enterprise IT should consider:

  • The source, owner, application and provenance of the data
  • Long-standing business and organizational requirements for disaster recovery and recovery from failures including the most common one—human error
  • Security and compliance issues that these new systems may introduce to the IT environment

Because enterprise IT rarely if ever builds data protection and recovery systems from scratch using open source software and commodity hardware, vendor selection will be a critical phase in the effort to modernize. Matching these applications with the self-protecting storage systems now available can allow IT to meet performance demands while delivering nearly immediate recoverability at the same time. New software is now available that automates the management of data protection and recovery operations across different applications, storage systems, and target protection devices (including public and private clouds), and it is part of an advanced data protection strategy. Additional capabilities to manage data retention, identify unprotected data, report real-time status of protection operations, and integrate with copy data management further add value to the overall data protection environment.

Because there are multiple elements to an overall data protection and disaster recovery strategy, the vendor with a portfolio of integrated solutions often fares better in an evaluation than a collection of independent “parts” that put an additional integration and support burden on administrators. Vendors in this category include CommVault, Dell EMC, IBM and Veritas. But there are also solutions available that support multiple application sets running on the distributed architectures (Cassandra, HDFS, MongoDB) now common to Digital Transformation initiatives. These include Datos IO. Either way, the vendor’s support infrastructure is critical and should be viable for the long-term.

Getting the right protection and recovery strategy for Digital Transformation early on is essential. Administrators of NoSQL databases such as Cassandra and MongoDB, as well as distributed file systems like HDFS, commonly believe that maintaining three copies of data is protection enough until a data corruption event occurs and all three copies are corrupted. If deemed of value to business objectives, these systems will likely be long-lived in enterprise IT. Their processes and procedures will become ingrained and more difficult to change over time. Enterprise production-grade data protection and recoverability is therefore a must for Digital Transformation.
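
A toy simulation makes the point; this is conceptual Python, not any database’s actual replication code:

```python
# Toy illustration of why N replicas are not a backup: logical corruption
# replicates just as faithfully as good data does. Conceptual only.

replicas = [{"record": "balance=100"} for _ in range(3)]  # replication factor 3
backup = dict(replicas[0])  # independent, point-in-time copy taken earlier

def write(value):
    for replica in replicas:       # the database dutifully keeps all
        replica["record"] = value  # three copies consistent...

write("balance=CORRUPTED")         # ...including a bad write or fat-fingered update

print([r["record"] for r in replicas])  # all three replicas now hold corrupted data
print(backup["record"])                 # only the independent backup survives
```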

The post Recovering from Digital Transformation – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

Storage Industry News Roundup (10/30-11/6)

Analyst News - Wed, 11/08/2017 - 14:40


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.


AWS

11/6 – AWS Announces Availability of C5 Instances for Amazon EC2

Cisco

11/2 – Cisco Announces World’s First AI-Powered Voice Assistant for Meetings

DataDirect

11/1 – 3i Partners with DDN Storage to Deliver Demanding Microscopy Work Flow Solutions and Unmatched Performance

Fujitsu

11/2 – Fujitsu and NetApp Launch Converged Infrastructure Solution NFLEX for Strategic IT Delivery

IBM

11/1 – IBM Brings Cloud-native Environment to Private Clouds

Intel

11/6 – New Intel Core Processor Combines High-Performance CPU with Custom Discrete Graphics From AMD to Enable Sleeker, Thinner Devices

Mellanox

11/6 – Mellanox Announces Innova-2 FPGA-Based Programmable Adapter Family to Power Next Generation of Cloud, Security, Big Data and Deep Learning Platforms

Micron

10/31 – Micron Accelerates Edge Storage for Video Surveillance, Announces New Collaborations to Increase Adoption

Microsoft

11/1 – United Technologies Chooses Microsoft Cloud to Enhance Customer Experience and Accelerate Digital Transformation

Oracle

11/6 – Oracle Utilities Defines Cloud Path for Utilities

Supermicro

11/6 – NASA Selects Supermicro to Expand Advanced Computing and Data Analytics Used to Study The Earth, Solar System and Universe

The post Storage Industry News Roundup (10/30-11/6) appeared first on Evaluator Group.

Categories: Analyst News

Why we are so interested in High-Performance Computing in the Enterprise, and you should be too

Analyst News - Sun, 11/05/2017 - 14:31

Most recently we added Frederic Van Haren to our team of analysts to cover a very big space – HPC / AI / Deep Learning / Machine Learning — call it what you will. We’ve been known as the super technical guys in information management, but we’re branching out—again. HPC has been dominated by CPUs, now GPUs, with 1U servers bought mostly by the big PhD-dominated research labs. Not exactly the typical folks we deal with.

Well, that has changed. AI’s various disciplines are entering every phase of our life and the life of the enterprise. As such, enterprise IT is now adding HPC-lite or -heavy systems to their traditional environments. This is a HUGE movement (no reference to our politics, please), but adoption is also slow and complicated. HPC storage environments like Spectrum Scale, Lustre, Gluster and HDFS are not that easy to deploy and maintain. Integrating them into production data centers is complicated. But when use cases take off, they can grow big quickly. That is why we are seeing more inquiries and desire for knowledge from traditional IT. They are still the ones in the data center managing transactional and customer data—now tasked to work with the guys in the basement making up these new systems. Oh, and did I mention their increasing responsibilities relating to the use of the Public Cloud for additional compute and data layers?

I personally know enough to be very dangerous, as I really date back to when an IBM mainframe was used to feed the Cray processors.  The issues include scaling, data consistency around high performance processing and integration into production with the need to deliver the analysis in real time.

Why should you care? If you are a systems or storage vendor, this is the next big thing. If you are running traditional IT environments, this will show up on your doorstep. Adoption will be slow as industries learn where HPC will give them a competitive edge and what the required investments in infrastructure and people look like. It will be messy as well. Data scientists will become system architects. The LOBs will control budgets. Neither may know what is required for scale. Every CEO and CIO will be asked, what is your AI strategy? If they don’t have one, a new person will be in place very soon.

What systems technologies will they need? GPU, CPU, networks (IP, IB, satellite), large-scale file systems, content repositories (object, public cloud, tape), solid state (arrays and in the server), edge devices including the buzzy “micro-data center.” System vendors will need to know how to simplify. IT users will need strategies. We plan to bring our expertise on systems and data, adding Frederic Van Haren, with his deep background in managing large-scale HPC and Big Data systems, to the mix. We hope to illuminate how and what is needed to get to productive results, what to avoid and what to consider.

The post Why we are so interested in High-Performance Computing in the Enterprise, and you should be too appeared first on Evaluator Group.

Categories: Analyst News

Storage Industry News Roundup (10/23-10/30)

Analyst News - Wed, 11/01/2017 - 10:38


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

Atlantis

10/25 – Atlantis Computing Launches Citrix-Integrated Software to Simplify and Reduce Costs for Virtual Workspaces

AWS

10/26 – AWS Announces Availability of P3 Instances for Amazon EC2

Cavium

10/24 – Cadence and Arm Deliver First SoC Verification Solution for Low-Power, High-Performance Arm-Based Servers

Cisco

10/26 – Cisco Redefines Storage Networking with Built-In Telemetry and Cost Effective 32Gbps Storage Switch

Cloudera

10/23 – Cloudera Speeds Analytics Deployment for Cybersecurity Hub

Cray

10/23 – Cray and Microsoft bring Supercomputing to Microsoft Azure

Datrium

10/25 – Datrium Increases Storage Performance With New Server Software, New All-Flash Data Nodes

Huawei

10/27 – Huawei Launches 5G Microwave Bearer Solution to Help Operators Evolve Their Networks Towards 5G

IBM

10/26 – IBM Transforms FlashSystem to Help Drive Down the Cost of Data

MapR

10/24 – MapR Delivers Self-Service Data Science for Leveraging Machine Learning and Artificial intelligence

Oracle

10/30 – Oracle Delivers Industry’s Only Cloud-Based SaaS Solution for Core Administration and Digital Engagement with Healthx

Seagate

10/28 – Seagate Launches First Drive For AI-Enabled Surveillance

Veritas

10/25 – Veritas Study: Alarming Majority of Organizations (69%) Export Full Responsibility for Data Protection, Privacy and Compliance onto Cloud Service Providers

The post Storage Industry News Roundup (10/23-10/30) appeared first on Evaluator Group.

Categories: Analyst News