Analyst News

Scale-Out Data Protection: Research and Coverage Update

Analyst News - Mon, 03/19/2018 - 15:52

They say that time flies when you're having fun, so I must be having too much fun these days.  My first (and only) blog with Evaluator Group was months ago, and it is long past time for this one.

Though it wasn't covered in a blog as I had originally planned, hopefully you were able to read about what is driving customers to re-evaluate their data protection strategies in the Technical Insight published in January.  I've heard from several readers who agree wholeheartedly with one or more of the drivers described in the report.  A few other readers have provided input on additional drivers they see having an impact.

We are continuing to work on updates to our data protection research materials.  One of the top areas we are researching is scale-out data protection.  There is some variability in this scale-out space, with solutions that focus mainly on backup and others that support broader secondary storage workloads.  The maturity of vendors in this space also varies, from some who have been at it for more than a decade to young startups.  Evaluator Group is working on a number of research documents on this space, so watch for announcements in the near future.

Evaluator Group is also continuing to explore several enterprise data management topics that are gaining in popularity and relevance.  Driving this level of interest is the amount of data moving into private and public cloud environments.  As the data is copied or moved, many solutions analyze and catalog various aspects of data such as the data type, who created it, when it was created, etc.  The creation of this metadata may be the intent of some of these solutions or a byproduct. Either way, it opens interesting new opportunities to better manage and derive value from an enterprise’s information.
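To make the idea concrete, here is a minimal sketch of the kind of cataloging such solutions perform as data is copied or moved. The field names and logic are illustrative assumptions, not any particular vendor's implementation.

```python
# Minimal sketch of metadata cataloging during a copy/move operation.
# Field names are illustrative, not any specific vendor's schema.
import mimetypes
from datetime import datetime, timezone
from pathlib import Path

def catalog_entry(path: Path) -> dict:
    """Capture basic metadata about one file as it is copied or moved."""
    stat = path.stat()
    return {
        "name": str(path),
        "data_type": mimetypes.guess_type(path.name)[0] or "unknown",
        "owner_uid": stat.st_uid,  # "who created it" is platform-dependent
        "created": datetime.fromtimestamp(stat.st_ctime, tz=timezone.utc).isoformat(),
        "size_bytes": stat.st_size,
    }

def build_catalog(root: str) -> list[dict]:
    """Walk a directory tree and build a catalog entry for every file found."""
    return [catalog_entry(p) for p in Path(root).rglob("*") if p.is_file()]

if __name__ == "__main__":
    for entry in build_catalog("."):
        print(entry)
```

Whether this metadata is the product's primary purpose or a byproduct, the resulting catalog is what later data management and analytics tools can build on.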

As described in the Technical Insight referenced above, one of the interesting areas in the enterprise data management space we have been researching is Copy Data Management (CDM).  CDM is about locating and managing all the copies of your data, wherever they are, and optimizing the storage of those copies.  As with many of today’s IT solutions, CDM vendors have different approaches to how they identify copies of data and what they do to coordinate and optimize those copies. Soon, the task will be to determine how best to share the results of that research with you all.

Stay tuned.


Systems and Storage News Roundup (2/26-3/5)

Analyst News - Wed, 03/07/2018 - 12:58


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

 

Actifio

3/1 – At Waste Industries, Digital Transformation Initiatives Rooted in Trust

Cavium

2/26 – Cavium Showcases Next-Generation 5G Radio Access and Core Networks, Telco Cloud and Edge Infrastructure Solutions at Mobile World Congress 2018

Cisco

2/27 – Cisco NB-IoT Platform Now Commercially Available Worldwide, Making it Practical and Profitable for Companies to Deliver Connected Services via Low Cost, Low Power Devices

Commvault

2/28 – Commvault and Mercy Partner to Deliver Powerful New Cloud Backup and Disaster Recovery Service to The Health Care Market

Fujitsu

2/26 – Fujitsu Launches SAP S/4HANA Conversion Service, Fully Supports Migration to Next Generation ERP

HPE

2/26 – HPE Enhances Safety and Security of Citizens Through Implementation of India’s First Cloud-Based Integrated Command and Control Center

Infinidat

2/27 – Introducing Infinidat’s Synchronous Replication For The Infinibox

Intel

2/27 – Intel Introduces ‘Intel AI: In Production’ Program-A New Way to Bring Artificial Intelligence Devices to Market

NEC

3/5 – NEC Showcases The Latest in Intelligent Public Transport Solutions at IT Trans

Pure Storage

2/27 – It’s Time to Prime The AI Engine

SanDisk

2/26 – Western Digital NVME Solutions Enable Data to Thrive in Intelligent Edge and Mobile Computing Environments


The Coming Intersection of HPC and the Enterprise Data Center – Forbes blog by John Webster

Analyst News - Tue, 02/20/2018 - 09:32

High Performance Computing (HPC) traditionally exists as a separate and distinct discipline from enterprise data center computing. Both use the same basic components—servers, networks, storage arrays—but are optimized for different types of applications. Those within the data center are largely transaction-oriented, while HPC applications crunch numbers and high volumes of data. However, an intersection is emerging, driven more recently by business-oriented analytics that now fall under the general category of artificial intelligence (AI).

Data-driven, customer-facing online services are advancing rapidly in many industries, including financial services (online trading, online banking), healthcare (patient portals, electronic health records), and travel (booking services, travel recommendations). The explosive, global growth of SaaS and online services is leading to major changes in enterprise infrastructure, with new application development methodologies, new database solutions, new infrastructure hardware and software technologies, and new datacenter management paradigms. This growth will only accelerate as emerging Internet of Things (IoT)-enabled technologies like connected health, smart industry, and smart city solutions come online in the form of as-a-service businesses.

Business is now about digital transformation. In the minds of many IT executives, this typically means delivering cloud-like business agility to their user groups—transform, digitize, become more agile. And it is often the case that separate, distinctly new cloud computing environments are stood up alongside traditional IT to accomplish this. Transformational IT can now benefit from a shot of HPC.

HPC paradigms were born from the need to apply sophisticated analytics to large volumes of data gathered from multiple sources. Sound familiar? The Big Data way to say the same thing was “Volume, Variety, Velocity.” With the advent of cloud technologies, HPC applications have leveraged storage and processing delivered from shared, multi-tenant infrastructure. Many of the same challenges addressed by HPC practitioners are now faced by modern enterprise application developers.

As enterprise cloud infrastructures continue to grow in scale while delivering increasingly sophisticated analytics, we will see a move toward new architectures that closely resemble those employed by modern HPC applications. Characteristics of new cloud computing architectures include independently scaling compute and storage resources, continued advancement of commodity hardware platforms, and software-defined datacenter technologies—all of which can benefit from an infusion of HPC technologies. These are now coming from the traditional HPC vendors—HPE, IBM and Intel with its 3D XPoint, for example—as well as some new names like NVIDIA, the current leader in GPU cards for the AI market.

To extract better economic value from their data, enterprises can now more fully enable machine learning and deep neural networks by integrating HPC technologies. They can merge the performance advantages of HPC with AI applications running on commodity hardware platforms. Instead of reinventing the wheel, the HPC and Big Data compute-intensive paradigms are now coming together to provide organizations with the best of both worlds. HPC is advancing into the enterprise data center and it’s been a long time coming.

 


What Comes after CI and HCI? – Composable Infrastructures

Analyst News - Mon, 02/19/2018 - 18:07

There's an evolution occurring in IT infrastructure that's providing alternatives to the traditional server-, storage- and SAN-based systems enterprises have used for the past two decades or so. This evolution was first defined by "hyperscalers", the big public cloud and social media companies, and encompasses multiple technology approaches like Converged, Hyperconverged and now Composable Infrastructures. This blog will discuss these combined "Integrated Infrastructure" approaches and look at how they attempt to address the evolving needs of IT organizations.

Hyper-scale Infrastructures

Companies like Google, Facebook and AWS had to address a number of challenges, including huge data sets, unpredictable capacity requirements and dynamic business environments (first public cloud and social media, now IoT, AI, etc.), that stressed the ability of traditional IT to deliver services in a timely fashion. So they created a new model for IT that incorporated internally developed, software-based architectures running on standard hardware, providing the needed flexibility, scale and cost containment.

But enterprise IT doesn’t have the expertise to support this kind of do-it-yourself infrastructure, nor the desire to dedicate the resources or take on the risk. In general, enterprises want to use trusted suppliers and have clear systems responsibility. They need much of what hyper-scale systems provided, but with integrated solutions that are simple to operate, quick to deploy and easy to configure and re-configure.

CI, SDS and HCI

Converged Infrastructure solutions were some of the first attempts at an integrated infrastructure, creating certified stacks of existing servers, storage, networking components and server virtualization that companies bought by the rack. Some were sold as turnkey solutions by the manufacturer and others were sold as reference architectures that VARs or enterprises themselves could implement. They reduced the integration required and gave companies a rack-scale architecture that minimized setup costs and deployment time.

Hyperconverged Infrastructures (HCIs) took this to the next level, actually combining the storage and compute functions into modules that users could deploy themselves. Scaling was easy, too: just add more nodes. At the heart of this technology is a software-defined storage (SDS) layer that virtualizes the physical storage on each node and presents it to a hypervisor running on each node as well, which supports workloads and usually the SDS package itself.
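As a rough mental model of that architecture, the toy sketch below (not a real SDS implementation) shows how node-local capacity is aggregated into one shared pool and how the pool grows simply by adding nodes; the numbers and replication factor are assumptions chosen only to illustrate the resilience overhead.

```python
# Toy model of HCI storage pooling: each node contributes local drives,
# the SDS layer presents one shared pool, and scaling means adding nodes.
# Capacities and the replication factor are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    local_tb: float  # raw capacity from this node's local drives

@dataclass
class Cluster:
    replication_factor: int = 2  # copies kept by the SDS layer for resilience
    nodes: list[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Scaling the cluster is just adding another node.
        self.nodes.append(node)

    @property
    def usable_tb(self) -> float:
        # The pool spans every node; usable space is reduced by replication.
        return sum(n.local_tb for n in self.nodes) / self.replication_factor

if __name__ == "__main__":
    cluster = Cluster()
    for i in range(3):
        cluster.add_node(Node(name=f"node-{i}", local_tb=10.0))
    print(f"Pooled usable capacity: {cluster.usable_tb:.1f} TB")
```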

HCIs come in several formats, from a turnkey appliance sold by the HCI manufacturer to a software-only model where the customer chooses their hardware vendor. Some enterprises even put together their own HCI-like solution, running an SDS package on a compatible server chassis and adding the hypervisor.

While Converged and Hyperconverged Infrastructures provide value to the enterprise, they don't really provide a solution for every use case. HCIs were great as a consolidation play for the lower end: SMB and mid-market companies. Enterprises use them too, but more for independent projects or remote environments that need a turnkey infrastructure solution. In general, they're not using HCIs for mainstream data center applications because of concerns about creating silos of infrastructure and vendor lock-in, but also a feeling that the technology lacks maturity and isn't "mission critical" (based on the 2017 Evaluator Group study "HCI in the Enterprise").

While they're composed of traditional IT infrastructure components, CIs present a system that's certainly mature and capable of handling mission-critical workloads. But CIs are also relatively expensive and inflexible, since they're essentially bundles of legacy servers, storage and networking gear, rather than software-defined modules of commodity hardware with a common management platform. They also lack the APIs and programmable aspects that can support automation, agility and cloud connectivity.

Composable Infrastructure

Composable Infrastructure (CPI) is a comprehensive, rack-scale compute solution that combines some characteristics of both Converged and Hyperconverged Infrastructures. CPI disaggregates and then pools physical resources, allocating them at run time for a specific compute job and then returning them to the pool. It provides a complete compute environment that supports applications running in VMs, in containers and on bare metal.

CPI doesn’t use SDS, as HCIs do, to share the storage pool, but supports direct-attachment of storage devices (like drives and SSDs), eliminating the SDS software latency and cost. CPI also doesn’t require a hypervisor to run an SDS layer or workloads. Instead, it creates bare metal server instances that can support containers or a hypervisor if desired, reducing software licensing and hypervisor lock-in.

Composable Infrastructures are stateless architectures, meaning they're assembled at run time, and they can be controlled by third-party development platforms and management tools through APIs. This improves agility and makes CPI well suited for automation. For more information, see the Technology Insight paper "Composable – the Next Step in Integrated Infrastructures".
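To illustrate the compose/decompose cycle described above, here is a small hypothetical sketch; the API names and resource types are invented for illustration and do not represent any vendor's actual interface.

```python
# Hypothetical sketch of composable infrastructure: disaggregated resources
# sit in a pool, are "composed" into a bare-metal instance at run time, and
# are returned to the pool afterward. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpus: int
    drives: int

@dataclass
class ServerInstance:
    cpus: int
    drives: int

def compose(pool: ResourcePool, cpus: int, drives: int) -> ServerInstance:
    """Allocate resources from the pool for a specific compute job."""
    if cpus > pool.cpus or drives > pool.drives:
        raise RuntimeError("insufficient free resources in the pool")
    pool.cpus -= cpus
    pool.drives -= drives
    return ServerInstance(cpus=cpus, drives=drives)

def decompose(pool: ResourcePool, server: ServerInstance) -> None:
    """Return a composed instance's resources to the pool when the job ends."""
    pool.cpus += server.cpus
    pool.drives += server.drives

if __name__ == "__main__":
    pool = ResourcePool(cpus=128, drives=48)
    bare_metal = compose(pool, cpus=16, drives=4)  # could host containers or a hypervisor
    decompose(pool, bare_metal)
    print(pool)  # pool is back to ResourcePool(cpus=128, drives=48)
```

The statelessness is the point: because nothing is permanently bound to a server, the same pooled hardware can be re-composed for the next job through the management API.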

 

The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.


Systems and Storage News Roundup (2/5-2/12)

Analyst News - Thu, 02/15/2018 - 09:48


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

 

Actifio

2/12 – Escaping the Gravitational Pull of Big-Data

CA Technologies

2/8 – Erica Christensen, CA Technologies, Discusses The Importance of Producing STEM10

Cisco

2/5 – Global Cloud Index Projects Cloud Traffic to Represent 95 Percent of Total Data Center Traffic by 2021

Dell EMC

2/6 – Dell EMC Expands Server Capabilities for Software-defined, Edge and High-Performance Computing

Huawei

2/8 – Huawei Releases E2E Cloud VR System Prototype

Infinidat

2/6 – Infinidat Backup Appliance: When the Quality and Cost of Storage Really Matters

MapR

2/8 – MapR Simplifies End-to-End Workflow for Data Scientists

Maxta

2/8 – Five Requirements for Hyper-Converged Infrastructure Software

NEC

2/9 – NEC Succeeds in Simultaneous Digital Beamforming That Supports 28 GHz Band For 5G Communications

Oracle

2/12 – Oracle Cloud Growth Driving Aggressive Global Expansion

Supermicro Computer

2/7 – Supermicro Expands Edge Computing and Network Appliance Portfolio with New High Density SoC Solutions

SwiftStack

2/9 – Multicloud Storage Mitigates Risk of Public Cloud Lock-In

Tintri

2/5 – Automation and The Age of The Self-Driving Data Center


Systems and Storage News Roundup (1/15-1/22)

Analyst News - Thu, 01/25/2018 - 10:35

Hybrid Cloud: 4 Top Use Cases

Analyst News - Wed, 01/17/2018 - 12:19

Interop ITX expert Camberley Bates explains which hybrid cloud deployments are most likely to succeed.

In the early days of cloud computing, experts talked a lot about the relative merits of public and private clouds and which would be the better choice for enterprises. These days, most enterprises aren’t deciding between public or private clouds; they have both. Hybrid and multi-cloud environments have become the norm.

However, setting up a true hybrid cloud, with integration between a public cloud and private cloud environment, can be very challenging.

“If the end user does not have specific applications in mind about what they are building [a hybrid cloud] for and what they are doing, we find that they typically fail,” Camberley Bates, managing director and analyst at Evaluator Group, told me in an interview.

So which use cases are best suited to the hybrid cloud? Bates highlighted three scenarios where organizations are experiencing the greatest success with their hybrid cloud initiatives, and one use case that’s popular but more challenging.

1. Disaster recovery and business continuity

Setting up an independent environment for disaster recovery (DR) or business continuity purposes can be a very costly proposition. Using a hybrid cloud setup, where the on-premises data center fails over to a public cloud service in the case of an emergency, is much more affordable. Plus, it can give enterprises access to IT resources in a geographic location far enough away from their primary site that they are unlikely to be affected by the same disaster events.

Bates noted that costs are usually a big driver for choosing hybrid cloud over other DR options. With hybrid cloud, "I have a flexible environment where I'm not paying for all of that infrastructure all the time constantly," she said. "I have the ability to expand very rapidly if I need to. I have a low-cost environment. So if I combine those pieces, suddenly disaster recovery as an insurance policy environment is cost effective."

2. Archive

Using a hybrid cloud for archive data offers benefits very similar to disaster recovery, and enterprises often undertake DR and archive hybrid cloud efforts simultaneously.

“There’s somewhat of a belief system that some people have that the cloud is cheaper than on-prem, which is not necessarily true,” cautioned Bates. However, she added, “It is really cheap to put data at rest in a hybrid cloud for long periods of time. So if I have data that is truly at rest and I’m not moving it in and out, it’s very cost effective.”

3. DevOps application development

Another area where enterprises are experiencing a lot of success with hybrid clouds is with application development. As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process.

Bates said, “The DevOps guys are using [public cloud] to set up and do application development.” She explained, “The public cloud is very simple and easy to use. It’s very fast to get going with it.”

But once applications are ready to deploy in production, many enterprises choose to move them back to the on-premises data center, often for data governance or cost reasons, Bates explained. The hybrid cloud model makes it possible for the organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production.

4. Cloud bursting

Many organizations are also interested in using a hybrid cloud for “cloud bursting.” That is, they want to run their applications in a private cloud until demand for resources reaches a certain level, at which point they would fail over to a public cloud service.
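The placement logic behind cloud bursting can be sketched very simply; the threshold and names below are illustrative assumptions, and, as noted next, real implementations are considerably harder than the sketch suggests.

```python
# Simplified sketch of a cloud-bursting placement policy: stay in the
# private cloud until utilization crosses a threshold, then overflow
# to a public cloud. The threshold value is an assumption for illustration.
BURST_THRESHOLD = 0.85

def place_workload(private_utilization: float) -> str:
    """Decide where the next workload should run."""
    if private_utilization < BURST_THRESHOLD:
        return "private-cloud"
    return "public-cloud"  # burst until demand subsides

if __name__ == "__main__":
    for load in (0.40, 0.70, 0.90, 0.95):
        print(f"utilization {load:.0%} -> {place_workload(load)}")
```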

However, Bates said, “Cloud bursting is a desire and a desirable capability, but it is not easy to set up, is what our research found.”

Bates has seen some companies, particularly financial trading companies, be successful with hybrid cloud setups, but this particular use case continues to be very challenging to put into practice.

 

Read this article on NetworkComputing.com here.


Systems and Storage News Roundup (1/8-1/15)

Analyst News - Tue, 01/16/2018 - 11:28

Big Data Meets Data Fabric and Multi-cloud – Forbes blog by John Webster

Analyst News - Sun, 01/14/2018 - 19:06

As cloud computing progresses, an age-old problem for IT is resurfacing—how to integrate and secure data stores that are dispersed geographically and across a growing diversity of applications. Data silos, which have always limited an organization's ability to extract value from all of its data, become even more isolated. Consider the mainframe, with its stores of critical data going back decades, as the original data silo. Mainframe users still want to leverage this data for other applications such as AI, but must overcome accessibility and formatting barriers in order to do so. It's a task made easier by software vendors like Syncsort, whose Ironstream moves data off the mainframe in a way that is easily digestible by Splunk applications, for example.

But as cloud computing progresses, the siloed data issue becomes even more apparent as IT executives try to broaden their reach to encompass and leverage their organization’s data stored in multiple public clouds (AWS, Azure, GCP) along with all that they store on site. The cloud-world solution to this problem is what is now becoming known as the Data Fabric.

Data Fabrics—essentially information networks implemented on a grand scale across physical and virtual boundaries—focus on the data aspect of cloud computing as the unifying factor. To conceive of distributed, multi-cloud computing only in terms of infrastructure would miss a fundamental aspect of all computing technology – data. Data is integral and must be woven into multi-cloud computing architectures. The concept that integrates data with distributed and cloud-based computing is the Data Fabric.

The reason for going the way of the Data Fabric, at least on a conceptual basis, is to break down the data silos that are inherent in isolated computing clusters and on- and off-premises clouds. Fabrics allow data to flow and be shared by applications running in both private and public cloud data centers. They move data to where it's needed at any given point in time. In the context of IoT, for example, they enable analytics to be performed in real time on data being generated by geographically dispersed sensors "on the edge."

Not surprisingly, the Data Fabric opportunity presents fertile ground for storage vendors. NetApp introduced their conceptual version of it four years ago and have been instantiating various aspects of it ever since. More recently, a number of Cloud Storage Services vendors have put forward their Data Fabric interpretations that are generally based on global file systems. These include Elastifile, Nasuni, and Avere—recently acquired by Microsoft.

Entries are also coming from other unexpected sources. One is from the ever-evolving Big Data space. MapR’s Converge-X Data Fabric is an exabyte scale, globally distributed data store for managing files, objects, and containers across multiple edge, on-premises, and public cloud environments.

At least two-thirds of all large enterprise IT organizations now see hybrid clouds—which are really multi-clouds—as their long-term IT future. Operating in this multi-cloud world requires new data management processes and policies and most of all, a new data architecture. Enterprise IT will be increasingly called upon to assimilate a widening range of applications directed toward mobile users, data practitioners and outside business partners. In 2018, a growing and increasingly diverse list of vendors will offer their interpretations of the Data Fabric.


HCI – What Happened in 2017 and What to Watch for in 2018

Analyst News - Sun, 01/14/2018 - 18:53

The Hyperconverged Infrastructure (HCI) segment continued to grow throughout 2017. Evaluator Group research in this area expanded in 2017 as well, adding products from NetApp, Lenovo and IBM. In this blog we’ll review some of the developments that occurred in 2017 and discuss what to look for in 2018.

2017 saw some consolidation in the HCI market plus new hyperconverged solutions from three big infrastructure companies. Early in the year HPE bought SimpliVity and standardized on this software stack for their HCI offering. IBM joined the HCI market with a Nutanix-based product running on their Power processors and Lenovo added the vSAN-based VX Series to their existing Nutanix-based HX Series HCI solution. NetApp released a new HCI solution, using the architecture from their SolidFire family of scale-out, all-flash arrays.

Going Enterprise?

In 2017 HCI vendors were touting their enterprise credentials, listing their Fortune 100 customers and providing some detail on selected use cases. The strong implication is that "enterprise" customers mean "enterprise" use cases, and that HCIs are capable of replacing the tier-1 infrastructures that support mission-critical applications. While there are likely some examples of this occurring, our data suggests this isn't happening to the extent that HCI vendors are suggesting.

Based on contact with end-user customers and on our HCI in the Enterprise studies over the past two years, our information indicates that enterprises are indeed buying HCIs, but not currently replacing tier-1 infrastructure with these systems. They're using them for new deployments and to consolidate existing tier-2 and tier-3 applications. However, this doesn't prove that HCIs are incapable of supporting tier-1 use cases.

More Clouds

Companies want the ability to connect to the public cloud as a backup target, to run workloads and to do cloud-native development as well. Many HCI vendors have responded with "multi-cloud" solutions that run their software stack on premises and in the cloud, with support for containers and DevOps platforms. Some are including cloud orchestration software that automates backend processes for IT and provides self-service operation for developers and application end users.

NetApp and IBM Release HCI Solutions

In 2017 NetApp released its HCI based on the SolidFire Element OS. The product promises lower TCO, due to the performance of its SolidFire software stack, which was designed for all-flash, and to an architecture that separates storage nodes from compute nodes. This allows NetApp HCI clusters to scale storage and compute resources independently.

2017 also saw IBM announce an HCI product using the Nutanix software stack running on IBM Power servers. The company is positioning this offering as a high performance HCI solution for big data, analytics and similar use cases. They’re also touting the lower TCO driven by their ability to support more VMs per node than other leading HCIs, based on IBM testing.

VMware’s vSAN 6.6 release added some features like software-based encryption and more data protection options for stretched clusters, but not the big jump in functionality we saw in 2016 with vSAN 6.2. vSAN came in as the most popular solution in our HCI in the Enterprise study, for the second year in a row. The Ready Nodes program seems to be a big success as the vast majority of vSAN licenses are sold with one of more than a dozen OEM hardware options.

Nutanix kept up its pace of introducing new features, with AOS 5.0 and 5.1 adding over two dozen, and announced some interesting cloud capabilities with Xi Cloud Services, Calm and a new partnership with Google Cloud Services (going up against Microsoft's Azure and Azure Stack). See the Industry Update for more information.

Nutanix continues to grow their business every quarter, but have yet to turn a profit – although they claim they're on the road to becoming profitable. The CEO said they are "getting out of the hardware business", instead emphasizing their software stack and cloud-based services (as evidenced by the Xi and GCS activities). That is fine as long as they plan to sell only through others, but to date they continue to sell their own systems. They will continue to sell the NX series of HCI appliances using Supermicro servers, but won't recognize revenue from it and have taken it off the comp plan. Given the consolidation we're seeing in the HCI market, and the fact that server vendors like HPE and Cisco have their own software, this is probably the right move.

What to watch for in 2018

NetApp – see how their HCI does in the market. Their disaggregated technology is unique and seems to address one of the issues HCI vendors have faced: inefficient scaling. While this is a new HCI product, the SolidFire software stack is not new, and NetApp has a significant presence in the IT market.

HPE SimpliVity – after some reorganization and acquisition pains, this product may be poised to take off. We have always liked SimpliVity's technology, and adding HPE's resources and server expertise, plus a captive sales organization and an enterprise data center footprint, may do the trick. That said, HPE has some catching up to do, as Nutanix and the vSAN-based products are dominating the market.

Cisco – the company claims 2,000+ customers in the first 18 months of HyperFlex sales, but a large portion of those deals are most likely sales to existing UCS customers. The Springpath technology continues to mature, but the question is what kind of success Cisco can have in 2018 as it expands beyond its installed base.

IBM – can they make HCI work in the big data and analytics space? Can they be successful with Nutanix running on a new server platform? We shouldn’t underestimate IBM but this is a new direction for technology that has been based on industry standard hardware and it’s being sold to a new market segment.

Orchestration and Analytics – HCI vendors have been introducing more intelligence into their management platforms, with some providing policy-based storage optimization and data protection, plus monitoring and management of the physical infrastructure. We expect this to continue with the addition of features that leverage data captured from the installed base and analyzed to generate baselines and best practices.

For more information see Evaluator Series Research on Hyperconverged Infrastructures including Comparison Matrix, Product Briefs, Product Analyses and the study “HCI in the Enterprise”.

The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.


Systems and Storage News Roundup (1/1-1/8)

Analyst News - Fri, 01/12/2018 - 08:30


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

 

Actifio

1/8 – Actifio Update: Disruptive, Profitable and Growing…

Cisco

1/8 – Cisco Announces Infinite Broadband Unlocked for cBR-8

HPE

1/8 – HPE Provides Customer Guidance in Response to Industrywide Microprocessor Vulnerability

Intel

1/7 – New 8th Gen Intel Core Processors with Radeon RX Vega M Graphics Offer 3x Boost in Frames Per Second in Devices as Thin as 17mm

Mellanox

1/4 – Mellanox Ships BlueField System-on-Chip Platforms and SmartNIC Adapters to Leading OEMs and Hyperscale Customers

Micron

1/8 – Micron and Intel Announce Update to NAND Memory Joint Development Program

OCZ

1/8 – Toshiba Unveils Mainstream RC100 NVMe SSD Series at CES 2018

SanDisk

1/8 – Western Digital Unveils New Solutions to Help People Capture, Preserve, Access and Share Their Ever-Growing Collections of Photos and Videos

Seagate

1/8 – Seagate Teams Up With Industry-Leading Partners To Offer New Mobile Data Storage Solutions At CES 2018

SolarWinds

1/8 – SolarWinds Acquires Loggly, Strengthens Portfolio of Cloud Offerings

Supermicro

1/8 – Supermicro Unveils New All-Flash 1U Server That Delivers 300% Better Storage Density with Samsung’s Next Generation Small Form Factor (NGSFF) Storage


Systems and Storage News Roundup (12/25-1/1)

Analyst News - Thu, 01/04/2018 - 12:32


If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

 

CA Technologies

12/28 – Three Ruling Technology Trends

 

Huawei

12/29 – China Mobile Chooses Huawei’s CloudFabric Solution for its SDN Data Center Network

 
