Feed aggregator

Flash Memory Summit – August 6-9th, 2018

Analyst News - Fri, 03/30/2018 - 14:51

The Evaluator Group team will be attending this year’s Flash Memory Summit in Santa Clara, CA.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post Flash Memory Summit – August 6-9th, 2018 appeared first on Evaluator Group.

Categories: Analyst News

Cisco Live – June 10-14th, 2018

Analyst News - Fri, 03/30/2018 - 14:49

The Evaluator Group team will be attending this year’s Cisco Live in Orlando, FL.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post Cisco Live – June 10-14th, 2018 appeared first on Evaluator Group.

Categories: Analyst News

VeeamON – May 14-16th, 2018

Analyst News - Fri, 03/30/2018 - 14:47

The Evaluator Group team will be attending this year’s VeeamON in Chicago, IL.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post VeeamON – May 14-16th, 2018 appeared first on Evaluator Group.

Categories: Analyst News

Pure Accelerate – May 22-24th, 2018

Analyst News - Fri, 03/30/2018 - 14:29

The Evaluator Group team will be attending this year’s Pure Accelerate in San Francisco, CA.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post Pure Accelerate – May 22-24th, 2018 appeared first on Evaluator Group.

Categories: Analyst News

ZertoCon – May 21-23rd, 2018

Analyst News - Fri, 03/30/2018 - 14:27

The Evaluator Group team will be attending this year’s ZertoCON in Boston, MA.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post ZertoCon – May 21-23rd, 2018 appeared first on Evaluator Group.

Categories: Analyst News

Nutanix .NEXT – May 8-10th, 2018

Analyst News - Fri, 03/30/2018 - 14:25

The Evaluator Group team will be attending this year’s Nutanix .NEXT in New Orleans, LA.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post Nutanix .NEXT – May 8-10th, 2018 appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (3/19-3/26)

Analyst News - Thu, 03/29/2018 - 12:00

If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

Actifio

3/19 – Public Cloud: Real-World Lessons of Strategic Success

Datera

3/26 – Intent-Based Management Will Revolutionize the Storage Industry

Dell EMC

3/21 – Dell EMC Takes Open Networking to the Edge for Next-Generation Access

Huawei

3/19 – Huawei Releases the Converged Flash Array OceanStor V5

IBM

3/20 – IBM Launches Watson Data Kits to Help Accelerate Enterprise AI Adoption

Intel

3/26 – Building the Foundation for a Better 5G

NEC

3/26 – MegaFon and NEC test Artificial Intelligence (AI) for Planning and Maintenance of Transport Network Resources

Oracle

3/20 – Oracle Java SE 10 Release Arrives

The post Systems and Storage News Roundup (3/19-3/26) appeared first on Evaluator Group.

Categories: Analyst News

HANA – SAP’s Strategy is now In-Memory – Forbes blog by John Webster

Analyst News - Tue, 03/20/2018 - 09:50

The ideal computational system keeps all data in the fastest accessible place – typically system memory. However, memory volatility and cost have traditionally limited its size. Data that needs to be persisted shuffles back and forth between memory and disk. That shuffling is a drag on performance, particularly when an application consumes large volumes of data. Disk-induced latency is of particular concern for AI and machine learning applications, where real-time information delivery is the ultimate objective.

Thankfully, the same kinds of technology advancements that are driving the flash storage revolution are also quietly driving significant changes in the way computer system memory is implemented and used as persistent storage. It is now economically feasible, for example, to run an entire database in server memory—a significant technological advancement, now known as in-memory computing, that promises to alter the way computing systems are designed and implemented.

Non-volatile memory (NVRAM) is riding down the same cost-per-capacity curve as SSDs. And technologies are now coming to market that allow discrete memory modules inside clustered servers to be networked together to form a scalable memory fabric. Intel (Optane) and Micron (QuantX), for example, are advancing affordable 3D XPoint non-volatile memory modules capable of supporting real-time information applications.
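
To make the latency argument concrete, here is a minimal, illustrative Python sketch (not from the original post) that contrasts lookups against an in-memory structure with reads from a disk-backed SQLite store. Absolute numbers vary widely with hardware and caching, but the in-memory path is typically orders of magnitude faster.

```python
# Minimal sketch: contrast in-memory lookups with disk-backed reads.
# Illustrative only; timings depend on hardware, caching and the store used.
import sqlite3
import time

N = 50_000

# In-memory structure: every lookup is a RAM access.
in_memory = {i: f"value-{i}" for i in range(N)}

# Disk-backed store: each read goes through query processing and (potentially) I/O.
conn = sqlite3.connect("kv_demo.db")
conn.execute("CREATE TABLE IF NOT EXISTS kv (k INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT OR REPLACE INTO kv VALUES (?, ?)", in_memory.items())
conn.commit()

start = time.perf_counter()
for i in range(N):
    _ = in_memory[i]
mem_s = time.perf_counter() - start

cur = conn.cursor()
start = time.perf_counter()
for i in range(N):
    cur.execute("SELECT v FROM kv WHERE k = ?", (i,))
    _ = cur.fetchone()
disk_s = time.perf_counter() - start
conn.close()

print(f"in-memory: {mem_s:.4f}s   disk-backed: {disk_s:.4f}s")
```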

One of SAP’s hottest offerings right now is HANA, an in-memory data platform, along with its S/4HANA business suite built on HANA technology. IBM and Oracle also have products in this category, and SAP currently claims over 21,000 HANA and 7,900 S/4HANA licensed customers. However, unlike the other two, SAP is all-in on in-memory. In a February 28, 2018 corporate fact sheet, SAP stated that “…our strategy is to become the most innovative cloud company powered by HANA.”

I began following in-memory computing technology as a bleeding-edge trend three years ago when the search was on for ways to run Big Data analytics applications in real time. Now, the significance of SAP’s strategy statement to in-memory computing—vendors and users alike—lies in the fact that a software company with over 378,000 customers in over 180 countries is staking its future on in-memory advancement. SAP aims to take in-memory mainstream.

The post HANA – SAP’s Strategy is now In-Memory – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

Dell Technologies World – April 30-May 3rd, 2018

Analyst News - Mon, 03/19/2018 - 17:23

The Evaluator Group team will be attending this year’s Dell Technologies World in Las Vegas, NV.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post Dell Technologies World – April 30-May 3rd, 2018 appeared first on Evaluator Group.

Categories: Analyst News

NAB Show – April 7-12th, 2018

Analyst News - Mon, 03/19/2018 - 17:16

The Evaluator Group team will be attending this year’s NAB Show in Las Vegas, NV.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post NAB Show – April 7-12th, 2018 appeared first on Evaluator Group.

Categories: Analyst News

Interop ITX – April 30-May 4th, 2018

Analyst News - Mon, 03/19/2018 - 17:14

The Evaluator Group team will be attending this year’s Interop ITX conference in Las Vegas, NV.  Click here for more information on this conference and contact us if you would like to set up a time to talk with Evaluator Group at this event.

The post Interop ITX – April 30-May 4th, 2018 appeared first on Evaluator Group.

Categories: Analyst News

Scale-Out Data Protection: Research and Coverage Update

Analyst News - Mon, 03/19/2018 - 15:52

They say that time flies when you’re having fun, so I must be having too much fun these days. My first (and only) blog with Evaluator Group was months ago, and it is long past time for this one.

Though it wasn’t covered in a blog as I had originally planned, hopefully you were able to read about what is driving customers to re-evaluate their data protection strategies in the Technical Insight published in January.  I’ve heard from several readers that they agree wholeheartedly with one or more of the drivers described in the report.  A few other readers have provided input on additional drivers they see having impact.

We are continuing to work on updates to our data protection research materials. One of the top areas we are researching is scale-out data protection. There is some variability in this scale-out space, with solutions that focus mainly on backup and others that support broader secondary storage workloads. The maturity of vendors in this space also varies, ranging from some who have been at it for more than a decade to young startups. Evaluator Group is working on a number of research documents in this space, so watch for announcements in the near future.

Evaluator Group is also continuing to explore several enterprise data management topics that are gaining in popularity and relevance.  Driving this level of interest is the amount of data moving into private and public cloud environments.  As the data is copied or moved, many solutions analyze and catalog various aspects of data such as the data type, who created it, when it was created, etc.  The creation of this metadata may be the intent of some of these solutions or a byproduct. Either way, it opens interesting new opportunities to better manage and derive value from an enterprise’s information.
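
As a rough illustration of the cataloging idea described above, the following Python sketch (our own, not drawn from any vendor’s product) walks a directory tree and records a few of the attributes mentioned: a crude data-type proxy, size, owner and modification time. Real data management products capture far richer metadata and operate at cloud scale.

```python
# Minimal sketch of a file-metadata catalog: walk a directory tree and record
# a few attributes per file. Field names and scope are illustrative only.
import csv
import time
from pathlib import Path

def build_catalog(root: str, output_csv: str) -> None:
    with open(output_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "type", "size_bytes", "owner_uid", "modified"])
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            st = path.stat()
            writer.writerow([
                str(path),
                path.suffix.lstrip(".") or "unknown",   # crude data-type proxy
                st.st_size,
                st.st_uid,                              # "who" (POSIX owner; 0 on Windows)
                time.strftime("%Y-%m-%d %H:%M", time.localtime(st.st_mtime)),
            ])

if __name__ == "__main__":
    build_catalog(".", "catalog.csv")
```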

As described in the Technical Insight referenced above, one of the interesting areas in the enterprise data management space we have been researching is Copy Data Management (CDM).  CDM is about locating and managing all the copies of your data, wherever they are, and optimizing the storage of those copies.  As with many of today’s IT solutions, CDM vendors have different approaches to how they identify copies of data and what they do to coordinate and optimize those copies. Soon, the task will be to determine how best to share the results of that research with you all.

Stay tuned.

The post Scale-Out Data Protection: Research and Coverage Update appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (2/26-3/5)

Analyst News - Wed, 03/07/2018 - 12:58

If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

Actifio

3/1 – At Waste Industries, Digital Transformation Initiatives Rooted in Trust

Cavium

2/26 – Cavium Showcases Next-Generation 5G Radio Access and Core Networks, Telco Cloud and Edge Infrastructure Solutions at Mobile World Congress 2018

Cisco

2/27 – Cisco NB-IoT Platform Now Commercially Available Worldwide, Making it Practical and Profitable for Companies to Deliver Connected Services via Low Cost, Low Power Devices

Commvault

2/28 – Commvault and Mercy Partner to Deliver Powerful New Cloud Backup and Disaster Recovery Service to The Health Care Market

Fujitsu

2/26 – Fujitsu Launches SAP S/4HANA Conversion Service, Fully Supports Migration to Next Generation ERP

HPE

2/26 – HPE Enhances Safety and Security of Citizens Through Implementation of India’s First Cloud-Based Integrated Command and Control Center

Infinidat

2/27 – Introducing Infinidat’s Synchronous Replication For The Infinibox

Intel

2/27 – Intel Introduces ‘Intel AI: In Production’ Program-A New Way to Bring Artificial Intelligence Devices to Market

NEC

3/5 – NEC Showcases The Latest in Intelligent Public Transport Solutions at IT Trans

Pure Storage

2/27 – It’s Time to Prime The AI Engine

SanDisk

2/26 – Western Digital NVMe Solutions Enable Data to Thrive in Intelligent Edge and Mobile Computing Environments

The post Systems and Storage News Roundup (2/26-3/5) appeared first on Evaluator Group.

Categories: Analyst News

The Coming Intersection of HPC and the Enterprise Data Center – Forbes blog by John Webster

Analyst News - Tue, 02/20/2018 - 09:32

High Performance Computing (HPC) traditionally exists as a separate and distinct discipline from enterprise data center computing. Both use the same basic components—servers, networks, storage arrays—but are optimized for different types of applications. Those within the data center are largely transaction-oriented, while HPC applications crunch numbers and high volumes of data. However, an intersection is emerging, driven more recently by business-oriented analytics that now fall under the general category of artificial intelligence (AI).

Data-driven, customer-facing online services are advancing rapidly in many industries, including financial services (online trading, online banking), healthcare (patient portals, electronic health records), and travel (booking services, travel recommendations). The explosive, global growth of SaaS and online services is leading to major changes in enterprise infrastructure, with new application development methodologies, new database solutions, new infrastructure hardware and software technologies, and new datacenter management paradigms. This growth will only accelerate as emerging Internet of Things (IoT)-enabled technologies like connected health, smart industry, and smart city solutions come online in the form of as-a-service businesses.

Business today is about digital transformation. In the minds of many IT executives, this typically means delivering cloud-like business agility to their user groups—transform, digitize, become more agile. And it is often the case that separate, distinctly new cloud computing environments are stood up alongside traditional IT to accomplish this. Transformational IT can now benefit from a shot of HPC.

HPC paradigms were born from the need to apply sophisticated analytics to large volumes of data gathered from multiple sources. Sound familiar? The Big Data way to say the same thing was “Volume, Variety, Velocity.” With the advent of cloud technologies, HPC applications have leveraged storage and processing delivered from shared, multi-tenant infrastructure. Many of the same challenges addressed by HPC practitioners are now faced by modern enterprise application developers.

As enterprise cloud infrastructures continue to grow in scale while delivering increasingly sophisticated analytics, we will see a move toward new architectures that closely resemble those employed by modern HPC applications. Characteristics of new cloud computing architectures include independently scaling compute and storage resources, continued advancement of commodity hardware platforms, and software-defined datacenter technologies—all of which can benefit from an infusion of HPC technologies. These are now coming from the traditional HPC vendors—HPE, IBM, and Intel with its 3D XPoint, for example—as well as some newer names like NVIDIA, the current leader in GPU cards for the AI market.

To extract better economic value from their data, enterprises can now more fully enable machine learning and deep neural networks by integrating HPC technologies. They can merge the performance advantages of HPC with AI applications running on commodity hardware platforms. Instead of reinventing the wheel, the HPC and Big Data compute-intensive paradigms are now coming together to provide organizations with the best of both worlds. HPC is advancing into the enterprise data center and it’s been a long time coming.
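
As a small, hedged illustration of running AI-style math on commodity hardware, the Python sketch below times the same dense matrix multiplication on a CPU and, if one is present, on a GPU. It assumes PyTorch is installed and is not drawn from the blog itself; the observed speedup depends entirely on the hardware.

```python
# Minimal sketch: run the same dense matrix multiply on CPU and, if available, GPU.
# Assumes PyTorch is installed; numbers are illustrative only.
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()           # make sure setup has finished
    start = time.perf_counter()
    torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()           # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU matmul: {timed_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU matmul: {timed_matmul('cuda'):.3f}s")
else:
    print("No CUDA device detected; GPU path skipped.")
```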

The post The Coming Intersection of HPC and the Enterprise Data Center – Forbes blog by John Webster appeared first on Evaluator Group.

Categories: Analyst News

What Comes after CI and HCI? – Composable Infrastructures

Analyst News - Mon, 02/19/2018 - 18:07

There’s an evolution occurring in IT infrastructure that’s providing alternatives to the traditional server-, storage- and SAN-based systems enterprises have used for the past two decades or so. This evolution was first defined by “hyperscalers”, the big public cloud and social media companies, and encompasses multiple technology approaches like Converged, Hyperconverged and now Composable Infrastructures. This blog will discuss these combined “Integrated Infrastructure” approaches and look at how they attempt to address the evolving needs of IT organizations.

Hyper-scale Infrastructures

Companies like Google, Facebook and AWS had to address a number of challenges, including huge data sets, unpredictable capacity requirements and dynamic business environments (first public cloud and social media, now IoT, AI, etc.), that stressed the ability of traditional IT to deliver services in a timely fashion. So they created a new model for IT that incorporated internally developed, software-based architectures running on standard hardware, providing the needed flexibility, scale and cost containment.

But enterprise IT doesn’t have the expertise to support this kind of do-it-yourself infrastructure, nor the desire to dedicate the resources or take on the risk. In general, enterprises want to use trusted suppliers and have clear systems responsibility. They need much of what hyper-scale systems provided, but with integrated solutions that are simple to operate, quick to deploy and easy to configure and re-configure.

CI, SDS and HCI

Converged Infrastructure solutions were some of the first attempts at an integrated infrastructure, creating certified stacks of existing servers, storage, networking components and server virtualization that companies bought by the rack. Some were sold as turnkey solutions by the manufacturer and others were sold as reference architectures that VARs or enterprises themselves could implement. They reduced the integration required and gave companies a rack-scale architecture that minimized setup costs and deployment time.

Hyperconverged Infrastructures (HCIs) took this to the next level, actually combining the storage and compute functions into modules that users could deploy themselves. Scaling was easy, too: just add more nodes. At the heart of this technology was a software-defined storage (SDS) layer that virtualized the physical storage on each node and presented it to a hypervisor that also ran on each node, supporting workloads and usually the SDS package itself.

HCIs come in several formats, from a turnkey appliance sold by the HCI manufacturer to a software-only model where the customer chooses their hardware vendor. Some enterprises even put together their own HCI-like solution, running an SDS package on a compatible server chassis and adding the hypervisor.

While Converged and Hyperconverged Infrastructures provide value to the enterprise, they don’t really provide a solution for every use case. HCIs were great as a consolidation play for the lower end, SMB and mid-market companies. Enterprises use them too, but more for independent projects or remote environments that need a turnkey infrastructure solution. In general, they’re not using HCIs for mainstream data center applications because of concerns about creating silos of infrastructure and vendor lock-in, but also a feeling that the technology lacks maturity and isn’t “mission critical” (based on the 2017 Evaluator Group study “HCI in the Enterprise”).

While they’re composed of traditional IT infrastructure components, CIs present a system that’s certainly mature and capable of handling mission-critical workloads. But CIs are also relatively expensive and inflexible, since they’re essentially bundles of legacy servers, storage and networking gear rather than software-defined modules of commodity hardware with a common management platform. They also lack the APIs and programmable aspects that can support automation, agility and cloud connectivity.

Composable Infrastructure

Composable Infrastructure (CPI) is a comprehensive, rack-scale compute solution that combines some characteristics of both Converged and Hyperconverged Infrastructures. CPI disaggregates and then pools physical resources, allocating them at run time for a specific compute job and then returning them to the pool. It provides a comprehensive compute environment that supports applications running in VMs, containers and a bare-metal OS.

CPI doesn’t use SDS, as HCIs do, to share the storage pool, but supports direct attachment of storage devices (like drives and SSDs), eliminating SDS software latency and cost. CPI also doesn’t require a hypervisor to run an SDS layer or workloads; instead, it creates bare-metal server instances that can support containers or a hypervisor if desired, reducing software licensing and hypervisor lock-in.

Composable Infrastructures are stateless architectures, meaning they’re assembled at run time, and they can be controlled by third-party development platforms and management tools through APIs. This improves agility and makes CPI well suited for automation. For more information, see the Technology Insight paper “Composable – the Next Step in Integrated Infrastructures”.
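
To make the API-driven model concrete, here is a hypothetical Python sketch of composing and then releasing a bare-metal node through a composer’s REST endpoint. The URL, payload fields and token are invented for illustration; real composable systems (DMTF Redfish-based composers, for example) define their own schemas.

```python
# Hypothetical sketch of driving a composable-infrastructure controller through
# a REST API. The endpoint, payload fields, and token below are invented for
# illustration only; real composers define their own schemas.
import requests

COMPOSER_URL = "https://composer.example.com/api/v1"   # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                                # hypothetical credential
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def compose_node(cores: int, memory_gib: int, drives: int) -> str:
    """Request a bare-metal instance assembled from pooled resources."""
    payload = {
        "compute": {"cores": cores, "memory_gib": memory_gib},
        "storage": {"direct_attached_drives": drives},
        "boot": {"mode": "bare-metal"},
    }
    resp = requests.post(f"{COMPOSER_URL}/composed-nodes",
                         json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["node_id"]

def release_node(node_id: str) -> None:
    """Return the node's resources to the shared pool."""
    resp = requests.delete(f"{COMPOSER_URL}/composed-nodes/{node_id}",
                           headers=HEADERS, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    node_id = compose_node(cores=16, memory_gib=128, drives=4)
    print("composed node:", node_id)
    # ... run workloads on the bare-metal instance, then return the resources ...
    release_node(node_id)
```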

The amount and diversity of technology available in infrastructure products can be overwhelming for those trying to evaluate appropriate solutions. In this blog we discuss pertinent topics to help IT professionals think outside the checkbox of features and functionality.

The post What Comes after CI and HCI? – Composable Infrastructures appeared first on Evaluator Group.

Categories: Analyst News

Systems and Storage News Roundup (2/5-2/12)

Analyst News - Thu, 02/15/2018 - 09:48

If your favorite vendor is missing from our news roundup, it’s likely they didn’t have any pertinent announcements last week. If there’s a vendor you’re interested in seeing, email me at Nick@evaluatorgroup.com.

Actifio

2/12 – Escaping the Gravitational Pull of Big-Data

CA Technologies

2/8 – Erica Christensen, CA Technologies, Discusses The Importance of Producing STEM10

Cisco

2/5 – Global Cloud Index Projects Cloud Traffic to Represent 95 Percent of Total Data Center Traffic by 2021

Dell EMC

2/6 – Dell EMC Expands Server Capabilities for Software-defined, Edge and High-Performance Computing

Huawei

2/8 – Huawei Releases E2E Cloud VR System Prototype

Infinidat

2/6 – Infinidat Backup Appliance: When the Quality and Cost of Storage Really Matters

MapR

2/8 – MapR Simplifies End-to-End Workflow for Data Scientists

Maxta

2/8 – Five Requirements for Hyper-Converged Infrastructure Software

NEC

2/9 – NEC Succeeds in Simultaneous Digital Beamforming That Supports 28 GHz Band For 5G Communications

Oracle

2/12 – Oracle Cloud Growth Driving Aggressive Global Expansion

Supermicro Computer

2/7 – Supermicro Expands Edge Computing and Network Appliance Portfolio with New High Density SoC Solutions

SwiftStack

2/9 – Multicloud Storage Mitigates Risk of Public Cloud Lock-In

Tintri

2/5 – Automation and The Age of The Self-Driving Data Center

The post Systems and Storage News Roundup (2/5-2/12) appeared first on Evaluator Group.

Categories: Analyst News