Server manufacturers have long recognised the niche that physical servers fill alongside public cloud computing. Industry thinking has evolved over time: IT leaders and suppliers now accept that some workloads will always run on-premise, some will run both on the public cloud and on-premise, and some will be wholly cloud-based.
Artificial intelligence (AI) inference is the workload that’s now gaining traction among the server providers, as they look to address concerns over data loss, data sovereignty and potential latency issues when crunching AI data from edge devices and the internet of things (IoT).
Dell Technologies has now extended its Dell NativeEdge edge operations software platform to simplify how organisations deploy, scale and use AI at the edge.
The Dell platform offers what the company describes as “device onboarding at scale”, remote management and multi-cloud application orchestration. According to Dell, NativeEdge offers high-availability capabilities to maintain critical business processes and edge AI workloads, which can continue to run irrespective of network disruptions or device failures. The platform also offers virtual machine (VM) migration and automatic application, compute and storage failover, which, said Dell, provides organisations with increased reliability and continuous operations.
One of its customers, Nature Fresh Farms, is using the platform to manage over 1,000 IoT-enabled facilities. “Dell NativeEdge helps us monitor real-time infrastructure elements, ensuring optimal conditions for our produce, and receive comprehensive insights into our produce packaging operations,” said Keith Bradley, Nature Fresh Farms’ vice-president of information technology.
Coinciding with the KubeCon North America 2024 conference, Nutanix announced its support for hybrid and multi-cloud AI based on the new Nutanix Enterprise AI (NAI) platform. This can be deployed on any Kubernetes platform, at the edge, in core datacentres and on public cloud services.
Nutanix said NAI delivers a consistent hybrid multi-cloud operating model for accelerated AI workloads, helping organisations securely deploy, run and scale inference endpoints for large language models (LLMs) to support the deployment of generative AI (GenAI) applications in minutes, not days or weeks.
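Conceptually, an LLM inference endpoint of the kind NAI manages is simply a network service that accepts a prompt and returns generated text. The sketch below is a hypothetical, minimal illustration using only the Python standard library — the model call is a stub, and none of this reflects Nutanix’s actual APIs; it only shows why such an endpoint can run unchanged at the edge, in a datacentre or on a public cloud:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


def stub_llm(prompt: str) -> str:
    # Placeholder for a real model call; a production endpoint would
    # invoke an LLM runtime scheduled onto Kubernetes by a platform
    # such as Nutanix Enterprise AI.
    return f"echo: {prompt}"


class InferenceHandler(BaseHTTPRequestHandler):
    """Accepts {"prompt": ...} via POST and returns {"completion": ...}."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        completion = stub_llm(payload.get("prompt", ""))
        body = json.dumps({"completion": completion}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


def serve(port: int = 0) -> HTTPServer:
    """Start the endpoint on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Packaged in a container and placed behind a Kubernetes Service, many replicas of such an endpoint can be scaled and load-balanced identically wherever the cluster runs — which is the portability a consistent hybrid multi-cloud operating model trades on.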
It’s a similar story at HPE. During the company’s AI day in October, HPE CEO Antonio Neri discussed how some of its enterprise customers need to deploy small language AI models.
“They typically pick a large language model off the shelf that fits their needs and fine-tune these AI models using their unique, very specific data,” he said. “We see most of these workloads on-premise and in co-locations where customers control their data, given their concerns about data sovereignty and regulation, data leakage and the security of AI public cloud APIs.”
In September, HPE unveiled a collaboration with Nvidia resulting in what Neri describes as a “full turnkey private cloud stack that makes it easy for enterprises of all sizes to develop and deploy generative AI applications”.
He said that with just three clicks, and in less than 30 seconds, a customer can deploy HPE private cloud AI, which integrates Nvidia accelerated computing, networking and AI software with HPE’s AI server, storage and cloud services.
During its Tech World event in October, Lenovo unveiled Hybrid AI Advantage with Nvidia, which it said combines full-stack AI capabilities optimised for industrialisation and reliability.
The AI part of the package includes what Lenovo calls “a library of ready-to-customise AI use-case solutions that help customers break through the barriers to ROI [return on investment] from AI”.
The two companies have partnered closely to integrate Nvidia accelerated computing, networking, software and AI models into the modular Lenovo Hybrid AI Advantage.
Edge AI with the hyperscalers
The public cloud platforms all offer feature-rich environments for GenAI, machine learning and running inference workloads. They also have product offerings to cater for AI inference on IoT and edge computing devices.
Amazon Web Services offers SageMaker Edge Agent; Azure IoT Hub is part of the mix Microsoft offers; and Google has Google Distributed Cloud. Such offerings generally focus on doing the heavy lifting — namely machine learning training — using the resources available in their respective public clouds to build data models. These are then deployed to power inference workloads at the edge.
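The split described above — heavy training in the cloud, lightweight inference at the edge — can be illustrated with a deliberately simple sketch. Here the “model” is a linear least-squares fit serialised to JSON; in practice the artefact would be a neural network exported by a cloud ML service, but the hand-off pattern is the same (all function names here are illustrative, not any vendor’s API):

```python
import json


def train_in_cloud(samples: list[tuple[float, float]]) -> str:
    """Fit y = a*x + b by least squares and export the model as JSON.

    Stands in for the compute-heavy training step a public cloud handles.
    """
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return json.dumps({"a": a, "b": b})  # the artefact shipped to the edge


def infer_at_edge(model_artifact: str, x: float) -> float:
    """Run inference locally on the device — no round-trip to the cloud."""
    m = json.loads(model_artifact)
    return m["a"] * x + m["b"]
```

For example, training on points from y = 2x + 1 yields a small JSON artefact, and an edge device holding only that artefact can answer `infer_at_edge(artifact, 10.0)` without any network call — which is what keeps latency low and data local.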
What appears to be happening is that the traditional server companies see a number of opportunities in responding to the cloud AI threat. One is that IT departments will continue to buy and deploy on-premise workloads, and AI at the edge is one such area of interest. Another factor likely to influence IT buyers is the availability of blueprints and templates to help them achieve their enterprise AI goals.
According to analyst firm Gartner, while the public cloud providers have been very good at showing the art of the possible with AI and GenAI, they have not been particularly good at helping organisations achieve their AI objectives.
Speaking at the recent Gartner Symposium, Daryl Plummer, chief research analyst at Gartner, warned that tech providers are too focused on looking at the advancement of AI from their perspective, without taking customers on the journey to achieve the objectives of these advanced AI systems. “Microsoft, Google, Amazon, Oracle, Meta and OpenAI have made one major mistake – they’re showing us what we can do, [but] they’re not showing us what we should do,” he said.
The missing pieces concern domain expertise and IT products and services that can be tailored to a customer’s unique requirements. This certainly looks like an area that the likes of Dell, HPE and Lenovo will look to grow, in partnership with IT consulting firms.