It’s a widely held view that artificial intelligence (AI) is the key to fundamentally changing how organisations derive insights from data. Clearly we are not going to get there overnight, and for most it will be a journey with more than a few setbacks along the way. We are finding that many of our clients are only just starting that journey. Interestingly, a recent white paper from IDC suggests the following adoption statistics, which support our own experience:
- 31 percent of organisations are in discovery/evaluation
- 22 percent of organisations plan to implement AI in next 1-2 years
- 22 percent of organisations are running AI trials
- 4 percent of organisations have already deployed AI
The IDC paper notes that regardless of how the early adopters have chosen to develop their AI/cognitive solution (on-premise or in the cloud), there is generally a point in their journey where they have “hit the wall”. The inference is that these new AI/cognitive solutions require a new breed of IT infrastructure as they start to scale out.
In their research, IDC asked organisations “what they experienced when they started running AI applications on their existing on-premise infrastructure:
- 77.1% of respondents said they ran into one or more limitations with their on-premise AI infrastructure.
- Among cloud users for cognitive, a remarkable 90.3% of organisations ran into such limitations.”
What Should You Do?
IDC believes that organisations currently considering AI initiatives, or moving from experimentation to a more mature stage, can take any of several AI development approaches (or combine them over time), depending on the size of the AI initiative and whether an on-premise or SaaS-based solution is being pursued.
Small to Medium-Sized AI Initiatives
For small to medium-sized AI initiatives, developing a solution in-house is recommended. This approach has multiple advantages: through collaborative experimentation, developers, line-of-business (LOB) teams, data analysts or data scientists (if available), and the infrastructure team gain important new skill sets while creating a solution tailored to the business.
Larger AI Initiatives
Larger AI initiatives will benefit from external support. The time, cost, and complexity of developing a comprehensive AI solution that is intended to bring business-critical innovations to the organisation may be too great to take on with an in-house trial-and-error approach, except for large organisations with significant resources.
On-Premise or Cloud?
For some larger AI initiatives, SaaS solutions may exist, but as with any cloud-based software, customisability will be limited, and both scalability and performance will depend on the provider’s infrastructure. Cost can also escalate quickly when data volumes or transaction counts grow. For business-critical data, sensitive data, or data subject to regulatory compliance, the security of a SaaS solution will need to be evaluated.
IDC highlights accelerators as an important way to overcome infrastructure performance limitations in AI systems. Their research found that among businesses with accelerated infrastructure for AI applications, 65% run these solutions on-premise: 22% on-premise only and 43% both on-premise and in the cloud. This matters most for AI systems that employ deep learning algorithms, which require massive compute capability to train. In some cases, training deep learning models with accelerators can bring iteration times down from days to hours.
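As a rough, back-of-the-envelope illustration of that days-to-hours reduction (the baseline duration and speedup factor below are assumptions for the sake of the example, not figures from the IDC research):

```python
# Back-of-the-envelope: effect of an accelerator speedup on training time.
# Both input numbers are illustrative assumptions, not measured results.
baseline_hours = 3 * 24   # assume a 3-day training iteration on unaccelerated servers
speedup = 20              # assume a 20x speedup from GPU acceleration

accelerated_hours = baseline_hours / speedup
print(f"{baseline_hours} h per iteration -> {accelerated_hours:.1f} h with accelerators")
# 72 h per iteration -> 3.6 h with accelerators
```

Even with conservative assumptions, a multi-day iteration drops to a few hours, which is what makes accelerated infrastructure attractive once training workloads scale.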
A recent blog by Tim Vincent, Vice President, IBM Cognitive Systems Software, notes that “as an organisation embraces a holistic AI strategy to drive insights from their data, it is critical to look at their infrastructure as well as their data management strategy and AI capabilities. Organisations that are deriving the most value from data are building their data management and AI platforms close to where the data resides, thereby reducing latency. Additionally, they are using infrastructure specifically designed for data and compute-intensive workloads like advanced analytics and AI, as well as deploying software that is optimised to exploit it. Together this maximises efficiency of deployment and the value of insights.”
When it comes to implementing AI, organisations often struggle with server performance bottlenecks and the complexity of open source software. To address these challenges, there are deep learning toolkits such as IBM PowerAI and servers such as the IBM POWER9 family, which is available with the Red Hat operating system.
According to Tim Burke, Engineering Vice President, Cloud and Operating System Infrastructure at Red Hat, IT optimisation for AI “drives the need for more complete, enterprise-ready platforms that take the most popular open source innovations and back them with the reliability and support expected by modern enterprises. POWER9-based servers, running Red Hat’s leading open technologies, offer a more stable and performance-optimised foundation for machine learning and AI frameworks, which is required for production deployments.”
“GPUs are at the foundation of major advances in AI and deep learning around the world,” said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. “Through the tight integration of IBM POWER9 processors and NVIDIA V100 GPUs made possible by NVIDIA NVLink, enterprises can experience incredible increases in performance for compute-intensive workloads.”
Access the IDC report here.
Interested in configuring your own AI Optimised server? Take a few minutes to either customise a server based on your workload or get some solution-specific hardware recommendations for a server that can handle the most advanced deep learning workloads.