
Intel is working on an exciting One API project. The project addresses the complexity introduced by the different programming models, languages and tools needed across multiple hardware architectures – and diverse hardware architectures are something Intel knows a lot about!

But there is something I should clarify before taking a further look at Intel’s One API project. OneAPI (yes, I know they sound similar!) is a standard used within the mobile communications industry; it allows Communication Service Providers (CSPs) to expose their networks to application developers. Intel’s One API is a very different project, which Bill Savage, Intel vice president and general manager of Compute Performance Developer Products, describes below.

“One API is a project to deliver a set of developer tools that provide a unified programming model that simplifies development for workloads across diverse architectures. As our breadth of compute has grown to include specialized accelerators, Intel will deliver software solutions that allow developers to get the full performance out of the hardware.”

The purpose of One API is to eliminate the need to maintain separate code bases, multiple programming languages and different tools and workflows needed for different hardware architectures. The hardware environments which carry today’s data-centric workloads might include scalar (CPU), vector (GPU), matrix (AI) and spatial (FPGA) platforms. Intel believes developers can maximise the performance of their applications if they are designed to take advantage of the technology provided by these more advanced hardware architectures.

If you’ll excuse the pun, I’ll draw a parallel here with Intel’s Parallel Studio XE. This is a comprehensive toolset for C, C++, Fortran and Python programmers which lets developers work with the latest techniques in vectorisation, multi-threading, multi-node parallelisation and memory optimisation. This is Intel, one of the world’s leading chip and memory manufacturers, providing software development tools tuned for its own multi-core chipsets. AI, ML and analytics are no longer niche topics. They have become mainstream, and the demands they make on system resources need to be met by a well-coupled approach to the design and optimisation of both the software and the hardware.
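By way of a tiny, generic example (not taken from Parallel Studio’s own samples), the loop below uses OpenMP to combine multi-threading and vectorisation in a single directive; it is the kind of hotspot the suite’s compilers and analysis tools are built to tune.

```cpp
// saxpy.cpp -- illustrative only: a vectorised, multi-threaded loop of the
// sort Parallel Studio XE is designed to analyse and optimise.
// Compile with any OpenMP-capable compiler, e.g.
//   icc -qopenmp saxpy.cpp   or   g++ -fopenmp saxpy.cpp
#include <vector>
#include <cstdio>
#include <omp.h>

void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    // One directive asks for both multi-threading across cores and
    // SIMD vectorisation within each thread.
    #pragma omp parallel for simd
    for (std::size_t i = 0; i < x.size(); ++i) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy(3.0f, x, y);
    std::printf("y[0] = %f (threads available: %d)\n",
                y[0], omp_get_max_threads());
    return 0;
}
```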

Sticking with Parallel Studio for just a moment, the Intel MPI Library is an example of the sort of problem Intel is addressing with One API. Here the issue is developing your app once and being able to choose at runtime whether to deploy over TCP/IP, Omni-Path, InfiniBand or some other cluster interconnect. A single MPI library can be used to develop, maintain and test against multiple fabric types.
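As a sketch of that idea (and not code from Intel’s announcement), here is a minimal MPI program: it is built once, and the fabric is then chosen at launch time. The I_MPI_FABRICS environment variable and the mpiicpc/mpirun commands are Intel MPI Library conventions; the exact provider names available depend on your cluster and library version.

```cpp
// mpi_hello.cpp -- minimal illustration: the same source and binary, with the
// interconnect selected at run time rather than at build time.
//
//   mpiicpc mpi_hello.cpp -o mpi_hello
//   I_MPI_FABRICS=shm:ofi mpirun -n 4 ./mpi_hello   # e.g. shared memory + OFI
//
// I_MPI_FABRICS is an Intel MPI Library environment variable; valid values
// vary between releases and fabrics, so check your installation's docs.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    std::printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```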

One API is similar in concept. In this case, we’re talking about a unified programming model that offers full native code performance across a range of hardware architectures including CPUs, GPUs, FPGAs and AI accelerators. Here’s a summary of what One API contains.

  • A new direct programming language. Data Parallel C++ (DPC++) is an open, cross-industry alternative to single-architecture proprietary languages. It delivers parallel programming productivity and performance using a programming model developers will recognise (see the sketch after this list).
  • API-based programming. The libraries span several workload domains that benefit from acceleration. Library functions are custom coded for each target architecture.
  • Analysis and debug tools. These are based on enhanced versions of Intel’s analysis and debug tools. They are designed to support DPC++ across the range of scalar, vector, matrix and spatial Intel hardware architectures.
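To make the first of those bullets a little more concrete, here is a rough sketch of the style of code DPC++ is aiming at. Intel hadn’t released the language at the time of writing, so this is based on the Khronos SYCL model that DPC++ builds on; treat the headers, names and device selector as assumptions rather than final syntax. The point is that the same kernel source runs on whichever device the runtime selects.

```cpp
// vector_add.cpp -- illustrative sketch only, written in the SYCL style that
// DPC++ builds on; the final DPC++ syntax may differ.
#include <CL/sycl.hpp>
#include <vector>
#include <iostream>

namespace sycl = cl::sycl;

class vector_add_kernel;  // kernel name used below

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // The default selector picks whatever device the runtime decides is best
    // (CPU, GPU, etc.); the kernel source stays the same either way.
    sycl::queue q{sycl::default_selector{}};

    {
        sycl::buffer<float, 1> buf_a(a.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> buf_b(b.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> buf_c(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            auto A = buf_a.get_access<sycl::access::mode::read>(h);
            auto B = buf_b.get_access<sycl::access::mode::read>(h);
            auto C = buf_c.get_access<sycl::access::mode::write>(h);
            h.parallel_for<vector_add_kernel>(sycl::range<1>(N),
                                              [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }   // buffers go out of scope here and copy results back to the host

    std::cout << "c[0] = " << c[0] << std::endl;
    return 0;
}
```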

Intel is planning to release a developer beta in Q4 this year, together with additional details about the One API project. We will keep you posted on Code Matters as the project unfolds.

Grey Matter is proud to be an Intel Software Elite Reseller.

Find out more about how Grey Matter Ltd can help you with this subject by sending us a message.
