Unlocking Intel’s Gaming GPUs for AI

Interestingly enough, the gaming GPUs that Intel released for the PC/client market are among the most useful pieces of hardware to come out in the last decade. Not so much because they are priced affordably (which is a plus), but because they are already in some of the laptops and desktops you own, and they are widely available.

It is no secret that I and others at Intel have been working hard to find the right enablement path for the community. Working with the maintainer community at events like SciPy has netted several breakthroughs in untangling the strands of complexity around that enablement.

In terms of original strategic genius for enablement, I'll have to hand it to NVIDIA. They were tactically savvy to do two things simultaneously: give seed units of cards to academics and core developers, and let the same free tools work from gaming cards to datacenter hardware, with the guarantee that the ABI remains stable. This long runway of iteration over the past decade has given them a lead that their longtime GPU rival AMD struggles to match.

For any newcomer to the GPU space, we are talking about trying to shortcut a 10-year lead. Not impossible, but it would require an interesting enablement strategy to pull it off.

[Image: The Intel Arc GPU, as filmed by me in 2022]

One thing that many in the industry may incorrectly assume is that manufacturers produce the vast majority of AI models, and that it is therefore the hardware manufacturer's responsibility to publish them. The truth is quite different, especially if you have been working in the open-source space. The community builds the models; academics, developers, and framework maintainers are usually the authors of the AI models that exist today and that have driven the imagination behind the AI boom that began in 2023.

This leads to a bit of a chicken-and-egg problem: what comes first, the AI frameworks enabled on a manufacturer's hardware, or the tools to integrate that hardware into the frameworks?

The reality is you can never really beat open source (and the community that surrounds it). Here are the things I’ve found that work:

  1. Provide the tools (such as compilers and runtimes) with adequate access (download availability) and documentation (e.g. “hello world” examples and smoke tests; see the short sketch after this list)

  2. Never take autonomy away from the maintainers and the community. Competing with the community on packaging, releases, and distribution only angers downstream maintainers and users.

  3. Upstreaming methods that don’t anger the community are a plus: there must be a sunset date for forks and extensions. Creating code “islands” on a fork or an extension can easily become a maintenance nightmare, outpacing your in-house engineering’s ability to keep its side up to date.

  4. Choose the right team. If your in-house developer relations team isn’t respected within the community, or isn’t technical enough to articulate the changes to engineering, it can isolate you from what the community is doing.

  5. Clear licensing and software version guarantees. If the license is non-standard (i.e., anything other than BSD 3-Clause, MIT, et al.), this can make things difficult, as the community of maintainers and users doesn’t need the extra legal liability of complicated licensing. Compatibility between versions of runtimes, compilers, and hardware also needs to be clearly documented, not just known to your in-house engineering.

  6. LISTEN and ACT on what the community asks for. Failure to do so can cause a loss of confidence which may take years to fix.
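
To make the first point concrete, here is what a minimal “hello world” smoke test could look like, sketched in Python. This is only an illustrative sketch, assuming a recent PyTorch build with Intel GPU (XPU) support installed; the exact module names and API surface depend on the versions in your stack.

    # Minimal smoke test: is an Intel GPU (XPU) visible, and can it run a small op?
    # Assumes a PyTorch build with XPU support; adapt to your compiler/runtime stack.
    import torch

    def main():
        if not hasattr(torch, "xpu") or not torch.xpu.is_available():
            print("No XPU device visible; check driver and runtime installation.")
            return

        device = torch.device("xpu")
        print(f"Found {torch.xpu.device_count()} XPU device(s)")

        # A small matrix multiply is enough to exercise the full compile-and-run path.
        a = torch.randn(1024, 1024, device=device)
        b = torch.randn(1024, 1024, device=device)
        c = a @ b
        torch.xpu.synchronize()
        print("Matmul OK, checksum:", c.sum().item())

    if __name__ == "__main__":
        main()

A test like this belongs on the first page of the documentation: if it fails, the user knows right away whether the problem is the driver, the runtime, or the framework build, before they have invested hours in a larger workload.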

In summary, over the last 10-15 years I’ve experienced a lot as a software engineer and a part of the open-source community. Even though I’ve been coding less on specific projects, my involvement on the governance side has only increased (the tapering of my GitHub commits is clear evidence of that).

The experiences I have had across multiple companies (and on my own time) have given me the insight to help companies navigate the open-source software world, something I will continue to do as I work to get the Intel GPUs enabled. It’s a big ship to steer, but even small corrections make a difference. To the open-source community that has been with me for so long, this is the way I’m trying to give back to y’all.

Disclosure: David is employed at Intel as the CTO of AI for global solutions/sales
