CUDA and OpenCL

Martin answers some questions about his recent post on GPU computing and CUDA

In response to some questions about my recent post on GPU computing and CUDA, I've dug out a few slides from the presentation that Andy Keane of Nvidia gave me.

First: how does CUDA relate to OpenCL? CUDA has its own C compiler, but it also provides a foundation for running other parallel APIs:

[Slide 7 from the Nvidia presentation]

What's the difference between CUDA and OpenCL from a programmer's perspective? C for CUDA is C with a small set of parallel extensions, while OpenCL is a lower-level C API. Both compile down to PTX (Parallel Thread eXecution), the intermediate assembly language for the CUDA architecture:

[Slide 20 from the Nvidia presentation]

In C for CUDA, memory is managed by the runtime; in OpenCL, the programmer manages it explicitly through API calls. The principal selling point of OpenCL is that it's supported on GPU computing hardware other than Nvidia's.
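To make the contrast concrete, here's a minimal sketch in C for CUDA (the kernel name `scale` and the surrounding harness are my own illustration, not from the slides). The CUDA runtime handles device allocation and transfer with a couple of calls, and the `<<<...>>>` extension launches the kernel; the comments note the OpenCL API calls that would do the equivalent work by hand.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// A trivial kernel: multiply each element by a constant.
// In OpenCL this would be a __kernel function, typically compiled at
// runtime via clCreateProgramWithSource / clBuildProgram.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    // Device memory: cudaMalloc/cudaMemcpy here, versus
    // clCreateBuffer + clEnqueueWriteBuffer in OpenCL.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Kernel launch: the <<<blocks, threads>>> language extension,
    // versus clSetKernelArg + clEnqueueNDRangeKernel in OpenCL.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("host[10] = %f\n", host[10]);
    return 0;
}
```

Either way, the same PTX ends up on the GPU; the difference is how much of the plumbing the programmer writes out explicitly.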

Another question I've heard is why there hasn't been more significant adoption of CUDA by financial institutions. The short answer is that financials want double precision, and that was only introduced in CUDA with the Tesla 10-series in March 2008.

Nevertheless, there's been a lot of developmental activity for CUDA in computational finance:

[Slide 31 from the Nvidia presentation]

Any more questions?

Copyright © 2009 IDG Communications, Inc.