As most Palisade customers know, computer processing speed is important in both Monte Carlo software and packages for genetic algorithm optimization. So when I came across an InfoWorld article on the acceleration capabilities of GPU-based processing, I clicked right through to find out more about the gizmos called GPUs.
A GPU (Graphics Processing Unit) is used to speed up graphics rendering and, much more interesting to me since I’m not a gamer, to process huge data sets with complex algorithms. The reason the gizmo is good at both of these tasks, I learned, is that it has a very, very parallel structure. And this structure makes it ideal for "embarrassingly parallel" problems. I’m not making this up; this, embarrassingly, is standard developer lingo.
What does it take to be embarrassingly parallel? It takes many independent, easily separated tasks, each of which can be processed on a different computational unit. Massive amounts of data, many separate algorithms, many machines. Think risk analysis, decision evaluation, face recognition, and weather prediction. So what’s not to be embarrassed about?
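To make the idea concrete, here is a minimal sketch (my own illustration, not anything from the article) of an embarrassingly parallel job in Python: a Monte Carlo estimate of pi, where each worker counts random hits in a quarter-circle completely on its own and the results are simply added up at the end. No task ever needs to talk to another, which is exactly what makes the problem "embarrassing."

```python
import random
from multiprocessing import Pool


def count_hits(n_samples):
    """One independent task: count random points that land
    inside the unit quarter-circle. Needs no data from any
    other task -- the hallmark of embarrassing parallelism."""
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits


def estimate_pi(n_total, n_workers=4):
    """Split the samples into equal chunks, farm each chunk out
    to a separate process, and combine with a simple sum."""
    chunk = n_total // n_workers
    with Pool(n_workers) as pool:
        hits = pool.map(count_hits, [chunk] * n_workers)
    return 4.0 * sum(hits) / (chunk * n_workers)


if __name__ == "__main__":
    print(estimate_pi(400_000))
```

The same split-and-sum shape scales from a few CPU cores to the thousands of lightweight cores on a GPU, which is why these problems are such a natural fit for that hardware.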
I read on and was tickled to discover that in the early days an embarrassingly parallel problem might have been solved with a gizmo called a "blitter," which is... oh, never mind, it’s enough to make even the most senior developer blush!