Monday, March 14, 2011

Pi day thinking on parallel computing adoption

Happy Pi Day! It just occurred to me that the logo of Parasians (on parasians.com) showcases a P and an i, making it Pi. Of course, the Pi in this case stands for bringing out "intelligence" from "Parallel" computing.

I have been thinking and reading about new technology adoption and how some markets have successfully generated interest among their user bases. It appears that the short-term benefit to an organization and the barrier to using the technology are two key driving factors. I decided to look for both in a forum (meetup) setting.

Last Monday I co-hosted the first meetup of the HPC & GPU Supercomputing Group of Silicon Valley. With 35+ attendees from industry, academia, and government research groups, there is clear excitement and energy in the parallel computing space. The attendees fall mostly into three camps:

1) The GPU computing camp: the majority of attendees know at least a little about GPU computing, which has become a compelling alternative to CPU-only computing in high-performance systems (a minimal code sketch of the contrast follows this list).
2) The HPC camp: a number of attendees are experienced in supercomputing and grid computing, with a focus on distributed computing, which is a subset of parallel computing.
3) The interested camp: people who are looking to learn about the technologies and trends.
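
To make the contrast in camp 1 concrete, here is a minimal sketch of the same vector addition written as a plain CPU loop and as a CUDA kernel. The function names and launch sizes are just illustrative placeholders of mine, not anything presented at the meetup:

    // CPU version: a single core walks the whole array sequentially.
    void add_cpu(const float* a, const float* b, float* c, int n) {
        for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    // GPU (CUDA) version: each thread computes one element, so many
    // elements are processed in parallel across the GPU's cores.
    __global__ void add_gpu(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // Example launch (assuming a, b, c already reside in device memory
    // via cudaMalloc/cudaMemcpy):
    // add_gpu<<<(n + 255) / 256, 256>>>(a, b, c, n);

The kernel's performance scales with the number of GPU cores rather than with a single CPU core's clock speed, which is a big part of why GPU computing looks like such a compelling alternative for data-parallel workloads.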

Given that the goal of the meetup is to fill a void in the HPC/GPU development ecosystem, the first speaker topics focused on bringing resources to parallel programming practitioners. The attendees were mostly interested in the technology itself, so my test of benefits to their organizations didn't go far. As for barriers to using the technology, several comments indicated that the HPC/GPU area is still evolving and comes with its own set of challenges; it takes a certain intellectual curiosity to pursue. It does look like there are business opportunities for those who can bridge the technology (and benefit) gap. More observations to be discussed over the next couple of meetups.

P.S. The Parasians web portal is here, where there's discussion of the differences among parallel, concurrent, and distributed computing, why GPU parallel computing is the next paradigm, and the levels of acceleration to consider that can yield 750x+ application speedup!
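
As a purely hypothetical illustration of how "levels of acceleration" can compound to a figure like 750x (the factors below are my own made-up numbers, not the portal's), speedups at different levels multiply:

    5x (algorithm) x 10x (tuned GPU kernel) x 15x (data/pipeline parallelism) = 750x overall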
