Wednesday, February 29, 2012

With Big Data designated by McKinsey as the next frontier for innovation, competition, and productivity, and selected by Gartner as one of the top 10 strategic technologies for 2012, Big Data is everywhere in the media and on people’s minds. But are Big Data technologies sufficient to address “big data” problems? What about compute-intensive applications that involve processing significant amounts of data? The Silicon Valley HPC/GPU Supercomputing Meetup Group decided to bring the two topics together for a discussion.
“It wasn't what I expected, which turned out to be a good thing,” remarked one attendee of the recent panel discussion and debate on the similarities and differences of Big Data vs. Big Compute. When multiple reviews of the event run over 150 words each, it is clear that the event struck a chord and, perhaps, provoked some controversy.
Big Data and Big Compute, specifically using GPUs in this case, are not new concepts. Big Data usually refers to datasets too large for conventional database management tools, and GPU computing leverages the parallel nature of the graphics processing unit to perform computations in applications traditionally handled by the CPU.
The Meetup organizer and panel moderator, Dr. Jike Chong, offered several interpretations of Big Data and Big Compute drawn from speaking with the four panelists and their diverse backgrounds: constraint-based, relative, scalability-based, and historical. The discussion quickly heated up:
Is Big Data vs. Big Compute a wallet problem in an economic context? It depends: do we know which data is valuable before computation takes place?
What are the differences in priority? For Big Data, the priority tends to be rapid exploration, whereas for Big Compute there is more focus on optimization and fine-tuning for the hardware.
While the focuses of the two approaches appear to be different, both are concerned with throughput. Some may say these are two different ways of solving the same problem, while others (myself included) advocate that they work together to solve big-data, compute-intensive problems.
Steve Scott, NVIDIA Tesla CTO and one of the panelists, put it well in his summary on GPUs for Big Data:
If it is all data movement, there’s no need for a GPU or a CPU.
If there is some serious computing that needs to be done on that data and the problem can be distributed, GPUs can enable more complex analysis.
If the problem has no locality, such as in big graph analytics, GPUs may work well in the future.
So where is the convergence, and what are the implications? Many Big Compute problems are also Big Data problems: reverse time migration in oil and gas, and visual search, for example. It also turns out that power is critical for both: how do we get more productivity per unit of power? Both Big Data and Big Compute have to optimize for this, and will be driving that effort over the next five years.
Big Data and Big Compute are not opposing concepts, and the discussion revealed that there is more than a difference of perspective, application, or priority; there is an underlying cultural difference between the two camps using these approaches. Going forward, Big Data, as the “marriage of ‘database’ with compute,” and Big Compute need to take each other into consideration, as the technologies for each can shine where their priorities and interests align.
More information can be found in the linked slides.
The four panelists came from diverse backgrounds, and both Aaron Kimball and Tim Child had presented to the group before:
Aaron Kimball, co-founder of WibiData
Steve Scott, Tesla CTO, NVIDIA
Tim Kaldewey, IBM research
Tim Child, Chief Entrepreneur Officer at 3DMashup
A description of Aaron Kimball's talk, Large Scale Machine Learning on Hadoop and HBase, is here; Tim Child's talk on Postgres is here and was also mentioned here.
Join us next time on March 26 for another exciting discussion in HPC and GPU Supercomputing!
Thursday, August 25, 2011
When GPU Computing meets Advanced Analytics
In August the HPC/GPU Supercomputing Meetup Group featured a talk that generated much interest in both the GPU and the advanced analytics communities.
In an effort to expand readership, I posted the blog at NVIDIA's blog site as a guest author. Thanks to Will Park of NVIDIA for his help and suggestions.
We would love to hear about your experience using GPUs to accelerate analytics: What are the challenges, and how did you overcome them? How viable do you find GPUs for accelerating advanced analytics? How much of the BI analytics market can GPUs effectively address?
About the HPC/GPU Supercomputing Meetup Group:
As this is just one of many stimulating talks and discussions at this Meetup group, here is some background: Dr. Jike Chong initiated the Silicon Valley group in February this year, and the group has since attracted 200+ members, many of them pioneering parallel software practitioners who are passionate about high performance computing and GPU supercomputing. If you find topics in these areas interesting, join the discussions on Monday, September 12!
Tuesday, June 7, 2011
100x Speedups, Are They Real?
100x speedups, are they real? A handful of the 30+ attendees at the June 6 meeting of the HPC & GPU Supercomputing Group of Silicon Valley put their hands up, while most were skeptical.
“How many machines are at your disposal?”
“Are we talking about single-thread, multi-thread, or what?”
“Is this for any application, or application-specific?”
Questions came from all directions on this topic, one that many have explored when faced with the plethora of opportunities that multicore and manycore processors bring, confirming that it is very much an ongoing debate.
For the next 50 minutes, Jike Chong, adjunct professor at Carnegie Mellon, Principal Application Architect at Parasians, and the organizer of this HPC/GPU meetup group, brought forth five key questions to shed some light on this discussion:
- What does 100x speedup mean?
- Who is concerned about 100x speedup?
- Where do 100x speedups come from?
- When is the comparison useful?
- How can I get the speedup?
His talk focused on the critical role that application developers play in the changing landscape of the semiconductor industry. It distinguished the application developers’ concerns from those of other important players in the field, such as architecture researchers.
Jike introduced the audience to the past and present practices that industry practitioners and researchers use to answer questions about obtaining speedups across processors and platforms. He then used an example to illustrate the levels of optimization that are possible when developing efficient applications on modern parallel computing platforms.
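As a rough, hypothetical illustration of what such levels of optimization can look like (this is not the example from the talk, which this post does not reproduce), consider a simple reduction in C++: first a naive serial loop, then the same computation parallelized across cores with OpenMP. Which version you treat as the baseline strongly affects the speedup you report.

```cpp
// Hypothetical sketch: two "levels" of optimization for a simple reduction.
// Compile with: g++ -O2 -fopenmp reduction_demo.cpp
#include <omp.h>
#include <cstdio>
#include <vector>

// Level 0: naive serial loop -- a common (and flattering) baseline.
double sum_naive(const std::vector<double>& v) {
    double s = 0.0;
    for (size_t i = 0; i < v.size(); ++i) s += v[i];
    return s;
}

// Level 1: parallelized across cores; the reduction clause
// combines each thread's partial sum at the end of the loop.
double sum_parallel(const std::vector<double>& v) {
    double s = 0.0;
    #pragma omp parallel for reduction(+:s)
    for (long i = 0; i < (long)v.size(); ++i) s += v[i];
    return s;
}

int main() {
    std::vector<double> v(1 << 24, 1.0);  // 16M elements
    double t0 = omp_get_wtime();
    double a = sum_naive(v);
    double t1 = omp_get_wtime();
    double b = sum_parallel(v);
    double t2 = omp_get_wtime();
    std::printf("naive: %.3fs  parallel: %.3fs  (sums: %g, %g)\n",
                t1 - t0, t2 - t1, a, b);
    return 0;
}
```

Comparing a heavily tuned GPU implementation against the naive serial loop, rather than against a well-parallelized CPU version, is one common way speedup claims get inflated.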
With this background in place, Jike discussed when and how speedups are useful for pioneering industry practitioners working on parallel application development, as well as common pitfalls when making or interpreting such comparisons.
Finally, Jike made concrete recommendations for organizations and practitioners seeking 100x speedups for their applications, whether to make game-changing technology advances, realize significant cost savings, or enable new revenue capabilities.
100x speedups, are they real? Take a look at the slides and video (segments may be available soon), and let us know what you think!
This is a glimpse of what takes place monthly at this local group that grew from zero to 150+ active members in four months.
But wait, there’s more! A short talk and a book review were also showcased at the meeting: Micah Villmow, Senior Compiler Engineer at AMD, led the group through AMD’s GPU computing timeline, showing how performance doubles or nearly doubles with each generation from graphics to compute, and when industry experts really started viewing the hardware as a compute machine rather than a graphics machine. Ankit Gupta from NVIDIA shared a review of chapter 32 of GPU Computing Gems, volume 1: Real-Time Speed-Limit-Sign Recognition on an Embedded System Using a GPU.
While the slides of this action-packed meeting are accessible, those who attend in person have the benefit of learning from and challenging each other in an intimate and interactive setting. The HPC & GPU Meetup “cluster” has now grown to include ten US-based groups with more than 600 members, and the initiator, Andrew Sheppard, is on a mission to start more!
Are there practices that work well for you in engaging with GPU developers and practitioners? Talk to any of our organizers! These are interesting times for HPC and GPU supercomputing, so join a local group or start your own!
Footnote: The “100x speedup, is it real?” talk builds on a recently published Berkeley paper on this topic, which the speaker co-authored.
Friday, May 6, 2011
HPC/GPU Meetup on May 2nd a Success, Let's Keep Up the Momentum!
On Monday 5/2 I co-hosted the May meetup for the HPC/GPU computing group of Silicon Valley, and we had our biggest turnout yet (43!).
Attendees included pioneering industry practitioners of parallel computing, as well as CEOs and team leads who are evaluating the merits of HPC and GPU Supercomputing technologies for their organizations.
Here’s a rundown of what took place: Jike Chong, the group organizer, began with an introduction to recent news in HPC and GPU Supercomputing technologies. The attending members then each had an opportunity to give a short self-introduction. Participating members' backgrounds ranged from software consultants to technology company engineers to NASA scientists.
Talks in three areas were planned for this meeting:
- parallel computing infrastructure
- programming patterns
- programming techniques
First, Andrew Sheppard, the initiator of the HPC and GPU Supercomputing Meetup groups in the US and Asia, gave a talk on Programming with Thrust. The audience participated enthusiastically, with questions ranging from infrastructure overhead to coarse-grained parallelization of the STL-type abstractions.
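For readers who missed the talk, here is a minimal sketch of the STL-style abstractions Thrust provides; it is an illustrative example of my own, not taken from Andrew's slides. Containers manage device memory, and algorithms such as sort and reduce run in parallel on the GPU.

```cpp
// Minimal Thrust sketch: sort and reduce 1M integers on the GPU.
// Compile with: nvcc -O2 thrust_demo.cu
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdlib>
#include <iostream>

int main() {
    // Fill a host-side vector with random integers.
    thrust::host_vector<int> h(1 << 20);
    for (size_t i = 0; i < h.size(); ++i) h[i] = std::rand();

    // Copying into a device_vector transfers the data to the GPU.
    thrust::device_vector<int> d = h;

    // Both algorithms execute in parallel on the device.
    thrust::sort(d.begin(), d.end());
    long long sum = thrust::reduce(d.begin(), d.end(), 0LL);

    std::cout << "sum = " << sum << std::endl;
    return 0;
}
```

The infrastructure-overhead questions from the audience typically revolve around transfers like the host-to-device copy above: for small inputs the copy can dominate the runtime, which is one reason to chain several operations on the device before moving results back.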
Two short presentations by Minesh B. Amin and Morgan Conrad followed after a brief break where members mingled (the intros at the beginning helped them to orient and target the networking). Minesh presented a set of Parallel Management Patterns with a Python-based instantiation. Morgan presented a review of a set of programming techniques from the book "GPU Computing Gems".
Enthusiastic participants stayed well past 9:30pm, and a couple of members volunteered to speak at future meetings.
In a quest to develop and refine the meeting format to cater to a growing and increasingly diverse audience, we will continue to experiment with long and short talks. While the core of the format will remain similar, the variety of topics is definitely growing. Looking forward to the next one!
Wednesday, April 27, 2011
Much to Learn from a Marketing Analogy
Being in the technology space, I've seen my share of solutions-marketing messaging. A recent visit to the Pragmatic Marketing webpage shed some light on why technology companies' approach to marketing might not be working:
Instead of a tech firm, let's say they were KFC, the message might look like this: "Would you like to buy some packaged dead chicken parts? We’re an end-to-end solution for the killing, chopping, freezing, cooking, and packaging of chicken.”
Does your company do this?
Ever wonder why your prospects are bored by your messaging?
This certainly gets me thinking about how I market Parasians...
Monday, March 14, 2011
Pi day thinking on parallel computing adoption
Happy Pi day! It just occurred to me that the logo of Parasians (on parasians.com) showcases P and i, making it Pi. Of course, the Pi in this case means bringing out "intelligence" from "Parallel" computing.
I have been thinking and reading about new technology adoption and how some markets have successfully generated interest among user bases. It appears that the short-term benefit to an organization and the barrier to using the technology are two key driving factors. I decided to look for these in a forum (meetup) setting.
Last Monday I co-hosted the first meetup of the HPC & GPU Supercomputing Group of Silicon Valley. With 35+ attendees from industry, academia, and government research groups, there is clear excitement and energy in the parallel computing space. The attendees mostly fall into three camps:
1) The GPU computing camp: the majority of attendees know at least a little about GPU computing, which has become a compelling alternative to CPU computing in high-performance computer systems.
2) The HPC camp: a number of attendees are experienced in supercomputing and grid computing, with a focus on distributed computing, which is a subset of parallel computing.
3) The interested camp: people who are looking to learn about the technologies and trends.
Given that the goal of the meetup is to fill a void in the HPC/GPU development ecosystem, the first speaker topics focused on bringing resources to parallel programming practitioners. The attendees were mostly concerned with the technology itself, so my test of benefits to their organizations didn't go far. As for barriers to using the technology, several comments indicated that the HPC/GPU area is still evolving, with its own sets of challenges, and that it takes a certain intellectual curiosity to pursue. It does look like there are business opportunities for those who can bridge the technology (and benefit) gap. More observations to be discussed over the next couple of meetups.
P.S. The Parasians web portal is here, where there's discussion about the differences among parallel, concurrent, and distributed computing, why GPU parallel computing is the next paradigm, and the levels of acceleration consideration yielding 750x+ application speedup!