
The 38th Semi-Annual Conference on Supercomputing

(December 15, 2011)

Dear Subscribers and Readers,

Important Announcement: I have created a Twitter profile for quick intraday comments on the financial markets and anything else that piques my interest.  To respond or start a dialogue, please sign up and subscribe to my posts.

The fine folks at Morningstar have done much to "democratize" stock research for retail investors in recent years, with widespread coverage of both U.S. large and small caps, as well as by disclosing the assumptions behind their DCF models. For those interested in the aggregated valuation of the stock market, Morningstar compiles an average valuation across its entire coverage universe of over 2,000 U.S. stocks, which I track on a regular basis. Today, this indicator sits at an oversold level of 0.85 (a value of 1.00 is assigned to a stock trading exactly at Morningstar's definition of "fair value").  Note, however, that this ratio declined to 0.77 on October 3rd, and sank as low as 0.55 on November 20, 2008 (its all-time low since the indicator's inception in mid-2001).  This means that in a liquidity or confidence crisis (such as what we are now experiencing), valuations may not matter much:

While Morningstar's proprietary valuation indicator suggests the stock market is trading 15% below fair value, subscribers should note that 2012 corporate earnings growth has been continuously revised downwards, meaning Morningstar may be overvaluing some of the stocks in its coverage universe. In addition, the weighted average cost of capital has declined dramatically; since a lower discount rate inflates DCF-based fair value estimates, valuations may be more stretched than meets the eye, especially given the overhang from the European sovereign debt crisis and the ongoing decline in global banking shares.  In other words, the market may not be so undervalued after all. More importantly, in a liquidity/confidence crisis, valuations do not matter much. We thus continue to expect the market to correct going into New Year's and early 2012.
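For readers who like to see the mechanics, below is a minimal sketch (in Python) of how a coverage-wide price/fair-value ratio can be aggregated, and why a lower assumed cost of capital makes the market look cheaper than it may really be. The tickers, prices, and cash-flow forecasts are made up for illustration only; this is not Morningstar's actual model or data.

    def dcf_fair_value(cash_flows, wacc, terminal_growth=0.03):
        """Present value of projected cash flows plus a Gordon-growth terminal value."""
        pv = sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, start=1))
        terminal = cash_flows[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
        return pv + terminal / (1 + wacc) ** len(cash_flows)

    # Made-up coverage universe: ticker -> (price per share, per-share cash-flow forecasts)
    universe = {
        "AAA": (40.0, [3.0, 3.3, 3.6, 3.9, 4.2]),
        "BBB": (25.0, [2.0, 2.1, 2.2, 2.3, 2.4]),
        "CCC": (60.0, [5.0, 5.5, 6.0, 6.5, 7.0]),
    }

    for wacc in (0.10, 0.08):  # a lower discount rate raises every fair value estimate
        ratios = [price / dcf_fair_value(cfs, wacc) for price, cfs in universe.values()]
        avg = sum(ratios) / len(ratios)
        print(f"WACC {wacc:.0%}: average price/fair value = {avg:.2f} "
              f"(implied discount to fair value: {1 - avg:.0%})")

The point is simply that the same set of prices produces a noticeably lower (i.e. more "undervalued"-looking) ratio once the discount rate falls, which is one reason we take the headline 0.85 reading with a grain of salt.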

Last month, the 38th semi-annual edition of the top 500 list of the world's most powerful supercomputers was published at the SC11 supercomputing conference in Seattle. This year's biggest surprise (at least to those outside the supercomputing community) has been the ascendance of Japan in the rankings. While the top 500 list is still dominated by Intel-based systems (with AMD a distant second and IBM third, although many of the top-performing systems utilize IBM's "Blue Gene" processors), the number one supercomputer, the "K Computer" (which first appeared on the list in June), is built by Fujitsu using its own SPARC64 VIIIfx CPUs.  Powered by 88,128 CPUs (up from 68,544 in the June listing), each with eight cores (for a total of 705,024 cores), the K Computer is capable of a peak performance of 10.51 petaflops (up from 8.16 petaflops in June) and handily beats the Tianhe-1A supercomputer at the National Supercomputing Center in Tianjin, which delivers "just" 2.57 petaflops.
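For the curious, the quoted figures hang together on a quick back-of-the-envelope check (a short Python sketch; the per-core number is a derived estimate, not an official specification):

    # Back-of-the-envelope check of the K Computer figures quoted above.
    cpus = 88_128                # SPARC64 VIIIfx chips in the November 2011 listing
    cores_per_cpu = 8
    total_cores = cpus * cores_per_cpu
    petaflops = 10.51            # Linpack performance from the latest Top 500 list

    gflops_per_core = petaflops * 1e6 / total_cores  # 1 petaflop = 1,000,000 gigaflops
    print(f"total cores: {total_cores:,}")           # prints 705,024
    print(f"~{gflops_per_core:.1f} gigaflops per core on Linpack")

In other words, the machine sustains roughly 15 gigaflops per core on the Linpack benchmark.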

From a geopolitical standpoint, Japan now occupies the top spot in terms of the fastest-performing supercomputer for the first time since 2004, when its "Earth Simulator" (with a performance of 36 teraflops) last held the lead.  The K Computer is about 290 times faster than the Earth Simulator in terms of peak performance.  On a country basis, 42.8% of the top 500's aggregate supercomputing power is located in the US (note that the NSA, which houses some of the most powerful systems in the world, stopped reporting in 1998).  Japan is second, with 19.2% of the world's supercomputing power (a jump from 6.7% just 12 months ago).  Rounding out the top five are China (14.2%, up from 12.1% six months ago), France (5.0%), and Germany (4.9%).  Note that France replaced Germany in the number four position in the latest list.  The UK, which ranked third two years ago (with 5.5% of the world's supercomputing power), is now in 6th place, housing just 3.7%.

Aside from providing the most up-to-date supercomputing statistics, the semi-annual list also publishes the historical progress of global supercomputing power – as well as a reasonably accurate projection of what lies ahead.  Following is a log chart summarizing the progression of the top 500 list since its inception in 1993, along with a ten-year projection:

Today, a desktop with an Intel Core i7 processor operates at about 100 gigaflops (excluding the GPU on the graphics card from our calculations) – or the equivalent of an "entry-level" supercomputer on the top 500 list in 2001, or the most powerful supercomputer in the world in 1993.  At the highest end, the power of Japan's K Computer is equivalent to the combined performance of the world's top 500 supercomputers just four years ago.  Moreover, IBM and Cray are on track to construct two supercomputers, each with a targeted performance of 20 petaflops (20 million gigaflops), sometime next year.  Code-named "Sequoia" and "Titan," these supercomputers together will possess more than half of the combined performance of all the supercomputers in today's top 500 list (see above chart) once they come online.  By the 39th semi-annual edition of the Top 500 list next June, the combined performance of the world's 500 fastest supercomputers should exceed 100 petaflops (compared to 74.2 petaflops today and 58.9 petaflops in June 2011).
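To give a feel for the kind of extrapolation behind the ten-year projection mentioned above, here is a naive constant-rate sketch (in Python) that uses only the two aggregate figures quoted in the preceding paragraph. The constant semi-annual growth assumption is ours, not the Top 500 project's methodology, so treat the dates as ballpark figures only.

    import math

    june_2011, nov_2011 = 58.9, 74.2   # aggregate Top 500 performance, in petaflops
    factor = nov_2011 / june_2011      # growth per semi-annual list (~1.26x)

    def lists_until(target_pf):
        """Number of additional semi-annual lists until the aggregate reaches target_pf."""
        return math.log(target_pf / nov_2011) / math.log(factor)

    print(f"semi-annual growth factor: {factor:.2f}x (~{factor ** 2:.2f}x per year)")
    for label, target in [("100 petaflops", 100.0), ("1 exaflop", 1000.0)]:
        n = lists_until(target)
        print(f"{label}: ~{n:.1f} more lists, i.e. around {2011.9 + n / 2:.1f}")

Even this crude two-point extrapolation puts the aggregate list near 100 petaflops in 2012 and near an exaflop within roughly six years, broadly in line with the projections discussed here.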

Simulations that would have taken 10 years of computing time on the most powerful supercomputer 12 months ago will take just two years on the K Computer, and just one year once Sequoia comes online (roughly speaking, since Linpack, the benchmark used to rank supercomputers, is not exactly representative of real-world performance; see the rough arithmetic check below).  Tasks that take an immense amount of computing time today – such as weather forecasting, gene sequencing, airplane and automobile design, protein folding, etc. – will continue to be streamlined as newer, more efficient processors and software are designed.  By 2018-2019, the top supercomputer should reach a sustained performance of an exaflop (i.e. 1,000 petaflops); this is both SGI's and Intel's stated goal.  IBM believes such a system is needed to support the "Square Kilometre Array," a radio telescope in development that will be able to survey the sky 10,000 times faster than ever before and will be 50 times more sensitive than any current radio instrument, and that should provide better answers to questions about the origin and evolution of the universe.

The ongoing "democratization" of the supercomputing industry should also result in improvements in solar panel designs, better conductors, more effective drugs, etc.  As long as global technology innovation isn't stifled, the outlook for global productivity growth – and by extension, global economic growth and standard-of-living improvements – will remain bright for years to come.  Advances in material design should also propel the private sector's efforts to commercialize space travel and reduce the costs of launching satellites.  Should the quantum computer be commercialized soon (note that quantum computing advances are coming at a dramatic rate), subscribers should get ready for the next major technological revolution (and secular bull market) by 2014 to 2020.  Make no mistake: the impact of the next technological revolution will dwarf that of the first and second industrial revolutions.
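Here is that rough check: scaling the quoted 10-year baseline by the Linpack figures cited above yields numbers in line with the rounded claims (a short Python sketch; the 20-petaflop figure is Sequoia's stated target, and real workloads rarely scale this cleanly, so treat the results as order-of-magnitude estimates only).

    # Rough sanity check of the runtime claims, scaling by the Linpack figures
    # quoted in the text. Real workloads rarely scale this cleanly, so treat
    # these as order-of-magnitude numbers only.
    baseline_years = 10.0
    systems_pf = {
        "Tianhe-1A (2010 leader)": 2.57,
        "K Computer": 10.51,
        "Sequoia (2012 target)": 20.0,
    }

    reference = systems_pf["Tianhe-1A (2010 leader)"]
    for name, pf in systems_pf.items():
        print(f"{name}: ~{baseline_years * reference / pf:.1f} years")

The simple flops ratio gives roughly 2.4 years on the K Computer and 1.3 years on Sequoia, consistent with the "two years" and "one year" figures above.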

Signing off,

Henry To, CFA, CAIA
