New Rules of the Road for Memory and Storage in Large-Scale Computing
This IDC study assesses the changing role of central and distributed memory/storage architectures for HPC problems. A range of new high-performance data analytics (HPDA) algorithms and applications differ substantially from their traditional HPC modeling and simulation counterparts, and they will ultimately drive the development of new hardware, software, and systems architectures. Demand for new HPDA visualization capabilities will grow as HPDA applications increasingly call for better ways to display data output, helping subject-matter experts conduct deeper and more substantive analysis. Finally, the need for effective workflow management tools will become even more acute as more HPDA systems are required to pull double duty: serving as a batch system for large static data set jobs while also operating as a transactional system that ingests static and live data sets along with continual user input, perhaps in a 24-hour-a-day uptime environment.
IDC sees the role of storage as a distinct architectural building block likely undergoing significant changes in response to emerging and evolving HPDA requirements. Ultimately, the distinction between computational and storage servers may dissolve, with each losing its individual mission and blurring into an overall scheme of data management and processing. — Bob Sorensen, research vice president, Technical Computing