This episode will discuss the Spectrum Scale Container Storage Interface (CSI) driver. CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on container orchestration systems such as Kubernetes and OpenShift. The Spectrum Scale CSI driver gives your containers fast access to files stored in Spectrum Scale, with capabilities such as dynamic provisioning of volumes and read-write-many access.
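For illustration, this is roughly what dynamic provisioning and read-write-many access look like from the Kubernetes side: a PersistentVolumeClaim referencing a storage class served by the CSI driver. The `storageClassName` below is a hypothetical placeholder; the actual class name depends on how the Spectrum Scale CSI driver is configured in your cluster.

```yaml
# Sketch of a PVC requesting a dynamically provisioned volume.
# The storage class name is an assumption for illustration only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-pvc
spec:
  accessModes:
    - ReadWriteMany            # shared read-write access across pods
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibm-spectrum-scale-csi   # hypothetical class name
```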
Spectrum Scale is a highly scalable, high-performance storage solution for file and object storage. It started more than 20 years ago as a research project and is now used by thousands of customers. IBM continues to enhance Spectrum Scale in response to recent hardware advancements and evolving workloads.
This presentation will discuss selected improvements in Spectrum Scale V5, focusing on inode management, vCPU scaling, and NUMA considerations.
|Michael Harris||Mike is a Senior Software Engineer on the Spectrum Scale Core Team. Mike has a deep background in OS kernels, device drivers, virtualization, and system software, with a focus on NUMA, atomics, and high-CPU-count concurrency. On GPFS he focuses on NUMA and scaling, as well as DMAPI, host file system integration, and system calls.|
|Karthik Iyer||Karthik Iyer is a Senior Software Engineer in Spectrum Scale Core. Karthik has 18 years of design and development experience in distributed system software, specifically in the areas of file system core and database management. Karthik also specialises in troubleshooting Spectrum Scale corruption-related issues.|
Update on File Create and MMAP performance, optimised code for small DIO.
Spectrum Scale is a highly scalable, high-performance storage solution for file and object storage. IBM continues to enhance Spectrum Scale performance, in response to recent hardware advancements and evolving workloads.
This presentation will discuss performance-related improvements in Spectrum Scale V5, focusing on enhancements made in support of AI and HPC use cases, including improvements to MMAP reads, file create performance, and small direct IO. In addition, we will review some performance numbers measured on the IBM ESS 5000.
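To illustrate what "small direct IO" means at the application level (a generic POSIX sketch, not Spectrum Scale code): direct I/O bypasses the page cache, and on Linux `O_DIRECT` requires block-aligned buffers, offsets, and sizes, which is one reason small direct reads are a distinct optimisation target. A minimal sketch, assuming a Linux system:

```python
import mmap
import os
import tempfile

BLOCK = 4096  # alignment O_DIRECT typically requires (page/block aligned)

# Create a file containing one block of data.
fd, path = tempfile.mkstemp()
os.write(fd, b"A" * BLOCK)
os.close(fd)

# O_DIRECT needs an aligned buffer; anonymous mmap memory is page-aligned.
buf = mmap.mmap(-1, BLOCK)

flags = os.O_RDONLY | getattr(os, "O_DIRECT", 0)
try:
    fd = os.open(path, flags)
except OSError:
    # Some filesystems (e.g. tmpfs) reject O_DIRECT; fall back to cached I/O.
    fd = os.open(path, os.O_RDONLY)
try:
    n = os.readv(fd, [buf])  # read one aligned block into the aligned buffer
finally:
    os.close(fd)
    os.remove(path)

assert n == BLOCK and bytes(buf) == b"A" * BLOCK
```

The alignment bookkeeping (aligned buffer, whole-block transfer) is exactly the overhead that makes small direct I/Os expensive relative to cached reads.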
Q: I assume a copy of these charts will be posted to the Spectrum Scale User Group “Presentations” web page?
A: Yes, for all episodes the slides and video should be posted afterwards.
Q: Can you expand on other areas of GPFS performance improvement that IBM is working on now?
A: Which areas would you like to see improved?
Q: Will prefetch still happen after the slow second IO?
A: Regarding ‘will prefetch still happen after the slow second IO’ – I know that Ulf said we should handle further prefetch questions in another talk, but let me comment on one case: we make prefetch decisions after the associated I/Os are complete, so a slow I/O can delay the decision to start prefetching.
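The timing point in that answer can be sketched with a toy model (a hypothetical policy for illustration, not actual GPFS code): if the prefetcher decides only when a read completes, a slow second read pushes back the earliest moment prefetching can start.

```python
# Toy model of completion-driven prefetch: the decision to prefetch is
# made only when a read completes, so prefetch cannot begin before the
# triggering I/O has finished.
def prefetch_start_time(io_completions, trigger_after=2):
    """Earliest time prefetch can begin, assuming a (hypothetical) policy
    that triggers after `trigger_after` sequential reads have completed."""
    return io_completions[trigger_after - 1]

fast = [1, 2]   # second read completes at t=2 -> prefetch can start at t=2
slow = [1, 50]  # second read is slow           -> prefetch delayed to t=50

assert prefetch_start_time(fast) == 2
assert prefetch_start_time(slow) == 50
```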
|John Lewars (IBM)||John Lewars is a Senior Technical Staff Member leading performance engineering work in the IBM Spectrum Scale development team. He has been with IBM for over 20 years, working first on several aspects of IBM's largest high performance computing systems, and later on the IBM Spectrum Scale (formerly GPFS) development team. John's work on the Spectrum Scale team includes working with large customer deployments and improving network resiliency, along with co-leading development of the team's first public cloud and container support deliverables.|
|Jürgen Hannappel (DESY)||Jürgen Hannappel works in the scientific computing group of the DESY IT department on data management for EuXFEL and Petra III. With a background in particle physics, his interests shifted towards computing over time as his place of work moved from CERN and Bonn University to DESY.|
|Olaf Weiser (IBM)||Olaf has worked with GPFS for over 15 years. He started his GPFS career as a technical administrator at one of the world's biggest telecommunication companies. For more than 10 years he has been with IBM as a storage consultant and performance specialist. Recently, he joined IBM Research and Development, where he works on enhancements to Spectrum Scale that address client and customer needs.|
Spectrum Scale Strategy Update

Today is the AI era and we are experiencing a huge explosion of data. Besides the AI revolution, we have clouds and hybrid clouds, and data is moving from “on-prem” to various clouds, multi-clouds, and back. Coupled with this data growth, hardware is evolving by factors of 10. The IBM Spectrum Scale team continues to invest heavily in adding exciting new features and technology to maintain its leadership as a premier file system. In this session, Wayne Sawdon (CTO) and Ted Hoover (Program Director) of the Spectrum Scale development team will give an overview of recent and upcoming features and the strategy for Spectrum Scale.
|Wayne Sawdon||Wayne joined IBM in 1982 and worked on a variety of research projects, including the QuickSilver Transactional Operating System. He spent most of the '90s on educational leave at Carnegie Mellon University, working on Distributed Shared Memory and Software Defined Computer Architecture. Upon returning, he joined the TigerShark research project, which became IBM's General Parallel File System. Although Wayne has worked on most of the file system, he only admits to working on its data management. These days, Wayne serves as the CTO for Spectrum Scale and ESS.|
|Ted Hoover||Ted Hoover is a Program Director in IBM’s Spectrum Scale product development organisation. Ted is responsible for the worldwide Spectrum Scale cloud, container, and performance engineering development teams.|