BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Spectrum Scale User Group - ECPv6.16.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Spectrum Scale User Group
X-ORIGINAL-URL:https://www.spectrumscaleug.org
X-WR-CALDESC:Events for Spectrum Scale User Group
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20190331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20191027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20211031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20201021T160000
DTEND;TZID=Europe/London:20201021T170000
DTSTAMP:20260514T104611
CREATED:20200921T123915Z
LAST-MODIFIED:20220128T181136Z
UID:2014-1603296000-1603299600@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 007 - Manage the lifecycle of your files using the policy engine
DESCRIPTION:This episode will provide a comprehensive introduction to the IBM Spectrum Scale policy engine. It highlights the underlying architecture and how policies are executed in an IBM Spectrum Scale cluster. The episode also discusses example rules and policies facilitating Information Lifecycle Management\, accompanied by practical tips. \nDownload slides here \nReferences\n\nWhitepaper: IBM Spectrum Scale ILM and Archiving Policies – A practical Guide\nSpectrum Scale ILM policy examples and scripts\nApache Tika\n\nQ&A\nQ: Which type of nodes participate in policy execution?\nA: It depends on the nodes specified with the -N option of the mmapplypolicy command. If the -N option is not specified\, the command runs parallel instances of the policy code on the nodes that are specified by the defaultHelperNodes attribute of the mmchconfig command. If -N is specified\, the command runs parallel instances on the nodes or node class specified with the -N option. For more information see the IBM Spectrum Scale knowledge center: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.5/com.ibm.spectrum.scale.v5r05.doc/bl1adm_mmapplypolicy.htm \nQ: Can I somehow identify the type of a file via the policy engine\, e.g. via the magic byte? Or do I have to rely on the file extension?\nA: The policy engine does not allow access to the data – only the file’s metadata\, including extended attributes\, can be evaluated by the policy engine. To identify the type of a file with the policy engine\, an EXTERNAL LIST rule can be used along with an external script that determines the type of the files. \nQ: Will the external tool process the file list in parallel on all nodes which are used to generate the file list?\nA: Yes\, if an external tool or interface script is defined in an EXTERNAL POOL rule\, then this script is executed on all nodes that are specified with the -N option of the mmapplypolicy command. This assumes that all nodes specified with the -N option have access to the interface script. If this is not the case\, then the policy run fails. You can control the number of parallel instances of the external pool script with the -m option\, and the number of files passed to one instance with the -B option\, of the mmapplypolicy command. \nQ: Are there any limitations or recommendations around the length of rules in policy files? For example\, we have ~750 filesets for which we want to place data on a specific pool. Should we have just one rule\, or many rules for this?\nA: Placement policies are stored in a single file. The challenge is not so much the length of the file but the number of placement rules contained in the policy file. Whenever a file is created\, the policy engine must walk through all rules to find a match. If there are many rules\, this will delay file creation\, so I recommend keeping the number of placement rules low. For example\, you could organize the placement policies by storage pool. There is a limit of eight storage pools\, so this would lead to at most eight placement rules. In each rule you can use the FOR FILESET clause to specify multiple filesets to be placed on a pool. \nUser group host: Bob Oesterlin\nSpeakers:\nNils Haustein: Nils Haustein is a Senior Technical Staff Member with IBM Systems. He is responsible for the design and implementation of backup\, archiving\, file and object storage solutions. Nils provides guidance to IBM teams and consults with clients and business partners worldwide. He has co-authored the book "Storage Networks Explained". As a leading IBM Master Inventor he has created more than 170 patents and is a respected mentor for the technical community worldwide.
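\nFor illustration\, a minimal policy sketch along the lines discussed above (the file system\, pool\, fileset and script names are hypothetical):\n/* Placement rules\, installed with: mmchpolicy gpfs0 placement.rules */\nRULE 'byFileset' SET POOL 'data' FOR FILESET ('projects'\,'scratch')\nRULE 'default' SET POOL 'system'\n/* EXTERNAL LIST rule driven by an external classification script\, run with\, e.g.: mmapplypolicy gpfs0 -P list.rules -N nsdNodes -m 2 -B 100 -I test */\nRULE EXTERNAL LIST 'filetype' EXEC '/usr/local/bin/classify.sh'\nRULE 'allFiles' LIST 'filetype' WHERE FILE_SIZE > 0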
URL:https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-ilm-policy-engine/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20201006T160000
DTEND;TZID=Europe/London:20201006T170000
DTSTAMP:20260514T104611
CREATED:20200921T123555Z
LAST-MODIFIED:20220128T180818Z
UID:2011-1602000000-1602003600@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 006 - Persistent Storage for Kubernetes and OpenShift environments
DESCRIPTION:This episode will discuss the Spectrum Scale Container Storage Interface (CSI) driver. CSI is a standard for exposing arbitrary block and file storage systems to containerized workloads on container orchestration systems like Kubernetes and OpenShift. Spectrum Scale CSI provides your containers fast access to files stored in Spectrum Scale\, with capabilities such as dynamic provisioning of volumes and read-write-many access. \nDownload slides here \nSee also Episode 002: Best Practices for building a stretched cluster \nQ&A\nQ: This slide (titled “Spectrum Scale CSI Driver – Architecture”) shows the CPU architecture is x86.\nA: Yes\, with Spectrum Scale CSI Driver 2.0.0 only x86 is supported. Support for other architectures (IBM Power and IBM Z) will be provided in upcoming releases (IBM's usual roadmap disclaimers apply). \nQ: Is the management of storage classes available via Ansible?\nA: Setting up a storage class is a one-time operation. While it might be done using Ansible (and Kubernetes integration modules)\, clients usually do the management using the Kubernetes or OpenShift CLI or GUI. \nQ: Will the slides be provided after this presentation?\nA: Yes. You will find the chart decks\, recordings\, Q&A and related information for all past talks\, including this one\, at https://www.spectrumscaleug.org/experttalks/. \nQ: Once you have CSI driver support for non-x86_64 platforms\, will the Spectrum Scale cluster be able to be heterogeneous (AIX\, Linux\, x86_64 and ppc64le)? Will this cluster support AIX NSD-only nodes?\nA: In the first release for non-x86_64 platforms\, all worker nodes that have the Spectrum Scale client installed need to be of the same CPU architecture and the same operating system. If there are AIX NSD nodes\, those must be outside of the Kubernetes cluster. AIX NSD-only nodes might be integrated by remote mounting the storage cluster to a client Spectrum Scale cluster that runs the Kubernetes workload. \nQ: Is a network load balancer a prerequisite for the CSI deployment?\nA: No\, it isn’t. \nQ: Is there a possibility to have Spectrum Scale clients installed within containers?\nA: We are working on a capability called Container Native Spectrum Scale (CNSS)\, where Spectrum Scale will run inside a container. The initial release is planned for December 2020. (Disclaimer: all dates are subject to change; IBM's usual roadmap disclaimers apply.) \nQ: Do we need to have an x86 “only” Spectrum Scale/OpenShift cluster and a ppc64le “only” Spectrum Scale/OpenShift cluster?\nA: The requirement of the same CPU architecture and the same operating system applies only to the Spectrum Scale client nodes which are part of the Kubernetes/OpenShift cluster. NSD servers can be on another platform (as per the Spectrum Scale support matrix at https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html). \nQ: Any plans for the ability to self-provision Spectrum Scale clusters with containers?\nA: Container Native Spectrum Scale (CNSS) will have an Operator that will deploy and configure a Spectrum Scale cluster automatically. It will also remote mount the file system from a Spectrum Scale storage cluster. \nQ: One of the issues that we are trying to solve is to isolate Spectrum Scale I/O with respect to each tenant/application/user on a single server\, just like how we can isolate CPU/network with cgroups. Would Spectrum Scale on containers help us in isolating storage I/O or applying QoS to it?\nA: Running Spectrum Scale in a container\, or CSI by itself\, will not address QoS. A new fileset-based QoS capability\, with CSI\, will be able to handle this in a future release. (Disclaimer: all dates are subject to change; IBM's usual roadmap disclaimers apply.) \nQ: Does OpenShift have to be managed via the web GUI\, or can it be controlled via a CLI?\nA: You are free to use either the CLI or the GUI. \nUser group host: Bill Anderson\nSpeakers:\nSmita Raut: Smita Raut is a Senior Software Engineer with IBM Storage Labs in Pune\, India. She works with the IBM Spectrum Scale development team as the architect for persistent storage for containers. In her nine years with IBM\, she has led the development of various projects\, including the Object protocol for IBM Spectrum Scale and the enablement of IBM Spectrum Scale on public cloud. She is an active technical blogger and has published several blogs on the object protocol and the container storage interface driver.\nHarald Seipp: Harald Seipp is a Senior Technical Staff Member with IBM Systems in Germany. He is the founder and Technical Leader of the Center of Excellence for Cloud Storage as part of the EMEA Storage Competence Center. He provides guidance to worldwide IBM teams across organizations\, and works with customers and IBM Business Partners across EMEA to create and implement complex storage cloud architectures. His more than 25 years of technology experience include previous job roles as Software Developer\, Software Development Leader\, Lead Developer and Architect for successful software products\, and he is co-inventor of an IBM storage product. He holds various patents on storage and networking technology.\nRenar Grunenberg: Renar Grunenberg has been with HuK-Coburg for 27 years. He leads the backup and storage team and is responsible for all storage and backup infrastructure in his department and company. Renar has 15 years of experience with Spectrum Scale\, including CES\, CSI\, ESS and the core functionality. In this episode Renar will discuss a use case for Kafka self-service with K8s and Spectrum Scale CSI.\nSimon Thompson: Simon Thompson is the Research Computing Infrastructure Architect within Advanced Research Computing at the University of Birmingham. He oversees the infrastructure and systems team\, running the University's HPC and research data systems. This involves experimenting with (and breaking) new technology. Simon is also chair of the Spectrum Scale user group in the UK.
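\nAs an illustration of the dynamic provisioning with read-write-many access discussed above\, a minimal PersistentVolumeClaim sketch (the storage class name is site-specific and hypothetical here):\ncat <<EOF | kubectl apply -f -\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: scale-pvc\nspec:\n  accessModes:\n    - ReadWriteMany\n  resources:\n    requests:\n      storage: 10Gi\n  storageClassName: ibm-spectrum-scale-csi-fileset\nEOF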
URL:https://www.spectrumscaleug.org/event/ssugdigital-persistent-storage-for-containers-with-spectrum-scale/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200922T160000
DTEND;TZID=Europe/London:20200922T173000
DTSTAMP:20260514T104611
CREATED:20200902T065507Z
LAST-MODIFIED:20220128T180834Z
UID:1942-1600790400-1600795800@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 005 - Update on functional enhancements in Spectrum Scale (inode management\, vCPU scaling\, considerations for NUMA)
DESCRIPTION:Spectrum Scale is a highly scalable\, high-performance storage solution for file and object storage. It started more than 20 years ago as a research project and is now used by thousands of customers. IBM continues to enhance Spectrum Scale in response to recent hardware advancements and evolving workloads.\nThis presentation will discuss selected improvements in Spectrum Scale V5\, focusing on inode management\, vCPU scaling and considerations for NUMA. \nPart 1 (inode management)\nPart 2 (vCPU scaling and NUMA considerations)\nDownload slides here \nQ&A – inode management\nQ: If I make the block size 8k\, can an inode stuff a file of that size?\nA: No\, the maximum inode size is 4K. As discussed on the call\, changing the block size (including the metadata block size) doesn’t impact the size of an inode\, which is currently limited to 4K. \nQ: When files are created\, an inode number is assigned from 1 to the maximum. 32-bit applications can only address inodes up to 4 billion. With millions of temp files created during application runs\, inodes get used up very quickly. But once the job is finished the files are deleted\, while the inodes are not recycled. This results in filling up the inodes while the file system isn’t full. Can inodes/files that have been deleted have their inode recycled for future use?\nA: Inodes do get reused after deletion. How many independent filesets do you have? If you only had the root fileset and you set maxInodes to less than 4 billion\, then you can never have an inode number greater than 4 billion. \nQ: Do we have any idea how long it takes to re-layout the inode allocation map? And can that be done while the file system is mounted?\nA: The run time for this operation will depend on the size of the existing inode allocation map file\, since we will be migrating data from the existing map to the new map. In one customer engagement the migration completed in an hour\, and in another case it took 18 hours.\nWhile this operation can theoretically be done while the file system is mounted\, we have currently restricted it to be done with the file system offline for safety reasons. We are evaluating making this operation online in a future release. The re-layout parameters can\, however\, be tested with the file system mounted. \nQ: Are there counters that report the lock collisions/waiters for lock contention that would indicate whether a re-layout is desirable?\nA: ‘mmfsadm dump ialloc’ provides counters on segment search. Grep for ‘inodes allocated from’. Ideally\, we expect allocations to happen from ‘inodes allocated from list of prefetched/deleted inodes’ or ‘inodes allocated from current ialloc segment’. Also\, long waiters during file creation are an indication of inode space pressure. \nQ: Why does a large NumNodes value influence mmdf run time? (We have seen runs of some minutes.)\nA: mmdf fetches cached data. This should not be impacted by cluster size. \nQ: How does NumNodes relate to the number of segments?\nA: The number of inode allocation map segments is chosen such that every node can find a segment with free inodes even if 75% of all segments are full. This has to do with the inode expansion getting triggered only when the inode space is 75% full. We want inode allocation to continue while the inode expansion is taking place. This means that the number of segments is roughly 4 times NumNodes. \nQ: Are there any general recommendations for initial inode allocation? I know this depends on the file system’s expected use. We typically just base it roughly off existing systems.\nA: Use the default value of allocated inodes (by omitting the NumInodesToPreallocate argument of the --inode-limit option of mmcrfs/mmcrfileset) when creating a file system/independent fileset\, and let the inodes expand on demand. \nQ: How is the inode allocation map\, and its segmentation\, affected if metadata NSDs are added or deleted?\nA: The inode allocation map is not affected by newly added NSDs\, as it only tracks inode state. The block allocation map is the one that tracks free/used disk blocks and will get updated on disk add/delete. \nQ: Can we shrink the inode space if we by mistake allocate a large inode space using --inode-limit?\nA: No. \nQ: When files are deleted\, does the recovery of free inodes happen in a lazy way? One customer has just reported that after deleting data from a 5TB file system\, the free space is not reflected on the file system.\nA: Yes. The files are deleted asynchronously in the background. You can run ‘mmfsadm dump deferreddeletions’ to see the number of inodes that are queued for deletion in mounted file systems. \nQ: In what version did automatic inode expansion become available?\nA: It has been available since the earliest Spectrum Scale versions. \nQ: How do you identify the metanode?\nA: Here is an example:\nls -i testfile\n68608 testfile \nThen find this inode number in ‘mmfsadm dump files’. (Note that the mmfsadm dump command should be avoided in production.)\n===== dump files =====\n[… search on inode]\n  inode 68608 snap 0 USERFILE nlink 1 genNum 0x49DE6F0F mode …\nThe above is an example of how you might look up the metanode for a file. \nYou can map the cluster name by looking at the ‘tscomm’ section of a dump\, e.g.:\n===== dump tscomm =====\n[…]\nDomain \, myAddr <c1n2> (was: <c1p0>)[…]\nUID domain 0x1800DF65038 (0xFFFFB6768DF65038) Name “c202f06fs04a-ib1.gpfs.net” \nQ: Is the metanode transient?\nA: A metanode is a per-file assignment. It lasts for as long as there are open instances of the file. The assignment is dynamic\, and the metanode role may automatically migrate to other nodes for better performance. \nQ: If some nodes went down and the metanode is unable to get updates from those failed nodes\, how are updates maintained by the metanode?\nA: A non-metanode sends its updates to the metanode before it writes any dependent blocks to disk. If the non-metanode went down before it could send its updates\, then log recovery will ensure that there are no inconsistent modifications to disk data by the non-metanode. Spectrum Scale only guarantees persistence of data/metadata from the last sync window. \nQ: Can we prevent the metanode from migrating to a remote node? Also\, will it help in improving performance if we limit the metanode to the storage cluster?\nA: Metanode performance depends on how many nodes are sending metanode updates and how expensive the network send is. The file system uses such heuristics to determine the optimal metanode placement. In most cases it is best to let the file system make this decision. The only known use case for preventing metanode migration to a remote node is if the remote node is in a compute cluster which cannot afford the overhead of metanode operations. For this rare case we have an undocumented configuration parameter to force the metanode to stay in the storage cluster. \nQ: Sometimes when we delete a large amount of data\, it takes significant time for the freed space to show in the df -h output. Do we need to run mmrestripefs to reclaim the deleted space faster?\nA: ‘df -h’ returns cached information on free space. It is likely that the large file that was deleted has not yet freed up its space\, as file deletes happen in the background. You can use ‘mmfsadm dump deferreddeletions’ to get a count of the number of inodes that are queued for background deletion. If the node is not overloaded on I/O and you find that the number of to-be-deleted inodes is not reducing at a reasonable rate (depending on the file size and I/O throughput of the node)\, then we would need to investigate further by collecting dumps and traces. Please open a ticket with IBM support in such a case. The mmrestripefs command is for restoring/rebalancing data and metadata replicas. It would not have any impact on speeding up background file deletion. \nQ&A – vCPU scaling and NUMA considerations\nQ: We now see the following messages in the mmfs log. What do they mean? What is missing?\n[W] NUMA BIOS/platform support for NUMA is disabled or not available. NUMA results are approximated and GPFS NUMA awareness may suffer.\nA: That means libnuma was found but numa_available() returned false. This is a platform firmware functionality shortcoming. Spectrum Scale can still get a lot of information\, as some is derivable from /proc. File a ticket with your server vendor reporting that libnuma::numa_available() returns false. \nQ: So\, any recommendations on POWER9 for SMT settings? AIX versus Linux on Power? We used to suggest smaller SMT modes in the past.\nA: We are running SMT-4 on some large POWER9 systems. Evaluate based on I/O versus workload needs\, as discussed verbally. \nQ: Are there any special NUMA considerations for AMD systems which differ from the NUMA considerations for Intel systems?\nA: This is highly dependent on the processor and chipset\, independent of brand\, and based on what that processor and chipset offer for tuning. We do not have any prescriptive guidance. \nUser group host: Simon Thompson\nSpeakers:\nMichael Harris: Mike is a Senior Software Engineer on the Spectrum Scale Core Team. Mike has a deep background in OS kernels\, device drivers\, virtualization and system software\, with a focus on NUMA\, atomics and concurrency\, and high-CPU-count concurrency. On GPFS he focuses on NUMA and scaling\, as well as DMAPI\, host file system integration and system calls.\nKarthik Iyer: Karthik Iyer is a Senior Software Engineer in Spectrum Scale Core. Karthik has 18 years of design and development experience in distributed system software\, specifically in the areas of file system core and database management. Karthik also specialises in troubleshooting Spectrum Scale corruption-related issues.
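\nAs a minimal illustration of the inode preallocation and inode-usage checks discussed above (the file system and fileset names are hypothetical):\nmmcrfileset gpfs0 projects --inode-space new --inode-limit 1000000:500000   # independent fileset\, maxInodes:allocInodes\nmmlsfileset gpfs0 -L -i   # show per-fileset inode limits and usage\nmmdf gpfs0 -F   # show inode counts for the file system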
URL:https://www.spectrumscaleug.org/event/ssugdigital-deep-dive-in-spectrum-scale-core/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200910T160000
DTEND;TZID=Europe/London:20200910T173000
DTSTAMP:20260514T104611
CREATED:20200721T072537Z
LAST-MODIFIED:20220128T181037Z
UID:1862-1599753600-1599759000@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 004 - Update on Performance Enhancements in Spectrum Scale
DESCRIPTION:Update on file create and MMAP performance\, and optimised code for small direct I/O. \nSpectrum Scale is a highly scalable\, high-performance storage solution for file and object storage. IBM continues to enhance Spectrum Scale performance in response to recent hardware advancements and evolving workloads.\nThis presentation will discuss performance-related improvements in Spectrum Scale V5\, focusing on enhancements made in support of AI and HPC use cases\, including improvements to MMAP reads\, file create performance and small direct I/O. In addition we will review some performance numbers measured on the IBM ESS 5000. \nDownload slides here \nQ&A\nQ: I assume a copy of these charts will be posted to the Spectrum Scale User Group “Presentations” web page?\nA: Yes\, for all episodes the slides and video should be posted afterwards. \nQ: Please expand on other areas of performance improvement within GPFS that IBM is working on now.\nA: Which areas would you like to see improved? \nQ: Will prefetch still happen after the slow second I/O?\nA: I know that Ulf said we should handle any further prefetch questions in another talk\, but let me just comment on one case: we make decisions to prefetch after the associated I/Os are complete\, so prior to prefetch kicking in\, a slow I/O might delay the decision to start prefetching. \nUser group host: Simon Thompson\nSpeakers:\nJohn Lewars (IBM): John Lewars is a Senior Technical Staff Member leading performance engineering work in the IBM Spectrum Scale development team. He has been with IBM for over 20 years\, working first on several aspects of IBM's largest high performance computing systems\, and later on the IBM Spectrum Scale (formerly GPFS) development team. John's work on the Spectrum Scale team includes working with large customer deployments and improving network resiliency\, along with co-leading development of the team's first public cloud and container support deliverables.\nJürgen Hannappel (DESY): Jürgen Hannappel works in the scientific computing group of the DESY IT department on data management for EuXFEL and Petra III. With a background in particle physics\, his interests shifted towards computing over time as his place of work moved from CERN and Bonn University to DESY.\nOlaf Weiser (IBM): Olaf has worked with GPFS for over 15 years. He started his GPFS career in one of the world's biggest telecommunication companies as a technical administrator. For more than 10 years\, Olaf has been with IBM as a storage consultant and performance specialist. Recently he joined IBM Research and Development and works on enhancements in Spectrum Scale to address client and customer needs in the product.
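\nAs a rough\, hypothetical way to exercise the small direct I/O path discussed above (the mount point and sizes are arbitrary examples):\ndd if=/dev/zero of=/gpfs/fs0/dio-test bs=4k count=10000 oflag=direct   # drive small 4 KiB direct writes\nmmdiag --iohist   # inspect the recent I/O history on this node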
URL:https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talks-update-on-performance-enhancements-in-spectrum-scale/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200727T160000
DTEND;TZID=Europe/London:20200727T173000
DTSTAMP:20260514T104611
CREATED:20200618T102210Z
LAST-MODIFIED:20220128T181050Z
UID:1817-1595865600-1595871000@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 003 - Strategy Update
DESCRIPTION:Spectrum Scale Strategy Update\nToday is the AI era and we are going through a huge explosion of data. Besides the AI revolution\, we have clouds and hybrid clouds\, and data is moving from “on-prem” to various clouds\, multi-clouds and back. Coupled with this data growth\, hardware is evolving by factors of 10. The IBM Spectrum Scale team continues to invest heavily in adding exciting new features and technology to maintain its leadership as a premier file system. In this session\, Wayne Sawdon (CTO) and Ted Hoover (Program Director) of the Spectrum Scale development team will give an overview of recent and upcoming features and the strategy for Spectrum Scale.\nDownload slides here \nQ&A\nNone \nUser group host: Bob Oesterlin\nSpeakers:\nWayne Sawdon: Wayne joined IBM in 1982 and worked on a variety of research projects including the QuickSilver transactional operating system. He spent most of the 90s on educational leave at Carnegie Mellon University working on distributed shared memory and software-defined computer architecture. Upon returning he joined the TigerShark research project\, which became IBM's General Parallel File System. Although Wayne has worked on most of the file system\, he only admits to working on its data management. These days\, Wayne serves as the CTO for Spectrum Scale and ESS.\nTed Hoover: Ted Hoover is a Program Director within IBM’s Spectrum Scale product development organisation. Ted is responsible for the worldwide development of the Spectrum Scale cloud\, container and performance engineering teams.
URL:https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-strategy-update/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200713T160000
DTEND;TZID=Europe/London:20200713T173000
DTSTAMP:20260514T104612
CREATED:20200611T073812Z
LAST-MODIFIED:20220128T180933Z
UID:1805-1594656000-1594661400@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 002 - Best Practices for building a stretched cluster
DESCRIPTION:Talk 2 in the SSUG::Digital series looks at how to build a stretched cluster. What are the best practices? What pitfalls are there? Why would you consider a stretched cluster built with Spectrum Scale\, as opposed to one of the alternative approaches to high availability? How do stretched clusters work\, and what considerations go into planning a successful stretched cluster? We will examine the theory behind Spectrum Scale stretched clusters\, review some best practices for designing stretched clusters\, and talk about a few cases where stretched clusters have been successfully deployed. \nDownload slides here \nQ&A\nQ: For the DR use case where ClusterA and ClusterB are two separate data centres (DC A and DC B)\, do I need my tiebreaker quorum node installed in a data centre C?\nA: (This is covered in the presentation.) It is recommended to have the tiebreaker quorum node at a third site\, but it could be in one of the two sites\, with the caveat that if that site goes down\, the second site will not be able to stay up. \nQ: The documentation shows that high-speed shared storage is needed… does this mean that the SAN fabric should be merged over ISL for volume allocation across sites?\nA: When using Spectrum Scale replication for stretched clusters\, there is no need for the SAN to be extended across the sites. The stretched cluster architecture described in the presentation works even when the underlying storage does not replicate the data across sites. \nQ: Will there be any performance difference between an extended SAN and accessing NSDs over the network via their owning servers?\nA: Aside from the protocol difference (block versus file)\, it depends on the type of connectivity you have to the SAN versus the network. Spectrum Scale has been adding more resiliency in recent releases for network behaviour (e.g. the proactiveReconnect feature was recently added to Spectrum Scale). \nQ: Is 10 ms latency required between Site A\, Site B and the tiebreaker quorum node? Can my tiebreaker quorum node have higher latency?\nA: Yes\, the third site can have a higher latency\, but it should still be “within reason”\, so maybe double that number\, i.e. 20 ms. It is recommended to keep it under a second. \nQ: Is a tiebreaker node hosted on AWS or any other cloud provider a supported configuration?\nA: Yes\, we have customers who are using a public cloud for their third site. \nQ: What are the RPO and RTO?\nA: Remember that this is synchronous replication\, so as long as you don’t run out of space on your storage there is zero RPO. The RTO depends on your workload and infrastructure: the rate of data change\, your storage and the WAN. \nQ: How do we check/measure the rate of data change?\nA: This really depends on the application and the rate of data change by the application. If you have already implemented Spectrum Scale\, you can use the historical data from performance monitoring within Spectrum Scale to estimate the rate of data change. \nQ: Do you have any general tips/recommendations regarding CES in a stretched cluster?\nA: The CES nodes in your cluster need to be split between the two sites\, as they are still part of a single cluster. SMB performs its own locking with the ctdb component\, so the latency between the CES nodes needs to be fairly small. Also be aware that if you have different address spaces at the two sites\, there may not be an automated failover of services and you may need to perform the failover manually. \nUser group host: Bill Anderson\nSpeakers:\nLindsay Todd: As part of the Software Defined Infrastructure team at the IBM Washington Systems Center (WSC)\, Lindsay provides deep technical expertise with Spectrum Scale (GPFS)\, leveraging prior experience he gained while using it as a customer himself at a university supercomputing center. Lindsay continues exploring and using Spectrum Scale for WSC infrastructure needs\, as well as helping many clients use it to build innovative solutions to their business problems.\nBlog: https://parallelstorage.com\nLinkedIn: https://linkedin.com/in/rltodd
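\nAs a minimal sketch of the Spectrum Scale replication discussed above (the device\, server and NSD names are hypothetical)\, NSDs at each site go into different failure groups and the file system is created with two copies of data and metadata:\n%nsd: nsd=site1nsd1 device=/dev/sdb servers=nodeA usage=dataAndMetadata failureGroup=1\n%nsd: nsd=site2nsd1 device=/dev/sdb servers=nodeB usage=dataAndMetadata failureGroup=2\nmmcrnsd -F nsd.stanza\nmmcrfs gpfs0 -F nsd.stanza -m 2 -M 2 -r 2 -R 2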
URL:https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-best-practices-for-building-a-stretched-cluster/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200618T160000
DTEND;TZID=Europe/London:20200618T173000
DTSTAMP:20260514T104612
CREATED:20200603T184628Z
LAST-MODIFIED:20220128T180956Z
UID:1749-1592496000-1592501400@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 001 - What is new in Spectrum Scale 5.0.5?
DESCRIPTION:At each of our user group events we pretty much always start off with “What’s new in release XXX?”\, and with Spectrum Scale 5.0.5 having just been released\, we’re doing the same with the new series of SSUG::Digital events. \nDownload slides here \nBlog post: What is new in Spectrum Scale 5.0.5? \nQ&A\nQ: How would one go about obtaining an IBM contact?\nA: Do you have an IBM sales rep already? That would be the first person to contact. If you purchased through a business partner\, that is your first point of contact. \nQ: Are EUS releases going to be consumed preferentially by the ESS code distributions? That might make it easier to coordinate Spectrum Scale and ESS code levels when we need to update Spectrum Scale.\nA: That’s the intention. Of course\, based on the exact timing of releases and ESS needs for new functions\, it might not work out in all cases. \nQ: Regarding thin provisioning support: are there any test cycles for other vendors like Hitachi already happening?\nA: Contact IBM to discuss the specific vendor requirements. If you have a specific piece of hardware you want to see supported\, file an RFE. \nQ: Does the support of compression mean that you will also support the FCM modules in an ESS 3000 or in storage like the IBM FlashSystem 9000/7000/5000?\nA: It is under evaluation to support the FCM modules of IBM FlashSystem and future ESS models with future Spectrum Scale releases. \nQ: Any estimate on the performance differences between Spectrum Scale 4.2.3 and Spectrum Scale 5.0.5?\nA: There are incremental performance improvements in every Spectrum Scale release. There was a significant performance jump from Spectrum Scale 4.2.3 to Spectrum Scale 5.0.0 to meet the performance commitments for CORAL. Some performance improvements have been covered at previous user group meetings and are available at https://www.spectrumscaleug.org/presentations. It is also planned to provide a performance update in a future Expert Talk. \nQ: Are there plans to make the all-to-all daemon connections the default?\nA: No\, not as a default at the moment. See Expert Talk 004 “Performance Update” for more details:\nhttps://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talks-update-on-performance-enhancements-in-spectrum-scale \nQ: Is all-to-all connection establishment limited to the nodes inside a cluster\, or does it include all nodes from remote clusters that are already connected to a file system?\nA: Local and remote. \nQ: The “cp --preserve=xattr” feature: is it something that 5.0.5 will enable for copies to 5.0.5\, i.e. migrating data from 4.X to 5.X\, or only from 5.0.5 to future versions?\nA: You can copy files from Spectrum Scale 5.0.5 to previous and future versions\, preserving extended attributes. You cannot copy the extended attributes from previous versions of Spectrum Scale to Spectrum Scale 5.0.5. \nQ: Is the “cp --preserve…” a function of the RHEL release\, or is there a version of cp included with Spectrum Scale?\nA: The system calls listxattr\, getxattr and setxattr were extended to retrieve ACLs as extended attributes. \nQ: So\, then the version of the Spectrum Scale file system doesn’t matter\, just the version of RHEL? (For the cp question.)\nA: Those system calls are extended by Spectrum Scale at the VFS layer\, so it would depend on the version of Spectrum Scale (and its kernel extensions). \nQ: Are there any plans for Scale protocols to support SMB transparent failover?\nA: As functionality is introduced into the Samba code base\, IBM looks into how it can pick up support for that. This type of failover is a topic being discussed in the Samba community. However\, it’s known to be a hard problem. \nQ: Any news on NFS 4.1 support?\nA: It is planned to support it later this year (subject to the IBM plan commitment disclaimer). \nQ: On Spectrum Scale 4.2.3 and RHEL 7\, we’ve had problems with the Ganesha daemon using steadily more memory over time\, requiring us to fail over / stop / start / fail back periodically. Have Ganesha’s memory requirements been reduced in Spectrum Scale 5.0.5\, or is there better visibility into what is driving memory usage?\nA: This issue was traced back to a C library memory allocation fragmentation issue. A fix was put into the Ganesha code to force the release of this unused fragmented memory. This fix was made available last year in the 5.0.x release stream. \nQ: Hello all\, what is now the strategy to support object protocols like S3? It is lagging behind in currency.\nA: We have a renewed focus on the Object protocol. We plan to support the Train release in the fall release. Going forward we will try to update the Swift/S3 version once a year to make sure it stays current (subject to the IBM plan commitment disclaimer). If you have a specific interest in S3 applications\, please contact us\, as we would like to hear about your requirements and use cases. \nQ: We are working on a deployment of CSI 1.1.0. When is snapshot support happening?\nA: Snapshot support is planned to be available in a CSI driver update coming in late 3Q or early 4Q 2020 (subject to the IBM plan commitment disclaimer). \nQ: Is there any news about the restriction of GUI HA with CSI?\nA: This is a high-priority requirement for the fall release. It’s not officially committed yet\, but we are definitely trying to squeeze it in (subject to the IBM plan commitment disclaimer). \nUser group host: Simon Thompson\nSpeakers:\nChris Maestas: Chris is an Executive Architect for IBM File and Object Storage Solutions with over 20 years of experience deploying and designing IT systems for clients in various spaces. He has experience scaling performance and availability with a variety of file system technologies. He has developed benchmark frameworks to test systems for reliability and validate research performance data. He has also led global enablement sessions\, online and face to face\, discussing how best to position mature technologies like Spectrum Scale alongside emerging technologies in the cloud\, object\, container and AI spaces.\nTwitter: @cdmaestas\nLinkedIn: https://www.linkedin.com/in/cdmaestas\nMathias Dietz: Mathias works in the Spectrum Scale development team in Kelsterbach (Germany) as a software architect responsible for Reliability\, Availability and Serviceability (RAS). Part of his role is to drive reliability improvements into Spectrum Scale and to improve health and performance monitoring\, Proactive Services and CES failover.\nIsmael Solis Moreno: Ismael works in the Spectrum Scale development team in Guadalajara\, Mexico as a data scientist and performance analyst. He is responsible for evaluating the performance of new Spectrum Scale features and releases. Part of his role is to analyze datasets to identify points of performance improvement\, providing insights to the development teams.\nLinkedIn: https://www.linkedin.com/in/ismaelsm
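\nA minimal\, hypothetical illustration of the extended-attribute copy discussed above (the paths are examples only):\ncp --preserve=xattr /gpfs/fs0/src.dat /gpfs/fs0/dst.dat\ngetfattr -d -m - /gpfs/fs0/dst.dat   # verify that the extended attributes were carried over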
URL:https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-what-is-new-in-spectrum-scale-5-0-5/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
END:VCALENDAR