BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Spectrum Scale User Group - ECPv6.15.10//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.spectrumscaleug.org
X-WR-CALDESC:Events for Spectrum Scale User Group
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20190331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20191027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20211031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200713T160000
DTEND;TZID=Europe/London:20200713T173000
DTSTAMP:20260405T191924Z
CREATED:20200611T073812Z
LAST-MODIFIED:20220128T180933Z
UID:1805-1594656000-1594661400@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 002 - Best Practices for building a stretched cluster
DESCRIPTION:Talk 2 in the SSUG::Digital series looks at how to build a stretched cluster. What are the best practices? What pitfalls are there? Why would you consider a stretched cluster built with Spectrum Scale\, as opposed to one of the alternative approaches to high availability? How do stretched clusters work\, and what considerations go into planning a successful stretched cluster? We will examine the theory behind Spectrum Scale stretched clusters\, review some best practices for designing stretched clusters\, and talk about a few cases where stretched clusters have been successfully deployed. \n \nDownload slides here \n \nQ&A\nQ: For the DR use case where ClusterA and ClusterB are the two separate data centres (DC A and DC B)\, do I need my tiebreaker quorum node installed in Data Centre C?\nA: (This is covered in the presentation.) It is recommended to have the tiebreaker quorum node at a third site\, but it could be in one of the two sites\, with the caveat that if that site goes down\, the second site will not be able to stay up. \nQ: The documentation shows that high-speed shared storage is needed… does it mean that the SAN fabric should be merged over ISL for volume allocation across sites?\nA: When using Spectrum Scale replication for stretched clusters\, there is no need for the SAN to be extended across the sites. The stretched cluster architecture described in the presentation works even when the underlying storage does not replicate the data across sites. \nQ: Will there be any performance difference between an extended SAN and accessing NSDs over the network via their owner?\nA: Aside from the protocol difference (block vs. file)\, it depends on the type of connectivity you have to the SAN versus the network. 
Spectrum Scale has been adding more resiliency around network behaviour in recent releases (e.g. the proactiveReconnect feature was recently added to Spectrum Scale). \nQ: Is the 10ms latency required between SiteA\, SiteB and also the tiebreaker quorum node? Can my tiebreaker quorum node have higher latency? \nA: Yes\, the third site can have a higher latency\, but it should still be “within reason”\, so maybe double that number\, i.e. 20ms. It is recommended to keep it under a second. \nQ: Is a tiebreaker node hosted on AWS or any other cloud provider a supported configuration?\nA: Yes\, we have customers who use a public cloud for their third site. \nQ: What are the RPO and RTO?\nA: Remember that this is synchronous replication\, so as long as you don’t run out of space on your storage\, the RPO is 0. The RTO depends on your workload and infrastructure: the rate of data change\, your storage\, and the WAN. \nQ: How do you check/measure the rate of data change?\nA: This really depends on the application. If you have already implemented Spectrum Scale\, you can use the historical data from performance monitoring within Spectrum Scale to estimate the rate of data change. \nQ: Do you have any general tips/recommendations regarding CES in a stretched cluster?\nA: The CES nodes in your cluster need to be split between the two sites\, as they are still part of a single cluster. SMB performs its own locking with the ctdb component\, so the latency between the CES nodes needs to be fairly small. Also be aware that if you have different address spaces at the two sites\, there may not be an automated failover of services and you may need to perform the failover manually. 
\n \nUser group host: Bill Anderson\nSpeaker: Lindsay Todd\nBio: As part of the Software Defined Infrastructure team at the IBM Washington Systems Center (WSC)\, Lindsay provides deep technical expertise with Spectrum Scale (GPFS)\, leveraging prior experience he gained while using it as a customer himself at a university supercomputing center. Lindsay continues exploring and using Spectrum Scale for WSC infrastructure needs\, as well as helping many clients use it to build innovative solutions to their business problems.\nBlog: https://parallelstorage.com\nLinkedIn: https://linkedin.com/in/rltodd
URL:https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-best-practices-for-building-a-stretched-cluster/
LOCATION:Digital Event
CATEGORIES:Expert Talks
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200727T160000
DTEND;TZID=Europe/London:20200727T173000
DTSTAMP:20260405T191924Z
CREATED:20200618T102210Z
LAST-MODIFIED:20220128T181050Z
UID:1817-1595865600-1595871000@www.spectrumscaleug.org
SUMMARY:SSUG::Digital: 003 - Strategy Update
DESCRIPTION:Spectrum Scale Strategy Update\nToday is the AI era and we are going through a huge explosion of data. Besides the AI revolution\, we have clouds and hybrid clouds\, and data is moving from “on-prem” to various clouds\, multi-clouds and back. Coupled with this data growth\, hardware is evolving by factors of 10. The IBM Spectrum Scale team continues to invest heavily in adding exciting new features and technology to maintain its leadership as a premier file system. In this session\, Wayne Sawdon (CTO) and Ted Hoover (Program Director) of the Spectrum Scale development team will give an overview of recent and upcoming features and the strategy for Spectrum Scale.\n \nDownload slides here \nQ&A\nNone \nUser group host: Bob Oesterlin\nSpeaker: Wayne Sawdon\nBio: Wayne joined IBM in 1982 and worked on a variety of research projects\, including the QuickSilver Transactional Operating System. He spent most of the ’90s on educational leave at Carnegie Mellon University\, working on Distributed Shared Memory and Software Defined Computer Architecture. Upon returning\, he joined the TigerShark research project\, which became IBM’s General Parallel File System. Although Wayne has worked on most of the file system\, he only admits to working on its data management. These days\, Wayne serves as the CTO for Spectrum Scale and ESS.\nSpeaker: Ted Hoover\nBio: Ted Hoover is a Program Director within IBM’s Spectrum Scale product development organisation. Ted is responsible for the worldwide development of the Spectrum Scale cloud\, container\, and performance engineering teams.
URL:https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-strategy-update/
CATEGORIES:Expert Talks
END:VEVENT
END:VCALENDAR