Deployment and Usage Guide for Running AI Workloads on Red Hat OpenShift and NVIDIA DGX Systems with IBM Spectrum Scale
Author: Simon Lorenz
Publisher: IBM Redbooks
ISBN: 0738459097
Category: Computers
Languages: en
Pages: 80
Book Description
This IBM® Redpaper publication describes the architecture, installation procedure, and results for running a typical training application that works on an automotive data set in an orchestrated and secured environment that provides horizontal scalability of GPU resources across physical node boundaries for deep neural network (DNN) workloads. This paper is most relevant for systems engineers, system administrators, and system architects who are responsible for data center infrastructure management and typical day-to-day operations such as system monitoring, operational control, asset management, and security audits. This paper also describes IBM Spectrum® LSF® as a workload manager and IBM Spectrum Discover as a metadata search engine that finds the right data for an inference job and automates the data science workflow. With the help of this solution, the data location, which may span different storage systems, and the time of availability for the AI job can be fully abstracted, which provides valuable information for data scientists.
Data Accelerator for AI and Analytics
Author: Simon Lorenz
Publisher: IBM Redbooks
ISBN: 0738459321
Category: Computers
Languages: en
Pages: 88
Book Description
This IBM® Redpaper publication focuses on data orchestration in enterprise data pipelines. It provides details about data orchestration and how to address typical challenges that customers face when dealing with large and ever-growing amounts of data for data analytics. While the amount of data increases steadily, artificial intelligence (AI) workloads must speed up to deliver insights and business value in a timely manner. This paper provides a solution that addresses these needs: Data Accelerator for AI and Analytics (DAAA). A proof of concept (PoC) is described in detail. This paper focuses on the functions that are provided by the Data Accelerator for AI and Analytics solution, which simplifies the daily work of data scientists and system administrators. This solution helps increase the efficiency of storage systems and data processing to obtain results faster while eliminating unnecessary data copies and associated data management.
IBM Spectrum LSF Suite: Installation Best Practices Guide
Author: Dino Quintero
Publisher: IBM Redbooks
ISBN: 0738458570
Category: Computers
Languages: en
Pages: 94
Book Description
This IBM® Redpaper publication describes IBM Spectrum® LSF® Suite installation best practices, application checks for workload management, and high availability configurations by using theoretical knowledge and hands-on exercises. These findings are documented by way of sample scenarios. This publication addresses topics for sellers, IT architects, IT specialists, and anyone who wants to implement and manage a high-performing workload management solution with LSF. Moreover, this guide provides documentation to transfer how-to skills to technical teams and solution guidance to sales teams. This publication complements the documentation that is available at IBM Knowledge Center and aligns with educational materials that are provided by IBM Systems.
Implementation Guide for IBM Elastic Storage System 5000
Author: Brian Herr
Publisher: IBM Redbooks
ISBN: 0738459224
Category: Computers
Languages: en
Pages: 130
Book Description
This IBM® Redbooks® publication introduces and describes the IBM Elastic Storage® System 5000 (ESS 5000) as a scalable, high-performance data and file management solution. The solution is built on proven IBM Spectrum® Scale technology, formerly IBM General Parallel File System (IBM GPFS). ESS is a modern implementation of software-defined storage, making it easier for you to deploy fast, highly scalable storage for AI and big data. With the lightning-fast NVMe storage technology and industry-leading file management capabilities of IBM Spectrum Scale, the ESS 3000 and ESS 5000 nodes can grow to yottabyte (YB) scalability and can be integrated into a federated global storage system. By consolidating storage requirements from the edge to the core data center, including Kubernetes and Red Hat OpenShift environments, IBM ESS can reduce inefficiency, lower acquisition costs, simplify storage management, eliminate data silos, support multiple demanding workloads, and deliver high performance throughout your organization. This book provides a technical overview of the ESS 5000 solution and helps you plan the installation of the environment. We also explain the use cases where we believe it fits best. Our goal is to position this book as the starting point document for customers who would use the ESS 5000 as part of their IBM Spectrum Scale setups. This book is targeted toward technical professionals (consultants, technical support staff, IT architects, and IT specialists) who are responsible for delivering cost-effective storage solutions with ESS 5000.
IBM FlashSystem Best Practices and Performance Guidelines
Author: Anil K Nayak
Publisher: IBM Redbooks
ISBN: 0738459704
Category: Computers
Languages: en
Pages: 594
Book Description
This IBM Redbooks publication captures several of the preferred practices and describes the performance gains that can be achieved by implementing the IBM FlashSystem® products. These practices are based on field experience. This book highlights configuration guidelines and preferred practices for the storage area network (SAN) topology, clustered system, back-end storage, storage pools and managed disks, volumes, Remote Copy services, and hosts. It explains how you can optimize disk performance with the IBM System Storage Easy Tier® function. It also provides preferred practices for monitoring, maintaining, and troubleshooting. This book is intended for experienced storage, SAN, IBM FlashSystem, SAN Volume Controller, and IBM Storwize® administrators and technicians. Understanding this book requires advanced knowledge of these environments.
Learning Deep Learning
Author: Magnus Ekman
Publisher: Addison-Wesley Professional
ISBN: 0137470290
Category: Computers
Languages: en
Pages: 1106
Book Description
NVIDIA's Full-Color Guide to Deep Learning: All You Need to Get Started and Get Results. "To enable everyone to be part of this historic revolution requires the democratization of AI knowledge and resources. This book is timely and relevant towards accomplishing these lofty goals." -- From the foreword by Dr. Anima Anandkumar, Bren Professor, Caltech, and Director of ML Research, NVIDIA. "Ekman uses a learning technique that in our experience has proven pivotal to success—asking the reader to think about using DL techniques in practice. His straightforward approach is refreshing, and he permits the reader to dream, just a bit, about where DL may yet take us." -- From the foreword by Dr. Craig Clawson, Director, NVIDIA Deep Learning Institute. Deep learning (DL) is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to DL. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others, including those with no prior machine learning or statistics experience. After introducing the essential building blocks of deep neural networks, such as artificial neurons and fully connected, convolutional, and recurrent layers, Magnus Ekman shows how to use them to build advanced architectures, including the Transformer. He describes how these concepts are used to build modern networks for computer vision and natural language processing (NLP), including Mask R-CNN, GPT, and BERT. And he explains how to build a natural language translator and a system that generates natural language descriptions of images. Throughout, Ekman provides concise, well-annotated code examples using TensorFlow with Keras. Corresponding PyTorch examples are provided online, so the book covers the two dominant Python libraries for DL used in industry and academia. He concludes with an introduction to neural architecture search (NAS), exploring important ethical issues and providing resources for further learning.
- Explore and master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation
- See how DL frameworks make it easier to develop more complicated and useful neural networks
- Discover how convolutional neural networks (CNNs) revolutionize image classification and analysis
- Apply recurrent neural networks (RNNs) and long short-term memory (LSTM) to text and other variable-length sequences
- Master NLP with sequence-to-sequence networks and the Transformer architecture
- Build applications for natural language translation and image captioning
NVIDIA's invention of the GPU sparked the PC gaming market. The company's pioneering work in accelerated computing, a supercharged form of computing at the intersection of computer graphics, high-performance computing, and AI, is reshaping trillion-dollar industries such as transportation, healthcare, and manufacturing, and fueling the growth of many others. Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.
3F: FUTURE FINTECH FRAMEWORK
Author: KARTIK SWAMINATHAN
Publisher: Notion Press
ISBN: 1637147244
Category: Business & Economics
Languages: en
Pages: 117
Book Description
The Fintech (Financial Technology) segment is growing exponentially, presenting many business and career opportunities. The Fintech ecosystem is also changing rapidly, with digital technology, innovations, regulations, market structures, and convergence with other sectors. Given such rapid changes and complex developments, there is a need for a simplified and futuristic framework that helps you to:
- Understand various key aspects, trends & developments in Fintech
- Ideate & create innovative Fintech solutions
- Be future ready, both professionally and in terms of the solutions you work on
That is exactly what 3F: Future Fintech Framework is all about. You can read and understand the book even if you know very little about financial services or technology. So, if you are interested in Fintech or associated with it in some form, this book is a must-read.
Exascale Scientific Applications
Author: Tjerk P. Straatsma
Publisher: CRC Press
ISBN: 1351999230
Category: Computers
Languages: en
Pages: 847
Book Description
From the Foreword: "The authors of the chapters in this book are the pioneers who will explore the exascale frontier. The path forward will not be easy... These authors, along with their colleagues who will produce these powerful computer systems will, with dedication and determination, overcome the scalability problem, discover the new algorithms needed to achieve exascale performance for the broad range of applications that they represent, and create the new tools needed to support the development of scalable and portable science and engineering applications. Although the focus is on exascale computers, the benefits will permeate all of science and engineering because the technologies developed for the exascale computers of tomorrow will also power the petascale servers and terascale workstations of tomorrow. These affordable computing capabilities will empower scientists and engineers everywhere." — Thom H. Dunning, Jr., Pacific Northwest National Laboratory and University of Washington, Seattle, Washington, USA
"This comprehensive summary of applications targeting Exascale at the three DoE labs is a must read." — Rio Yokota, Tokyo Institute of Technology, Tokyo, Japan
"Numerical simulation is now a need in many fields of science, technology, and industry. The complexity of the simulated systems coupled with the massive use of data makes HPC essential to move towards predictive simulations. Advances in computer architecture have so far permitted scientific advances, but at the cost of continually adapting algorithms and applications. The next technological breakthroughs force us to rethink the applications by taking energy consumption into account. These profound modifications require not only anticipation and sharing but also a paradigm shift in application design to ensure the sustainability of developments by guaranteeing a certain independence of the applications to the profound modifications of the architectures: it is the passage from optimal performance to the portability of performance. It is the challenge of this book to demonstrate by example the approach that one can adopt for the development of applications offering performance portability in spite of the profound changes of the computing architectures." — Christophe Calvin, CEA, Fundamental Research Division, Saclay, France
"Three editors, one from each of the High Performance Computer Centers at Lawrence Berkeley, Argonne, and Oak Ridge National Laboratories, have compiled a very useful set of chapters aimed at describing software developments for the next generation exa-scale computers. Such a book is needed for scientists and engineers to see where the field is going and how they will be able to exploit such architectures for their own work. The book will also benefit students as it provides insights into how to develop software for such computer architectures. Overall, this book fills an important need in showing how to design and implement algorithms for exa-scale architectures which are heterogeneous and have unique memory systems. The book discusses issues with developing user codes for these architectures and how to address these issues including actual coding examples." — Dr. David A. Dixon, Robert Ramsay Chair, The University of Alabama, Tuscaloosa, Alabama, USA
Rise of the Machines
Author: Thomas Rid
Publisher: Scribe Publications
ISBN: 1925307603
Category: History
Languages: en
Pages: 374
Book Description
Thomas Rid’s revelatory history of cybernetics pulls together disparate threads in the history of technology, from the invention of radar and pilotless flying bombs in World War Two to today’s age of CCTV, cryptocurrencies and Oculus Rift, to make plain that our current anxieties about privacy and security will be emphatically at the crux of the new digital future that we have been steadily, sometimes inadvertently, creating for ourselves. Rise of the Machines makes a singular and significant contribution to the advancement of our clearer understanding of that future – and of the past that has generated it. PRAISE FOR THOMAS RID ‘A fascinating survey of the oscillating hopes and fears expressed by the cybernetic mythos.’ The Wall Street Journal ‘Thoughtful, enlightening … a mélange of history, media studies, political science, military engineering and, yes, etymology … A meticulous yet startling alternate history of computation.’ New Scientist
A Deployment Guide for IBM Spectrum Scale Unified File and Object Storage
Author: Dean Hildebrand
Publisher: IBM Redbooks
ISBN: 0738455997
Category: Computers
Languages: en
Pages: 74
Book Description
Because of the explosion of unstructured data that is generated by individuals and organizations, a new storage paradigm that is called object storage has been developed. Object storage stores data in a flat namespace that scales to trillions of objects. The design of object storage also simplifies how users access data, supporting new types of applications and allowing users to access data by using various methods, including mobile devices and web applications. Data distribution and management are also simplified, allowing greater collaboration across the globe. OpenStack Swift is an emerging open source object storage software platform that is widely used for cloud storage. IBM® Spectrum Scale, which is based on IBM General Parallel File System (IBM GPFS™) technology, is a high-performance and proven product that is used to store data for thousands of mission-critical commercial installations worldwide. Throughout this IBM Redpaper™ publication, IBM Spectrum™ Scale is used to refer to GPFS. The examples in this paper are based on IBM Spectrum Scale™ V4.2.2. IBM Spectrum Scale also automates common storage management tasks, such as tiering and archiving at scale. Together, IBM Spectrum Scale and OpenStack Swift provide an enterprise-class object storage solution that efficiently stores, distributes, and retains critical data. This paper provides instructions about setting up and configuring IBM Spectrum Scale Object Storage that is based on OpenStack Swift. It also provides an initial set of preferred practices that ensure optimal performance and reliability. This paper is intended for administrators who are familiar with IBM Spectrum Scale and OpenStack Swift components.