Transaction Processing on Modern Hardware
Author: Mohammad Sadoghi
Publisher: Springer Nature
ISBN: 3031018702
Category : Computers
Languages : en
Pages : 122
Book Description
The last decade has brought groundbreaking developments in transaction processing. This resurgence of an otherwise mature research area stems from the diminishing cost per GB of DRAM, which allows many transaction processing workloads to be entirely memory-resident. This shift demanded a pause to fundamentally rethink the architecture of database systems. The data storage lexicon has now expanded beyond spinning disks and RAID levels to include the cache hierarchy, memory consistency models, cache coherence and write invalidation costs, NUMA regions, and coherence domains. New memory technologies promise fast non-volatile storage and expose uncharted trade-offs for transactional durability, such as exploiting byte-addressable hot and cold storage through persistent programming that promotes simpler recovery protocols. In the meantime, plateauing single-threaded processor performance has brought massive concurrency within a single node, first in the form of multi-core, and now with many-core and heterogeneous processors. The exciting possibility of reshaping the storage, transaction, logging, and recovery layers of next-generation systems on emerging hardware has prompted the database research community to vigorously debate the trade-offs between specialized kernels that narrowly focus on transaction processing performance and designs that permit transactionally consistent data accesses from decision support and analytical workloads. In this book, we aim to classify and distill the new body of work on transaction processing that has surfaced in the last decade to guide researchers and practitioners through this intricate research subject.
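To make the "persistent programming" idea above concrete, here is a minimal, hypothetical C sketch of durably updating a record on byte-addressable non-volatile memory: the payload is flushed and fenced before a validity flag is set, so recovery never observes a torn record. The x86 intrinsics, the record layout, and the function names are illustrative assumptions, not drawn from the book.

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64

/* Flush every cache line covering [addr, addr+len), then fence so the
 * flushes reach the persistence domain before anything that follows. */
static void persist_range(const void *addr, size_t len) {
    uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    for (; p < (uintptr_t)addr + len; p += CACHE_LINE)
        _mm_clflush((const void *)p);
    _mm_sfence();
}

/* Hypothetical record living in a mapped NVM region (mapping not shown). */
typedef struct { uint64_t value; uint64_t valid; } pmem_record_t;

/* Persist the payload first, then flip and persist the valid flag; a crash
 * between the two steps leaves the old, consistent state visible, which is
 * why this style can promote simpler recovery protocols than a full log. */
void durable_update(pmem_record_t *rec, uint64_t new_value) {
    rec->value = new_value;
    persist_range(&rec->value, sizeof rec->value);
    rec->valid = 1;
    persist_range(&rec->valid, sizeof rec->valid);
}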
Databases on Modern Hardware
Author: Anastasia Ailamaki
Publisher: Springer Nature
ISBN: 3031018583
Category : Computers
Languages : en
Pages : 101
Book Description
Data management systems enable various influential applications, from high-performance online services (e.g., social networks like Twitter and Facebook or financial markets) to big data analytics (e.g., scientific exploration, sensor networks, business intelligence). As a result, data management systems have been one of the main drivers of innovation in the database and computer architecture communities for several decades. Recent hardware trends require software to take advantage of the abundant parallelism in modern and future hardware. The traditional design of data management systems, however, faces inherent scalability problems due to its tightly coupled components. In addition, it cannot exploit the full capability of the aggressive micro-architectural features of modern processors. As a result, today's most commonly used server types remain largely underutilized, leading to a huge waste of hardware resources and energy. In this book, we shed light on the challenges of running a DBMS on modern multicore hardware. We divide the material into two dimensions of scalability: implicit/vertical and explicit/horizontal. The first part of the book focuses on the vertical dimension: it describes the instruction- and data-level parallelism opportunities within a core, from both the hardware and the software side. In addition, it examines the sources of under-utilization in a modern processor and presents insights and hardware/software techniques to better exploit a processor's microarchitectural resources by improving cache locality at the right level of the memory hierarchy. The second part focuses on the horizontal dimension, i.e., scalability bottlenecks of database applications at the level of multicore and multisocket multicore architectures. It first presents a systematic way of eliminating such bottlenecks in online transaction processing workloads, based on minimizing unbounded communication, and shows several techniques that minimize bottlenecks in major components of database management systems. Then, it demonstrates the data and work sharing opportunities for analytical workloads, and reviews advanced scheduling mechanisms that are aware of non-uniform memory accesses and alleviate bandwidth saturation.
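As a small illustration of the cache-locality theme in the first part of the book, the following hypothetical C sketch contrasts scanning one attribute in a row-wise (array-of-structs) layout against a columnar (struct-of-arrays) layout; all names and sizes are invented for the example.

#include <stddef.h>
#include <stdint.h>

/* Row-wise layout: each 64-byte record carries one attribute we care
 * about plus payload we drag through the cache anyway. */
struct row { uint64_t key; uint64_t amount; char payload[48]; };

uint64_t sum_amounts_rowwise(const struct row *t, size_t n) {
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += t[i].amount;   /* one useful 8-byte field per 64-byte row */
    return sum;
}

/* Columnar layout: the attribute is densely packed, so every loaded
 * 64-byte cache line contributes eight useful values to the scan. */
uint64_t sum_amounts_columnar(const uint64_t *amounts, size_t n) {
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += amounts[i];
    return sum;
}

On typical modern processors the columnar scan tends to run several times faster, since the row-wise version wastes most of each cache line, which is exactly the kind of micro-architectural under-utilization the book examines.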
Transaction Processing on Modern Hardware
Author: Mohammad Sadoghi
Publisher: Morgan & Claypool Publishers
ISBN: 1681735008
Category : Computers
Languages : en
Pages : 140
Book Description
The last decade has brought groundbreaking developments in transaction processing. This resurgence of an otherwise mature research area stems from the diminishing cost per GB of DRAM, which allows many transaction processing workloads to be entirely memory-resident. This shift demanded a pause to fundamentally rethink the architecture of database systems. The data storage lexicon has now expanded beyond spinning disks and RAID levels to include the cache hierarchy, memory consistency models, cache coherence and write invalidation costs, NUMA regions, and coherence domains. New memory technologies promise fast non-volatile storage and expose uncharted trade-offs for transactional durability, such as exploiting byte-addressable hot and cold storage through persistent programming that promotes simpler recovery protocols. In the meantime, plateauing single-threaded processor performance has brought massive concurrency within a single node, first in the form of multi-core, and now with many-core and heterogeneous processors. The exciting possibility of reshaping the storage, transaction, logging, and recovery layers of next-generation systems on emerging hardware has prompted the database research community to vigorously debate the trade-offs between specialized kernels that narrowly focus on transaction processing performance and designs that permit transactionally consistent data accesses from decision support and analytical workloads. In this book, we aim to classify and distill the new body of work on transaction processing that has surfaced in the last decade to guide researchers and practitioners through this intricate research subject.
Transaction Processing
Author: Jim Gray
Publisher: Elsevier
ISBN: 0080519555
Category : Computers
Languages : en
Pages : 1122
Book Description
The key to client/server computing. Transaction processing techniques are deeply ingrained in the fields of databases and operating systems and are used to monitor, control, and update information in modern computer systems. This book will show you how large, distributed, heterogeneous computer systems can be made to work reliably. Using transactions as a unifying conceptual framework, the authors show how to build high-performance distributed systems and high-availability applications with finite budgets and risk. The authors provide detailed explanations of why various problems occur as well as practical, usable techniques for their solution. Throughout the book, examples and techniques are drawn from the most successful commercial and research systems. Extensive use of compilable C code fragments demonstrates the many transaction processing algorithms presented in the book. The book will be valuable to anyone interested in implementing distributed systems or client/server architectures.
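In the spirit of the book's compilable C fragments (though not taken from it), here is a minimal sketch of the write-ahead-logging rule that a commit record must be forced to stable storage before the commit is acknowledged; the file name, record layout, and error handling are simplified assumptions.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

/* Hypothetical physical log record: before/after images of one page field. */
typedef struct { long tid; long page; long before, after; } log_rec_t;

/* Append the record and fsync: the "force the log at commit" rule. */
int log_commit(int log_fd, const log_rec_t *rec) {
    if (write(log_fd, rec, sizeof *rec) != (ssize_t)sizeof *rec) return -1;
    if (fsync(log_fd) != 0) return -1;   /* durable before we acknowledge */
    return 0;
}

int main(void) {
    int fd = open("txn.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return 1;
    log_rec_t rec = { .tid = 42, .page = 7, .before = 100, .after = 150 };
    if (log_commit(fd, &rec) == 0)
        puts("transaction 42 committed");  /* only now safe to tell the client */
    close(fd);
    return 0;
}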
Encyclopedia of Information Science and Technology, Fifth Edition
Author: Mehdi Khosrow-Pour, D.B.A.
Publisher: IGI Global
ISBN: 1799834808
Category : Computers
Languages : en
Pages : 1966
Book Description
The rise of intelligence and computation within technology has created an eruption of potential applications in numerous professional industries. Techniques such as data analysis, cloud computing, machine learning, and others have altered the traditional processes of various disciplines, including healthcare, economics, transportation, and politics. Information technology is beginning to uncover opportunities for experts in these fields that they are not yet aware of, and exposure to specific instances in which these tools are being implemented will help other specialists utilize them with the appropriate discretion, safety, and awareness. Given the diversity of uses and practices across the globe, the fifth edition of the Encyclopedia of Information Science and Technology series continues the enduring legacy set forth by its predecessors as a premier reference that contributes the most cutting-edge concepts and methodologies to the research community. The Encyclopedia of Information Science and Technology, Fifth Edition is a three-volume set that includes 136 original and previously unpublished research chapters presenting multidisciplinary research and expert insights into new methods and processes for understanding modern technological tools and their applications, as well as emerging theories and ethical controversies surrounding the field of information science. Highlighting a wide range of topics such as natural language processing, decision support systems, and electronic government, this book offers strategies for implementing smart devices and analytics into various professional disciplines. The techniques discussed in this publication are ideal for IT professionals, developers, computer scientists, practitioners, managers, policymakers, engineers, data analysts, and programmers seeking to understand the latest developments within this field and looking to apply new tools and policies in their practice. Additionally, academicians, researchers, and students in fields that include but are not limited to software engineering, cybersecurity, information technology, media and communications, urban planning, computer science, healthcare, economics, environmental science, data management, and political science will benefit from the extensive knowledge compiled within this publication.
Handling Priority Inversion in Time-Constrained Distributed Databases
Author: Udai Shanker
Publisher: IGI Global
ISBN: 1799824934
Category : Computers
Languages : en
Pages : 338
Book Description
In the computer science industry, high performance remains the focal point of software engineering. This quest has made current systems exceedingly complex, as practitioners strive to discover novel approaches to increase the capabilities of modern computer structures. A prevalent area of research in recent years is scalable transaction processing and its usage in large databases and cloud computing. Despite its popularity, significant research is still needed to understand scalability and performance within distributed databases. Handling Priority Inversion in Time-Constrained Distributed Databases provides emerging research exploring the theoretical and practical aspects of database transaction processing frameworks and improving their performance using modern technologies and algorithms. Featuring coverage of a broad range of topics such as consistency mechanisms, real-time systems, and replica management, this book is ideally designed for IT professionals, computing specialists, developers, researchers, data engineers, executives, academics, and students seeking research on current trends and developments in distributed computing and databases.
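As a concrete illustration of the problem in the title, the following C sketch shows one classic remedy, priority inheritance, using the POSIX PTHREAD_PRIO_INHERIT mutex protocol: a low-priority thread holding the lock temporarily inherits the priority of a high-priority waiter, so medium-priority work cannot starve it. Real-time scheduling setup is omitted and platform support varies; this is a generic example, not code from the book.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t data_lock;

int init_pi_lock(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* Waiters lend their priority to whichever thread currently holds
     * the lock, bounding the priority-inversion window. */
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
        fprintf(stderr, "priority inheritance not supported here\n");
        return -1;
    }
    return pthread_mutex_init(&data_lock, &attr);
}

After init_pi_lock() succeeds, ordinary pthread_mutex_lock(&data_lock) calls behave as usual, with inheritance applied automatically while the lock is contended.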
Encyclopedia of Organizational Knowledge, Administration, and Technology
Author: Mehdi Khosrow-Pour, D.B.A.
Publisher: IGI Global
ISBN: 1799834743
Category : Business & Economics
Languages : en
Pages : 2734
Book Description
For any organization to be successful, it must operate in such a manner that knowledge and information, human resources, and technology are continually taken into consideration and managed effectively. Business concepts are always present regardless of the field or industry – in education, government, healthcare, not-for-profit, engineering, hospitality/tourism, among others. Maintaining organizational awareness and a strategic frame of mind is critical to meeting goals, gaining competitive advantage, and ultimately ensuring sustainability. The Encyclopedia of Organizational Knowledge, Administration, and Technology is an inaugural five-volume publication that offers 193 completely new and previously unpublished articles authored by leading experts on the latest concepts, issues, challenges, innovations, and opportunities covering all aspects of modern organizations. Moreover, it comprises content that highlights major breakthroughs, discoveries, and authoritative research results as they pertain to all aspects of organizational growth and development, including methodologies that can help companies thrive and analytical tools that assess an organization’s internal health and performance. Insights are offered in key topics such as organizational structure, strategic leadership, information technology management, and business analytics, among others. The knowledge compiled in this publication is intended for entrepreneurs, managers, executives, investors, economic analysts, computer engineers, software programmers, human resource departments, and other industry professionals seeking to understand the latest tools to emerge from this field and looking to incorporate them in their practice. Additionally, academicians, researchers, and students in fields that include but are not limited to business, management science, organizational development, entrepreneurship, sociology, corporate psychology, computer science, and information technology will benefit from the research compiled within this publication.
Department of Defense Appropriations for 1991: Automatic data processing programs
Author: United States. Congress. House. Committee on Appropriations. Subcommittee on Department of Defense
Publisher:
ISBN:
Category : Government publications
Languages : en
Pages : 720
Book Description
Fault-Tolerant Distributed Transactions on Blockchain
Author: Suyash Gupta
Publisher: Springer Nature
ISBN: 303101877X
Category : Computers
Languages : en
Pages : 248
Book Description
Since the introduction of Bitcoin—the first widespread application driven by blockchain—the interest of the public and private sectors in blockchain has skyrocketed. In recent years, blockchain-based fabrics have been used to address challenges in diverse fields such as trade, food production, property rights, identity management, aid delivery, health care, and fraud prevention. This widespread interest follows from the fundamental concepts on which blockchains are built, which together embed the notion of trust:
1. Blockchains provide data transparency. Data in a blockchain is stored in the form of a ledger, which contains an ordered history of all the transactions. This facilitates oversight and auditing.
2. Blockchains ensure data integrity by using strong cryptographic primitives. This guarantees that transactions accepted by the blockchain are authenticated by their issuers, are immutable, and cannot be repudiated. This ensures accountability.
3. Blockchains are decentralized, democratic, and resilient. They use consensus-based replication to decentralize the ledger among many independent participants, so a blockchain can operate in a fully decentralized fashion without requiring trust in a single authority. Additions to the chain are performed by consensus, in which all participants have a democratic voice in maintaining the integrity of the blockchain. Thanks to replication and consensus, blockchains are also highly resilient to malicious attacks even when a significant portion of the participants are malicious, which further increases the opportunity for fairness and equity through democratization.
These fundamental concepts and the technologies behind them—a generic ledger-based data model, cryptographically ensured data integrity, and consensus-based replication—prove to be a powerful and inspiring combination, a catalyst to promote computational trust. In this book, we present an in-depth study of blockchain, unraveling its revolutionary promise to instill computational trust in society, all carefully tailored to a broad audience including students, researchers, and practitioners. We offer a comprehensive overview of the theoretical limitations and practical usability of consensus protocols while examining the diverse landscape of how blockchains are manifested in their permissioned and permissionless forms.
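To ground point 2 above, here is a toy C sketch of a hash-chained ledger: each block commits to its predecessor's hash, so altering any earlier transaction breaks every later link. The FNV-1a hash stands in for a real cryptographic hash such as SHA-256 purely to keep the sketch short and offers no actual security; all names are invented.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint64_t prev_hash; char txn[64]; uint64_t hash; } block_t;

/* FNV-1a, seeded with the previous hash so the chain links compose. */
static uint64_t fnv1a(const void *data, size_t len, uint64_t seed) {
    const unsigned char *p = data;
    uint64_t h = seed ^ 14695981039346656037ULL;
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

/* Append a block whose hash covers both the payload and the previous hash. */
void append_block(block_t *chain, size_t i, const char *txn) {
    chain[i].prev_hash = i ? chain[i - 1].hash : 0;
    snprintf(chain[i].txn, sizeof chain[i].txn, "%s", txn);
    chain[i].hash = fnv1a(chain[i].txn, strlen(chain[i].txn),
                          chain[i].prev_hash);
}

/* Recompute every link; tampering anywhere upstream makes this fail. */
int verify_chain(const block_t *chain, size_t n) {
    for (size_t i = 0; i < n; i++) {
        uint64_t prev = i ? chain[i - 1].hash : 0;
        if (chain[i].prev_hash != prev ||
            chain[i].hash != fnv1a(chain[i].txn, strlen(chain[i].txn), prev))
            return 0;
    }
    return 1;
}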
On Transactional Concurrency Control
Author: Goetz Graefe
Publisher: Springer Nature
ISBN: 3031018737
Category : Computers
Languages : en
Pages : 383
Book Description
This book contains a number of chapters on transactional database concurrency control. The volume's entire sequence of chapters can be summarized in two sentences. First, traditional locking techniques can be improved in multiple dimensions, notably in lock scopes (sizes), lock modes (increment, decrement, and more), lock durations (late acquisition, early release), and lock acquisition sequence (to avoid deadlocks). Second, even if some of these improvements can be transferred to optimistic concurrency control, notably a fine granularity of concurrency control with serializable transaction isolation including phantom protection, pessimistic concurrency control is categorically superior to optimistic concurrency control, i.e., independent of application, workload, deployment, hardware, and software implementation.
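As a minimal illustration of the "lock acquisition sequence" point, the following C sketch acquires a pair of locks in one canonical order (here, by address) regardless of the order in which a transaction touches the records, so no cycle of waiters, and hence no deadlock, can form; a real system would order by lock identifiers instead. This is an invented example, not the book's code.

#include <pthread.h>
#include <stdint.h>

/* Acquire two distinct record locks in a single global order, so two
 * transactions locking the same pair can never wait on each other in a cycle. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) { pthread_mutex_t *t = a; a = b; b = t; }
    pthread_mutex_lock(a);   /* always the "smaller" lock first */
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}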