In-Memory Analytics with Apache Arrow
Author: Matthew Topol
Publisher: Packt Publishing Ltd
ISBN: 183546968X
Category : Computers
Languages : en
Pages : 406
Book Description
Harness the power of Apache Arrow to optimize tabular data processing and develop robust, high-performance data systems with its standardized, language-independent columnar memory format.
Key Features
- Explore Apache Arrow's data types and integration with pandas, Polars, and Parquet
- Work with Arrow libraries such as Flight SQL, the Acero compute engine, and the Dataset APIs for tabular data
- Enhance and accelerate machine learning data pipelines using Apache Arrow and its subprojects
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description
Apache Arrow is an open source, columnar, in-memory data format designed for efficient data processing and analytics. This book harnesses the author's 15 years of experience to show you a standardized way to work with tabular data across various programming languages and environments, enabling high-performance data processing and exchange. This updated second edition gives you an overview of the Arrow format, highlighting its versatility and benefits through real-world use cases. It guides you through enhancing data science workflows, optimizing performance with Apache Parquet and Spark, and ensuring seamless data translation. You'll explore data interchange and storage formats, and Arrow's relationships with Parquet, Protocol Buffers, FlatBuffers, JSON, and CSV. You'll also discover Apache Arrow subprojects, including Flight SQL, Database Connectivity (ADBC), and nanoarrow. You'll learn to streamline machine learning workflows, use the Arrow Dataset APIs, and integrate with popular analytical data systems such as Snowflake, Dremio, and DuckDB. The later chapters provide real-world examples and case studies of products powered by Apache Arrow, offering practical insights into its applications. By the end of this book, you'll have all the building blocks to create efficient and powerful analytical services and utilities with Apache Arrow.
What you will learn
- Use Apache Arrow libraries to access data files, both locally and in the cloud
- Understand the zero-copy elements of the Apache Arrow format
- Improve the read performance of data pipelines by memory-mapping Arrow files
- Produce and consume Apache Arrow data efficiently by sharing memory with the C API
- Leverage the Arrow compute engine, Acero, to perform complex operations
- Create Arrow Flight servers and clients for transferring data quickly
- Build the Arrow libraries locally and contribute to the community
Who this book is for
This book is for developers, data engineers, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. Whether you're building utilities for data analytics and query engines or full pipelines for tabular data, this book can help you regardless of your preferred programming language. A basic understanding of data analysis concepts is helpful, but not required. Code examples are provided using C++, Python, and Go throughout the book.
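To make the memory-mapping point above concrete, here is a minimal, hypothetical sketch using the pyarrow library: it writes a small table to an Arrow IPC file and reads it back through a memory map, so the resulting record batches reference the mapped buffers rather than copies. The file name and column names are invented for illustration.

```python
import pyarrow as pa
import pyarrow.ipc as ipc

# Write a small table to an Arrow IPC file (illustrative file name).
table = pa.table({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
with pa.OSFile("example.arrow", "wb") as sink:
    with ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

# Memory-map the file and read it back; the table's buffers point into
# the mapped region instead of freshly allocated memory.
with pa.memory_map("example.arrow", "r") as source:
    mapped = ipc.open_file(source).read_all()

print(mapped.num_rows, mapped.schema)
```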
In-Memory Analytics with Apache Arrow
Author: Matthew Topol
Publisher: Packt Publishing Ltd
ISBN: 1801073430
Category : Computers
Languages : en
Pages : 392
Book Description
Process tabular data and build high-performance query engines on modern CPUs and GPUs using Apache Arrow, a standardized, language-independent memory format, for optimal performance.
Key Features
- Learn about Apache Arrow's data types and interoperability with pandas and Parquet
- Work with the Apache Arrow Flight RPC, Compute, and Dataset APIs to produce and consume tabular data
- Reviewed, contributed to, and supported by Dremio, the co-creator of Apache Arrow
Book Description
Apache Arrow is designed to accelerate analytics and allow the exchange of data across big data systems easily. In-Memory Analytics with Apache Arrow begins with a quick overview of the Apache Arrow format before moving on to helping you understand Arrow's versatility and benefits as you walk through a variety of real-world use cases. You'll cover key tasks such as enhancing data science workflows with Arrow, using Arrow and Apache Parquet with Apache Spark and Jupyter for better performance and hassle-free data translation, and working with Perspective, an open source interactive graphical and tabular analysis tool for browsers. As you advance, you'll explore the different data interchange and storage formats and become well-versed in the relationships between Arrow, Parquet, Feather, Protobuf, FlatBuffers, JSON, and CSV. In addition to understanding the basic structure of the Arrow Flight and Flight SQL protocols, you'll learn about Dremio's use of Apache Arrow to enhance SQL analytics and discover how Arrow can be used in web-based browser apps. Finally, you'll get to grips with the upcoming features of Arrow to help you stay ahead of the curve. By the end of this book, you will have all the building blocks to create useful, efficient, and powerful analytical services and utilities with Apache Arrow.
What you will learn
- Use Apache Arrow libraries to access data files both locally and in the cloud
- Understand the zero-copy elements of the Apache Arrow format
- Improve read performance by memory-mapping files with Apache Arrow
- Produce or consume Apache Arrow data efficiently using a C API
- Use the Apache Arrow Compute APIs to perform complex operations
- Create Arrow Flight servers and clients for transferring data quickly
- Build the Arrow libraries locally and contribute back to the community
Who this book is for
This book is for developers, data analysts, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. It will also be useful for engineers who are building utilities for data analytics and query engines, or otherwise working with tabular data, regardless of the programming language. Some familiarity with basic concepts of data analysis will help you get the most out of this book but isn't required. Code examples are provided in the C++, Go, and Python programming languages.
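As a small taste of the Compute APIs named in the learning outcomes, the sketch below (a minimal example assuming pyarrow is installed; the column names and values are invented) filters an Arrow table with a compute kernel and aggregates a column without leaving Arrow's columnar memory.

```python
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"city": ["NYC", "SF", "NYC"], "sales": [100, 250, 175]})

# Build a boolean mask with a compute kernel and filter the table with it.
nyc_rows = table.filter(pc.equal(table["city"], "NYC"))

# Aggregate the filtered column; the result is an Arrow scalar.
total = pc.sum(nyc_rows["sales"])
print(total.as_py())  # 275
```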
Getting Started with DuckDB
Author: Simon Aubury
Publisher: Packt Publishing Ltd
ISBN: 1803232536
Category : Computers
Languages : en
Pages : 382
Book Description
Analyze and transform data efficiently with DuckDB, a versatile, modern, in-process SQL database.
Key Features
- Use DuckDB to rapidly load, transform, and query data across a range of sources and formats
- Gain practical experience using SQL, Python, and R to effectively analyze data
- Learn how open source tools and cloud services in the broader data ecosystem complement DuckDB's versatile capabilities
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description
DuckDB is a fast in-process analytical database, and Getting Started with DuckDB offers a practical overview of its usage. You'll learn to load, transform, and query various data formats, including CSV, JSON, and Parquet. The book covers DuckDB's optimizations, SQL enhancements, and extensions for specialized applications. Working through examples in SQL, Python, and R, you'll analyze public datasets and discover tools that enhance DuckDB workflows. This guide suits both experienced and new data practitioners, quickly equipping you to apply DuckDB's capabilities in analytical projects. You'll gain proficiency in using DuckDB for diverse tasks, enabling effective integration into your data workflows.
What you will learn
- Understand the properties and applications of a columnar in-process database
- Use SQL to load, transform, and query a range of data formats
- Discover DuckDB's rich extensions and learn how to apply them
- Use nested data types to model semi-structured data, and extract and model JSON data
- Integrate DuckDB into your Python and R analytical workflows
- Effectively leverage DuckDB's convenient SQL enhancements
- Explore the wider ecosystem and pathways for building DuckDB-powered data applications
Who this book is for
If you're interested in expanding your analytical toolkit, this book is for you. It will be particularly valuable for data analysts wanting to rapidly explore and query complex data, data and software engineers looking for a lean and versatile data processing tool, and data scientists needing a scalable data manipulation library that integrates seamlessly with Python and R. You will get the most from this book if you have some familiarity with SQL and foundational database concepts, as well as exposure to a programming language such as Python or R.
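As an illustration of the workflow described above, the following minimal sketch (assuming the duckdb Python package is installed; 'products.csv' and its columns are hypothetical) queries a CSV file directly from an in-process DuckDB connection and returns the result as a pandas DataFrame. A Parquet or JSON input could be scanned the same way by swapping the reader function in the FROM clause.

```python
import duckdb

# An in-process, in-memory database: no server to install or manage.
con = duckdb.connect()

# DuckDB scans the CSV file directly and infers its schema.
result = con.execute("""
    SELECT category, AVG(price) AS avg_price
    FROM read_csv_auto('products.csv')
    GROUP BY category
    ORDER BY avg_price DESC
""").fetchdf()

print(result)
```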
Delta Lake: The Definitive Guide
Author: Denny Lee
Publisher: "O'Reilly Media, Inc."
ISBN: 1098151909
Category : Computers
Languages : en
Pages : 391
Book Description
Ready to simplify the process of building data lakehouses and data pipelines at scale? In this practical guide, learn how Delta Lake is helping data engineers, data scientists, and data analysts overcome key data reliability challenges with modern data engineering and management techniques. Authors Denny Lee, Tristen Wentling, Scott Haines, and Prashanth Babu (with contributions from Delta Lake maintainer R. Tyler Croy) share expert insights on all things Delta Lake, including how to run batch and streaming jobs concurrently and accelerate the usability of your data. You'll also uncover how ACID transactions bring reliability to data lakehouses at scale. This book helps you:
- Understand key data reliability challenges and how Delta Lake solves them
- Explain the critical role of Delta transaction logs as a single source of truth
- Learn the Delta Lake ecosystem with technologies like Apache Flink, Kafka, and Trino
- Architect data lakehouses with the medallion architecture
- Optimize Delta Lake performance with features like deletion vectors and liquid clustering
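For a feel of the transaction-log behavior described above, here is a minimal, hypothetical sketch using the deltalake Python package (the table path and columns are invented); each write appends a commit to the Delta transaction log, which serves as the table's single source of truth.

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake

# Create a Delta table from a small DataFrame; this writes Parquet files
# plus a _delta_log directory containing the transaction log.
df = pd.DataFrame({"order_id": [1, 2, 3], "status": ["new", "new", "done"]})
write_deltalake("./orders_delta", df)

# Append more rows; the log records a new version of the table.
write_deltalake(
    "./orders_delta",
    pd.DataFrame({"order_id": [4], "status": ["new"]}),
    mode="append",
)

dt = DeltaTable("./orders_delta")
print(dt.version())   # 1: the initial write was version 0, the append is version 1
print(dt.to_pandas())
```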
Publisher: "O'Reilly Media, Inc."
ISBN: 1098151909
Category : Computers
Languages : en
Pages : 391
Book Description
Ready to simplify the process of building data lakehouses and data pipelines at scale? In this practical guide, learn how Delta Lake is helping data engineers, data scientists, and data analysts overcome key data reliability challenges with modern data engineering and management techniques. Authors Denny Lee, Tristen Wentling, Scott Haines, and Prashanth Babu (with contributions from Delta Lake maintainer R. Tyler Croy) share expert insights on all things Delta Lake--including how to run batch and streaming jobs concurrently and accelerate the usability of your data. You'll also uncover how ACID transactions bring reliability to data lakehouses at scale. This book helps you: Understand key data reliability challenges and how Delta Lake solves them Explain the critical role of Delta transaction logs as a single source of truth Learn the Delta Lake ecosystem with technologies like Apache Flink, Kafka, and Trino Architect data lakehouses with the medallion architecture Optimize Delta Lake performance with features like deletion vectors and liquid clustering
Kickstart Modern Android Development with Jetpack and Kotlin
Author: Catalin Ghita
Publisher: Packt Publishing Ltd
ISBN: 1801818215
Category : Computers
Languages : en
Pages : 472
Book Description
Explore modern Android development in Kotlin 1.6.10 with this condensed hands-on guide to building reliable apps using libraries such as Compose, ViewModel, Hilt, Retrofit, Flow, and more.
Key Features
- Explore Jetpack libraries and other modern technologies for Android development
- Improve the architectural design of your Android apps
- Enhance the quality of your Android projects' code bases and applications using the latest libraries
Book Description
With Jetpack libraries, you can build and design high-quality, robust Android apps that have an improved architecture and work consistently across different versions and devices. This book will help you understand how Jetpack allows developers to follow best practices and architectural patterns when building Android apps while also eliminating boilerplate code. Developers working with Android and Kotlin will be able to put their knowledge to work with this condensed practical guide to building apps with the most popular Jetpack libraries, including Jetpack Compose, ViewModel, Hilt, Room, Paging, Lifecycle, and Navigation. You'll get to grips with relevant libraries and architectural patterns, including popular libraries in the Android ecosystem such as Retrofit, Coroutines, and Flow, while building modern applications with real-world data. By the end of this Android app development book, you'll have learned how to leverage Jetpack libraries and your knowledge of architectural concepts for building, designing, and testing robust Android applications for various use cases.
What you will learn
- Integrate popular Jetpack libraries such as Compose, ViewModel, Hilt, and Navigation into real Android apps with Kotlin
- Apply modern app architecture concepts such as MVVM, dependency injection, and clean architecture
- Explore Android libraries such as Retrofit, Coroutines, and Flow
- Integrate Compose with the rest of the Jetpack libraries or other popular Android libraries
- Work with other Jetpack libraries such as Paging and Room while integrating a real REST API that supports pagination
- Test Compose UI and the application logic through unit tests
Who this book is for
This book is for junior and intermediate-level Android developers looking to level up their Android development skills to develop high-quality apps using Jetpack libraries and other cutting-edge technologies. Beginners with knowledge of Android development fundamentals will also find this book useful. Familiarity with Kotlin is assumed.
In-Memory Analytics with Apache Arrow - Second Edition
Author: Matthew Topol
Publisher: Packt Publishing
ISBN: 9781835461228
Category : Computers
Languages : en
Pages : 0
Book Description
Harness the power of Apache Arrow to optimize tabular data processing and develop robust, high-performance data systems with its standardized, language-independent columnar memory format.
Key Features:
- Explore Apache Arrow's data types and integration with pandas, Polars, and Parquet
- Work with Arrow libraries such as Flight SQL, the Acero compute engine, and the Dataset APIs for tabular data
- Enhance and accelerate machine learning data pipelines using Apache Arrow and its subprojects
- Purchase of the print or Kindle book includes a free PDF eBook
Book Description:
Apache Arrow is an open source, columnar, in-memory data format designed for efficient data processing and analytics. This book harnesses the author's 15 years of experience to show you a standardized way to work with tabular data across various programming languages and environments, enabling high-performance data processing and exchange. This updated second edition gives you an overview of the Arrow format, highlighting its versatility and benefits through real-world use cases. It guides you through enhancing data science workflows, optimizing performance with Apache Parquet and Spark, and ensuring seamless data translation. You'll explore data interchange and storage formats, and Arrow's relationships with Parquet, Protocol Buffers, FlatBuffers, JSON, and CSV. You'll also discover Apache Arrow subprojects, including Flight SQL, Database Connectivity (ADBC), and nanoarrow. You'll learn to streamline machine learning workflows, use the Arrow Dataset APIs, and integrate with popular analytical data systems such as Snowflake, Dremio, and DuckDB. The later chapters provide real-world examples and case studies of products powered by Apache Arrow, offering practical insights into its applications. By the end of this book, you'll have all the building blocks to create efficient and powerful analytical services and utilities with Apache Arrow.
What You Will Learn:
- Use Apache Arrow libraries to access data files, both locally and in the cloud
- Understand the zero-copy elements of the Apache Arrow format
- Improve the read performance of data pipelines by memory-mapping Arrow files
- Produce and consume Apache Arrow data efficiently by sharing memory with the C API
- Leverage the Arrow compute engine, Acero, to perform complex operations
- Create Arrow Flight servers and clients for transferring data quickly
- Build the Arrow libraries locally and contribute to the community
Who this book is for:
This book is for developers, data engineers, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. Whether you're building utilities for data analytics and query engines or full pipelines for tabular data, this book can help you regardless of your preferred programming language. A basic understanding of data analysis concepts is helpful, but not required. Code examples are provided using C++, Python, and Go throughout the book.
Table of Contents
- Getting Started with Apache Arrow
- Working with Key Arrow Specifications
- Format and Memory Handling
- Crossing the Language Barrier with the Arrow C Data API
- Acero: A Streaming Arrow Execution Engine
- Using the Arrow Datasets API
- Exploring Apache Arrow Flight RPC
- Understanding Arrow Database Connectivity (ADBC)
- Using Arrow with Machine Learning Workflows
- Powered by Apache Arrow
- How to Leave Your Mark on Arrow
- Future Development and Plans
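To illustrate the Arrow Datasets API named in the table of contents, here is a minimal, hypothetical sketch with pyarrow (the directory name and column names are invented): a dataset is pointed at a folder of Parquet files, and a column projection plus a row filter are pushed down into the scan.

```python
import pyarrow.dataset as ds

# Treat a directory of Parquet files as one logical dataset.
dataset = ds.dataset("sales_data/", format="parquet")

# Only the requested columns are read, and the filter is applied during the scan.
table = dataset.to_table(
    columns=["region", "amount"],
    filter=ds.field("amount") > 1000,
)
print(table.num_rows)
```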
Cleaning Data for Effective Data Science
Author: David Mertz
Publisher: Packt Publishing Ltd
ISBN: 1801074402
Category : Mathematics
Languages : en
Pages : 499
Book Description
Think about your data intelligently and ask the right questions.
Key Features
- Master data cleaning techniques necessary to perform real-world data science and machine learning tasks
- Spot common problems with dirty data and develop flexible solutions from first principles
- Test and refine your newly acquired skills through detailed exercises at the end of each chapter
Book Description
Data cleaning is the all-important first step to successful data science, data analysis, and machine learning. If you work with any kind of data, this book is your go-to resource, arming you with the insights and heuristics experienced data scientists had to learn the hard way. In a light-hearted and engaging exploration of different tools, techniques, and datasets real and fictitious, Python veteran David Mertz teaches you the ins and outs of data preparation and the essential questions you should be asking of every piece of data you work with. Using a mixture of Python, R, and common command-line tools, Cleaning Data for Effective Data Science follows the data cleaning pipeline from start to end, focusing on helping you understand the principles underlying each step of the process. You'll look at data ingestion of a vast range of tabular, hierarchical, and other data formats, impute missing values, detect unreliable data and statistical anomalies, and generate synthetic features. The long-form exercises at the end of each chapter let you get hands-on with the skills you've acquired along the way, also providing a valuable resource for academic courses.
What you will learn
- Ingest and work with common data formats like JSON, CSV, SQL and NoSQL databases, PDF, and binary serialized data structures
- Understand how and why we use tools such as pandas, SciPy, scikit-learn, Tidyverse, and Bash
- Apply useful rules and heuristics for assessing data quality and detecting bias, like Benford's law and the 68-95-99.7 rule
- Identify and handle unreliable data and outliers, examining z-score and other statistical properties
- Impute sensible values into missing data and use sampling to fix imbalances
- Use dimensionality reduction, quantization, one-hot encoding, and other feature engineering techniques to draw out patterns in your data
- Work carefully with time series data, performing de-trending and interpolation
Who this book is for
This book is designed to benefit software developers, data scientists, aspiring data scientists, teachers, and students who work with data. If you want to improve your rigor in data hygiene or are looking for a refresher, this book is for you. Basic familiarity with statistics, general concepts in machine learning, knowledge of a programming language (Python or R), and some exposure to data science are helpful.
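As a toy illustration of the z-score idea mentioned in the learning outcomes (a sketch only; the numbers are invented and the threshold is deliberately low for such a tiny sample), the snippet below flags values that sit far from the mean of a pandas Series.

```python
import pandas as pd

readings = pd.Series([9.8, 10.1, 10.0, 9.9, 55.0, 10.2])

# Standardize: how many standard deviations each value is from the mean.
z = (readings - readings.mean()) / readings.std()

# The 68-95-99.7 rule motivates thresholds like |z| > 3 on real data;
# with only six points we use 2 purely for demonstration.
print(readings[z.abs() > 2])  # flags the 55.0 reading
```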
Disruptive Analytics
Author: Thomas W. Dinsmore
Publisher: Apress
ISBN: 1484213114
Category : Computers
Languages : en
Pages : 276
Book Description
Learn all you need to know about seven key innovations disrupting business analytics today. These innovations—the open source business model, cloud analytics, the Hadoop ecosystem, Spark and in-memory analytics, streaming analytics, Deep Learning, and self-service analytics—are radically changing how businesses use data for competitive advantage. Taken together, they are disrupting the business analytics value chain and creating new opportunities. Enterprises that seize the opportunity will thrive and prosper, while others will struggle and decline: disrupt or be disrupted. Disruptive Analytics provides strategies to profit from disruption. It shows you how to organize for insight, build and provision an open source stack, practice lean data warehousing, and assimilate disruptive innovations into an organization. Through a short history of business analytics and a detailed survey of products and services, analytics authority Thomas W. Dinsmore provides a practical explanation of the most compelling innovations available today.
What You'll Learn
- Discover how the open source business model works and how to make it work for you
- See how cloud computing completely changes the economics of analytics
- Harness the power of Hadoop and its ecosystem
- Find out why Apache Spark is everywhere
- Discover the potential of streaming and real-time analytics
- Learn what Deep Learning can do and why it matters
- See how self-service analytics can change the way organizations do business
Who This Book Is For
Corporate actors at all levels of responsibility for analytics: analysts, CIOs, CTOs, strategic decision makers, managers, systems architects, technical marketers, product developers, IT personnel, and consultants.
Practical Machine Learning with Spark
Author: Gourav Gupta
Publisher: BPB Publications
ISBN: 9391392083
Category : Computers
Languages : en
Pages : 501
Book Description
Explore the cosmic secrets of distributed processing for deep learning applications.
KEY FEATURES
● In-depth practical demonstration of ML/DL concepts using a distributed framework.
● Covers graphical illustrations and visual explanations for ML/DL pipelines.
● Includes a live codebase for each of the NLP, computer vision, and machine learning applications.
DESCRIPTION
This book provides the reader with an up-to-date explanation of machine learning and an in-depth, comprehensive, and straightforward understanding of the architectural techniques used to evaluate and anticipate futuristic insights from data using Apache Spark. The book walks readers through setting up Hadoop and Spark installations on-premises, in Docker, and on AWS. Readers will learn about Spark MLlib and how to utilize it in supervised and unsupervised machine learning scenarios. With the help of Spark, some of the most prominent technologies, such as natural language processing and computer vision, are evaluated and demonstrated in a realistic setting. Using the capabilities of Apache Spark, this book discusses the fundamental components that underlie each of these natural language processing, computer vision, and machine learning technologies, as well as how you can incorporate these technologies into your business processes. Towards the end of the book, readers will learn about several deep learning frameworks, such as TensorFlow and PyTorch, and how to execute distributed processing of deep learning problems using Spark.
WHAT YOU WILL LEARN
● Learn how to get started with machine learning projects using Spark.
● See how to use Spark MLlib's design for machine learning and deep learning operations.
● Use Spark in tasks involving NLP, unsupervised learning, and computer vision.
● Experiment with Spark in a cloud environment and with AI pipeline workflows.
● Run deep learning applications on a distributed network.
WHO THIS BOOK IS FOR
This book is valuable for data engineers, machine learning engineers, data scientists, data architects, business analysts, and technical consultants worldwide. It would be beneficial to have some familiarity with the fundamentals of Hadoop and Python.
TABLE OF CONTENTS
1. Introduction to Machine Learning
2. Apache Spark Environment Setup and Configuration
3. Apache Spark
4. Apache Spark MLlib
5. Supervised Learning with Spark
6. Unsupervised Learning with Apache Spark
7. Natural Language Processing with Apache Spark
8. Recommendation Engine with Distributed Framework
9. Deep Learning with Spark
10. Computer Vision with Apache Spark
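As a minimal sketch of the Spark MLlib workflow the book covers (assuming a local PySpark installation; the toy data and column names are invented), the example below assembles two feature columns into a vector and fits a logistic regression model.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

# A tiny, invented dataset: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.0, 0.0), (1.0, 3.0, 1.0), (2.0, 5.0, 1.0), (0.5, 0.5, 0.0)],
    ["f1", "f2", "label"],
)

# MLlib estimators expect the features packed into a single vector column.
features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

model = LogisticRegression(featuresCol="features", labelCol="label").fit(features)
print(model.coefficients)

spark.stop()
```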
Big Data Analytics
Author: Ulrich Matter
Publisher: CRC Press
ISBN: 1000932737
Category : Mathematics
Languages : en
Pages : 389
Book Description
Successfully navigating the data-driven economy presupposes a certain understanding of the technologies and methods used to gain insights from Big Data. This book aims to help data science practitioners successfully manage the transition to Big Data. Building on familiar content from applied econometrics and business analytics, it introduces the reader to the basic concepts of Big Data Analytics. The focus of the book is on how to productively apply econometric and machine learning techniques to large, complex data sets, as well as on all the steps involved before analysing the data (data storage, data import, data preparation). The book combines conceptual and theoretical material with the practical application of the concepts using R and SQL. The reader will thus acquire the skills to analyse large data sets, both locally and in the cloud. Various code examples and tutorials, focused on empirical economic and business research, illustrate practical techniques for handling and analysing Big Data.
Key Features:
- Includes many code examples in R and SQL, with R/SQL scripts freely provided online.
- Extensive use of real datasets from empirical economic research and business analytics, with data files freely provided online.
- Leads students and practitioners to think critically about where the bottlenecks are in practical data analysis tasks with large data sets, and how to address them.
The book is a valuable resource for data science practitioners, graduate students, and researchers who aim to gain insights from Big Data in the context of research questions in business, economics, and the social sciences.