Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control
Author: Dimitri Bertsekas
Publisher: Athena Scientific
ISBN: 1886529175
Category : Computers
Languages : en
Pages : 229
Book Description
The purpose of this book is to propose and develop a new conceptual framework for approximate Dynamic Programming (DP) and Reinforcement Learning (RL). This framework centers on two algorithms, which are designed largely independently of each other and operate in synergy through the powerful mechanism of Newton's method. We call these the off-line training and the on-line play algorithms; the names are borrowed from some of the major successes of RL involving games. Primary examples are the recent (2017) AlphaZero program (which plays chess) and the similarly structured, earlier (1990s) TD-Gammon program (which plays backgammon). In these game contexts, the off-line training algorithm is the method used to teach the program how to evaluate positions and to generate good moves at any given position, while the on-line play algorithm is the method used to play in real time against human or computer opponents.

Both AlphaZero and TD-Gammon were trained off-line extensively using neural networks and an approximate version of the fundamental DP algorithm of policy iteration. Yet the AlphaZero player that was obtained off-line is not used directly during on-line play (it is too inaccurate due to approximation errors that are inherent in off-line neural network training). Instead, a separate on-line player is used to select moves, based on multistep lookahead minimization and a terminal position evaluator that was trained using experience with the off-line player. The on-line player performs a form of policy improvement, which is not degraded by neural network approximations, and as a result it greatly improves the performance of the off-line player. Similarly, TD-Gammon performs on-line a policy improvement step using one-step or two-step lookahead minimization, which is not degraded by neural network approximations. To this end, it uses an off-line neural network-trained terminal position evaluator, and importantly it also extends its on-line lookahead by rollout (simulation with the one-step lookahead player that is based on the position evaluator).

Significantly, the synergy between off-line training and on-line play also underlies Model Predictive Control (MPC), a major control system design methodology that has been extensively developed since the 1980s. This synergy can be understood in terms of abstract models of infinite horizon DP and simple geometrical constructions, and it helps to explain the all-important stability issues within the MPC context. An additional benefit of policy improvement by approximation in value space, not observed in the context of games (which have stable rules and environment), is that it works well with changing problem parameters and on-line replanning, similar to indirect adaptive control. Here the Bellman equation is perturbed due to the parameter changes, but approximation in value space still operates as a Newton step. An essential requirement here is that a system model is estimated on-line through some identification method and is used during the one-step or multistep lookahead minimization process.

In this monograph we aim to provide insights (often based on visualization) that explain the beneficial effects of on-line decision making on top of off-line training. In the process, we bring out the strong connections between the artificial intelligence view of RL and the control theory views of MPC and adaptive control. Moreover, we show that, in addition to MPC and adaptive control, our conceptual framework can be effectively integrated with other important methodologies such as multiagent systems and decentralized control, discrete and Bayesian optimization, and heuristic algorithms for discrete optimization. One of our principal aims is to show, through the algorithmic ideas of Newton's method and the unifying principles of abstract DP, that the AlphaZero/TD-Gammon methodology of approximation in value space and rollout applies very broadly to deterministic and stochastic optimal control problems. Newton's method here is used for the solution of Bellman's equation, an operator equation that applies universally within DP, with both discrete and continuous state and control spaces, and with finite and infinite horizon problems.
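To make the approximation-in-value-space mechanism described above concrete, the following is a minimal sketch in generic discounted-cost notation; the symbols (stage cost g, dynamics f, discount factor \alpha, control constraint set U(x), approximation \tilde J) are standard illustrative choices rather than excerpts from the book. Bellman's equation reads
\[
J^*(x) \;=\; \min_{u \in U(x)} \bigl[\, g(x,u) + \alpha\, J^*\bigl(f(x,u)\bigr) \,\bigr],
\]
and approximation in value space replaces the unknown optimal cost function J^* with an off-line trained approximation \tilde J, while the on-line player applies one-step (or multistep) lookahead minimization:
\[
\tilde\mu(x) \;\in\; \arg\min_{u \in U(x)} \bigl[\, g(x,u) + \alpha\, \tilde J\bigl(f(x,u)\bigr) \,\bigr].
\]
In the book's framework, the cost function J_{\tilde\mu} of the lookahead policy \tilde\mu is viewed as (approximately) the result of a single Newton iteration for solving the fixed point equation J = TJ, where T is the Bellman operator defined by the first minimization, with \tilde J as the starting point.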
A Course in Reinforcement Learning
Author: Dimitri Bertsekas
Publisher: Athena Scientific
ISBN: 1886529493
Category : Computers
Languages : en
Pages : 421
Book Description
These lecture notes were prepared for use in the 2023 ASU research-oriented course on Reinforcement Learning (RL) that I have offered in each of the last five years. Their purpose is to give an overview of the RL methodology, particularly as it relates to problems of optimal and suboptimal decision and control, as well as discrete optimization. There are two major methodological RL approaches: approximation in value space, where we approximate in some way the optimal value function, and approximation in policy space, whereby we construct a (generally suboptimal) policy by using optimization over a suitably restricted class of policies. The lecture notes focus primarily on approximation in value space, with limited coverage of approximation in policy space. However, they are structured so that they can be easily supplemented by an instructor who wishes to go into approximation in policy space in greater detail, using any of a number of available sources, including the author's 2019 RL book. While in these notes we deemphasize mathematical proofs, there is considerable related analysis, which supports our conclusions and can be found in the author's recent RL and DP books. These books also contain additional material on off-line training of neural networks, on the use of policy gradient methods for approximation in policy space, and on aggregation.
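As a concrete (and intentionally simple) illustration of approximation in policy space, the following Python sketch searches over a small parameterized family of policies by simulation; the function and parameter names are illustrative assumptions, not code from the lecture notes.

# Approximation in policy space (minimal sketch): restrict attention to a
# parameterized family of policies and pick the parameter whose simulated
# cost is smallest. simulate_cost(theta) is an assumed user-supplied function
# that averages the cost of many simulated trajectories under the policy
# defined by parameter theta.
def policy_space_search(simulate_cost, thetas):
    return min(thetas, key=simulate_cost)

# Usage idea: thetas could be a coarse grid over a scalar threshold
# parameter, e.g. [i / 20 for i in range(21)], with the threshold policy
# applying a fixed control whenever the state exceeds the threshold.

In practice the parameter would typically be tuned with a gradient or random-search method rather than exhaustive enumeration, but the restricted-class idea is the same.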
Reinforcement Learning and Optimal Control
Author: Dimitri Bertsekas
Publisher: Athena Scientific
ISBN: 1886529396
Category : Computers
Languages : en
Pages : 388
Book Description
This book considers large and challenging multistage decision problems, which can be solved in principle by dynamic programming (DP), but whose exact solution is computationally intractable. We discuss solution methods that rely on approximations to produce suboptimal policies with adequate performance. These methods are collectively known by several essentially equivalent names: reinforcement learning, approximate dynamic programming, and neuro-dynamic programming. They have been at the forefront of research for the last 25 years, and they underlie, among others, the recent impressive successes of self-learning in the context of games such as chess and Go. Our subject has benefited greatly from the interplay of ideas from optimal control and from artificial intelligence, as it relates to reinforcement learning and simulation-based neural network methods. One of the aims of the book is to explore the common boundary between these two fields and to form a bridge that is accessible to workers with a background in either field. Another aim is to organize coherently the broad mosaic of methods that have proved successful in practice while having a solid theoretical and/or logical foundation. This may help researchers and practitioners to find their way through the maze of competing ideas that constitute the current state of the art. This book relates to several of our other books: Neuro-Dynamic Programming (Athena Scientific, 1996), Dynamic Programming and Optimal Control (4th edition, Athena Scientific, 2017), Abstract Dynamic Programming (2nd edition, Athena Scientific, 2018), and Nonlinear Programming (Athena Scientific, 2016). However, the mathematical style of this book is somewhat different. While we provide a rigorous, albeit short, mathematical account of the theory of finite and infinite horizon dynamic programming, and some fundamental approximation methods, we rely more on intuitive explanations and less on proof-based insights. Moreover, our mathematical requirements are quite modest: calculus, a minimal use of matrix-vector algebra, and elementary probability (mathematically complicated arguments involving laws of large numbers and stochastic convergence are bypassed in favor of intuitive explanations). The book illustrates the methodology with many examples and illustrations, and uses a gradual expository approach, which proceeds along four directions:
(a) From exact DP to approximate DP: We first discuss exact DP algorithms, explain why they may be difficult to implement, and then use them as the basis for approximations.
(b) From finite horizon to infinite horizon problems: We first discuss finite horizon exact and approximate DP methodologies, which are intuitive and mathematically simple, and then progress to infinite horizon problems.
(c) From deterministic to stochastic models: We often discuss deterministic and stochastic problems separately, since deterministic problems are simpler and offer special advantages for some of our methods.
(d) From model-based to model-free implementations: We first discuss model-based implementations, and then we identify schemes that can be appropriately modified to work with a simulator.
The book is related to, and supplemented by, the companion research monograph Rollout, Policy Iteration, and Distributed Reinforcement Learning (Athena Scientific, 2020), which focuses more closely on several topics related to rollout, approximate policy iteration, multiagent problems, discrete and Bayesian optimization, and distributed computation; these topics are either discussed in less detail or not covered at all in the present book. The author's website contains class notes and a series of videolectures and slides from a 2021 course at ASU, which address a selection of topics from both books.
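To illustrate the first expository direction, "(a) From exact DP to approximate DP," here is a minimal Python sketch of the backward DP recursion for a generic deterministic, finite-horizon, finite-state problem; all names (states, controls, g, f, gN) are illustrative assumptions, not code from the book.

# Exact backward DP (minimal sketch) for a deterministic finite-horizon problem.
#   states      : iterable of states
#   controls(x) : iterable of controls available at state x
#   g(k, x, u)  : stage cost at time k
#   f(k, x, u)  : next state
#   gN(x)       : terminal cost
def exact_dp(states, controls, g, f, gN, N):
    J = [dict() for _ in range(N + 1)]        # J[k][x] = optimal cost-to-go
    policy = [dict() for _ in range(N)]
    for x in states:
        J[N][x] = gN(x)
    for k in reversed(range(N)):
        for x in states:
            best_u, best_q = None, float("inf")
            for u in controls(x):
                q = g(k, x, u) + J[k + 1][f(k, x, u)]
                if q < best_q:
                    best_u, best_q = u, q
            J[k][x], policy[k][x] = best_q, best_u
    return J, policy

# Approximate DP in value space would replace J[k + 1] above with a trained
# approximation (for example, a neural network evaluated at f(k, x, u)),
# which is what makes the recursion tractable when the state space is large.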
Rollout, Policy Iteration, and Distributed Reinforcement Learning
Author: Dimitri Bertsekas
Publisher: Athena Scientific
ISBN: 1886529078
Category : Computers
Languages : en
Pages : 498
Book Description
The purpose of this book is to develop in greater depth some of the methods from the author's recently published textbook Reinforcement Learning and Optimal Control (Athena Scientific, 2019). In particular, we present new research relating to systems involving multiple agents, partitioned architectures, and distributed asynchronous computation. We pay special attention to the contexts of dynamic programming/policy iteration and control theory/model predictive control. We also discuss in some detail the application of the methodology to challenging discrete/combinatorial optimization problems, such as routing, scheduling, assignment, and mixed integer programming, including the use of neural network approximations within these contexts. The book focuses on the fundamental idea of policy iteration, i.e., start from some policy and successively generate one or more improved policies. If just one improved policy is generated, this is called rollout, which, based on broad and consistent computational experience, appears to be one of the most versatile and reliable of all reinforcement learning methods. In this book, rollout algorithms are developed for both discrete deterministic and stochastic DP problems, and distributed implementations are developed in both multiagent and multiprocessor settings, aiming to take advantage of parallelism. Approximate policy iteration is more ambitious than rollout, but it is a strictly off-line method, and it is generally far more computationally intensive. This motivates the use of parallel and distributed computation. One of the purposes of the monograph is to discuss distributed (possibly asynchronous) methods that relate to rollout and policy iteration, both in the context of an exact and an approximate implementation involving neural networks or other approximation architectures. Much of the new research is inspired by the remarkable AlphaZero chess program, where policy iteration, value and policy networks, approximate lookahead minimization, and parallel computation all play an important role.
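To illustrate the rollout idea (a single step of policy improvement on top of a base policy), here is a minimal Python sketch for a generic stochastic problem with a truncated simulation horizon; the function names (step, base_policy, controls) are illustrative assumptions, not code from the book.

# Rollout (minimal sketch): at state x, evaluate each candidate control by
# simulating the base policy for the remaining stages, and return the control
# with the smallest average sampled cost.
#   step(x, u)     -> (stage_cost, next_state), with any randomness inside
#   base_policy(x) -> control chosen by the base (heuristic) policy
#   controls(x)    -> iterable of candidate controls at state x
def rollout_control(x, controls, step, base_policy, horizon, n_samples=10):
    best_u, best_q = None, float("inf")
    for u in controls(x):
        total = 0.0
        for _ in range(n_samples):
            cost, y = step(x, u)               # first stage: candidate control u
            for _ in range(horizon - 1):       # remaining stages: base policy
                c, y = step(y, base_policy(y))
                cost += c
            total += cost
        avg = total / n_samples
        if avg < best_q:
            best_u, best_q = u, avg
    return best_u

In a deterministic problem one sample per control suffices, and in a multiagent setting the minimization over u can be broken into a sequence of single-agent minimizations, which is one of the distributed themes developed in the monograph.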
Reinforcement Learning and Optimal Control
Author: Dimitri P. Bertsekas
Publisher:
ISBN: 9787302540328
Category : Artificial intelligence
Languages : zh-CN
Pages : 373
Book Description
Stochastic Optimal Control
Author: Dimitri P. Bertsekas
Publisher:
ISBN: 9780120932603
Category : Dynamic programming
Languages : en
Pages : 323
Book Description
AI and education
Author: Miao, Fengchun
Publisher: UNESCO Publishing
ISBN: 9231004476
Category : Political Science
Languages : en
Pages : 50
Book Description
Artificial Intelligence (AI) has the potential to address some of the biggest challenges in education today, innovate teaching and learning practices, and ultimately accelerate progress towards SDG 4. However, these rapid technological developments inevitably bring multiple risks and challenges, which have so far outpaced policy debates and regulatory frameworks. This publication offers guidance for policy-makers on how best to leverage the opportunities and address the risks presented by the growing connection between AI and education. It starts with the essentials of AI: definitions, techniques and technologies. It continues with a detailed analysis of the emerging trends and implications of AI for teaching and learning, including how we can ensure the ethical, inclusive and equitable use of AI in education, how education can prepare humans to live and work with AI, and how AI can be applied to enhance education. Finally, it introduces the challenges of harnessing AI to achieve SDG 4 and offers concrete, actionable recommendations for policy-makers to plan policies and programmes for local contexts. [Publisher summary, ed]
Understanding the impact of artificial intelligence on skills development
Author: UNESCO International Centre for Technical and Vocational Education and Training
Publisher: UNESCO Publishing
ISBN: 9231004468
Category : Political Science
Languages : en
Pages : 56
Book Description
The Secret of Our Success
Author: Joseph Henrich
Publisher: Princeton University Press
ISBN: 0691178437
Category : Psychology
Languages : en
Pages : 464
Book Description
How our collective intelligence has helped us to evolve and prosper. Humans are a puzzling species. On the one hand, we struggle to survive on our own in the wild, often failing to overcome even basic challenges, like obtaining food, building shelters, or avoiding predators. On the other hand, human groups have produced ingenious technologies, sophisticated languages, and complex institutions that have permitted us to successfully expand into a vast range of diverse environments. What has enabled us to dominate the globe, more than any other species, while remaining virtually helpless as lone individuals? This book shows that the secret of our success lies not in our innate intelligence, but in our collective brains—in the ability of human groups to socially interconnect and learn from one another over generations. Drawing insights from lost European explorers, clever chimpanzees, mobile hunter-gatherers, neuroscientific findings, ancient bones, and the human genome, Joseph Henrich demonstrates how our collective brains have propelled our species' genetic evolution and shaped our biology. Our early capacities for learning from others produced many cultural innovations, such as fire, cooking, water containers, plant knowledge, and projectile weapons, which in turn drove the expansion of our brains and altered our physiology, anatomy, and psychology in crucial ways. Later on, some collective brains generated and recombined powerful concepts, such as the lever, wheel, screw, and writing, while also creating the institutions that continue to alter our motivations and perceptions. Henrich shows how our genetics and biology are inextricably interwoven with cultural evolution, and how culture-gene interactions launched our species on an extraordinary evolutionary trajectory. Tracking clues from our ancient past to the present, The Secret of Our Success explores how the evolution of both our cultural and social natures produces a collective intelligence that explains both our species' immense success and the origins of human uniqueness.
Business and Consumer Analytics: New Ideas
Author: Pablo Moscato
Publisher: Springer
ISBN: 3030062228
Category : Computers
Languages : en
Pages : 1000
Book Description
This two-volume handbook presents a collection of novel methodologies with applications and illustrative examples in the areas of data-driven computational social sciences. Throughout this handbook, the focus is kept specifically on business and consumer-oriented applications, with interesting sections ranging from clustering and network analysis, meta-analytics, memetic algorithms, machine learning, recommender systems methodologies, parallel pattern mining and data mining to specific applications in market segmentation, travel, fashion or entertainment analytics. It is a must-read for anyone in data analytics, marketing, behavior modelling and computational social science who is interested in the latest applications of new computer science methodologies. The chapters are contributed by leading experts in the associated fields. The chapters cover technical aspects at different levels, some of which are introductory and could be used for teaching. Some chapters aim at building a common understanding of the methodologies and recent application areas, including the introduction of new theoretical results in the complexity of core problems. Business and marketing professionals may use the book to familiarize themselves with some important foundations of data science. The work is a good starting point to establish an open dialogue of communication between professionals and researchers from different fields. Together, the two volumes present a number of different new directions in Business and Customer Analytics, with an emphasis on personalization of services, the development of new mathematical models, and new algorithms, heuristics and metaheuristics applied to the challenging problems in the field. Sections of the book have introductory material to more specific and advanced themes in some of the chapters, allowing the volumes to be used as an advanced textbook. Clustering, Proximity Graphs, Pattern Mining, Frequent Itemset Mining, Feature Engineering, Network and Community Detection, Network-based Recommending Systems and Visualization are some of the topics in the first volume. Techniques on Memetic Algorithms and their applications to Business Analytics and Data Science are surveyed in the second volume; applications in Team Orienteering, Competitive Facility-location, and Visualization of Products and Consumers are also discussed. The second volume also includes an introduction to Meta-Analytics, and to the application areas of Fashion and Travel Analytics. Overall, the two-volume set helps to describe some fundamentals, acts as a bridge between different disciplines, and presents important results in a rapidly moving field combining powerful optimization techniques allied to new mathematical models critical for personalization of services. Academics and professionals working in the areas of business analytics, data science, operations research and marketing will find this handbook valuable as a reference. Students studying these fields will find this handbook useful and helpful as a secondary textbook.