Construction of Non-Standard Markov Chain Models with Applications PDF Download


Construction of Non-Standard Markov Chain Models with Applications

Construction of Non-Standard Markov Chain Models with Applications PDF Author: Dongmei Zhu
Publisher: Open Dissertation Press
ISBN: 9781361345429
Category :
Languages : en
Pages :

Book Description
This dissertation, "Construction of Non-standard Markov Chain Models With Applications" by Dongmei Zhu, 朱冬梅, was obtained from The University of Hong Kong (Pokfulam, Hong Kong) and is being sold pursuant to the Creative Commons: Attribution 3.0 Hong Kong License. The content of this dissertation has not been altered in any way. We have altered the formatting in order to facilitate the ease of printing and reading of the dissertation. All rights not granted by the above license are retained by the author. Abstract: In this thesis, the properties of some non-standard Markov chain models and their corresponding parameter estimation methods are investigated. Several practical applications and extensions are also discussed. The estimation of model parameters plays a key role in the real-world applications of Markov chain models. Some widely used estimation methods for Markov chain models are based on the existence of stationary vectors. In this thesis, weaker sufficient conditions for the existence of stationary vectors are proposed for high-order Markov chain models, multivariate Markov chain models, and high-order multivariate Markov chain models. Furthermore, for multivariate Markov chain models, a new estimation method based on minimizing the prediction error is proposed. Numerical experiments demonstrate the efficiency of the proposed estimation methods, with an application in demand prediction. A Hidden Markov Model (HMM) is a bivariate stochastic process in which one of the processes is hidden and the other is observable. The distribution of the observable sequence depends on the hidden sequence. In a traditional HMM, the hidden states directly affect the observable states but not vice versa. In reality, however, the observable sequence may also affect the hidden sequence.
For this reason, the concept of the Interactive Hidden Markov Model (IHMM) is introduced, whose key idea is that the transitions of the hidden states also depend on the observable states. In this thesis, efforts are devoted to building a high-order IHMM in which the probability laws governing both observable and hidden states can be written as a pair of high-order stochastic difference equations. We also propose a new model that captures the effect of the observable sequence on the hidden sequence using the threshold principle. In this case, reference probability methods are adopted to estimate the optimal model parameters, while the unknown threshold parameter is chosen using the Akaike Information Criterion (AIC). We explore asset allocation problems from both domestic and foreign perspectives, where the asset price dynamics follow an autoregressive HMM. The objective of an investor is not only to maximize the expected utility of terminal wealth, but also to ensure that the risk of the portfolio, described by the Value-at-Risk (VaR), does not exceed a specified level. In many decision processes, fuzziness is a major source of imprecision. As a generalization of usual Markov chains, the definition of fuzzy Markov chains is introduced. Compared to traditional Markov chain models, fuzzy Markov chains are relatively new and many of their properties are still unknown. Motivated by the potential applications of fuzzy Markov chains, we provide some characterizations that ensure the ergodicity of these chains under both max-min and max-product compositions. DOI: 10.5353/th_b5295517 Subjects: Markov processes
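To give a flavor of the max-min composition the abstract refers to, here is a minimal sketch (the transition matrix is hypothetical and the code is not from the thesis): a fuzzy transition matrix has entries in [0, 1] that need not sum to 1 along rows, and its "powers" under max-min composition can stabilize after finitely many steps, which is the kind of convergence behavior the ergodicity characterizations concern.

```python
import numpy as np

# Hypothetical fuzzy transition matrix: entries are membership degrees in
# [0, 1]; unlike a stochastic matrix, rows need not sum to 1.
P = np.array([[0.7, 0.4],
              [0.5, 0.9]])

def max_min(A, B):
    # Max-min composition: C[i, j] = max over k of min(A[i, k], B[k, j]).
    return np.maximum.reduce(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

# Iterate P under max-min composition; for a well-behaved fuzzy chain the
# powers stabilize after finitely many steps.
Q = P
for _ in range(10):
    Q_next = max_min(Q, P)
    if np.array_equal(Q_next, Q):
        break
    Q = Q_next
print(Q)  # for this particular P, the powers stabilize at P itself
```

Replacing `min` with multiplication and keeping the outer `max` would give the max-product composition, the other case treated in the thesis.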

Construction of Non-standard Markov Chain Models with Applications

Construction of Non-standard Markov Chain Models with Applications PDF Author: 朱冬梅
Publisher:
ISBN:
Category : Markov processes
Languages : en
Pages : 146

Book Description


Markov Chains: Models, Algorithms and Applications

Markov Chains: Models, Algorithms and Applications PDF Author: Wai-Ki Ching
Publisher: Springer Science & Business Media
ISBN: 038729337X
Category : Mathematics
Languages : en
Pages : 212

Book Description
Markov chains are a particularly powerful and widely used tool for analyzing a variety of stochastic (probabilistic) systems over time. This monograph presents a series of Markov models, starting from the basic models and building up to higher-order models. Included in the higher-order discussions are multivariate models, higher-order multivariate models, and higher-order hidden Markov models. In each case, the focus is on the important kinds of applications that can be made with the class of models considered in the current chapter. Special attention is given to numerical algorithms that can efficiently solve the models. Markov Chains: Models, Algorithms and Applications thus outlines recent developments of Markov chain models for modeling queueing systems, the Internet, re-manufacturing systems, reverse logistics, inventory systems, bio-informatics, DNA sequences, genetic networks, data mining, and many other practical systems.
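As a minimal illustration of the basic model the monograph starts from (the two-state matrix below is a made-up example, not one of the book's): a discrete-time Markov chain is fully specified by a one-step transition matrix, and a sample path is simulated by repeatedly drawing the next state from the current state's row.

```python
import numpy as np

# A hypothetical two-state chain: each row of P is the conditional
# distribution of the next state given the current one, so rows sum to 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def simulate(P, start, steps, rng):
    state, path = start, [start]
    for _ in range(steps):
        # Draw the next state from the current state's row.
        state = int(rng.choice(len(P), p=P[state]))
        path.append(state)
    return path

rng = np.random.default_rng(0)
path = simulate(P, start=0, steps=20, rng=rng)
```

The higher-order models the book builds up to generalize exactly this step: the next-state distribution depends on several past states rather than the current one alone.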

Markov Chains and Decision Processes for Engineers and Managers

Markov Chains and Decision Processes for Engineers and Managers PDF Author: Theodore J. Sheskin
Publisher: CRC Press
ISBN: 9781420051117
Category : Technology & Engineering
Languages : en
Pages : 0

Book Description
Recognized as a powerful tool for dealing with uncertainty, Markov modeling can enhance your ability to analyze complex production and service systems. However, most books on Markov chains or decision processes are either highly theoretical, with few examples, or highly prescriptive, with little justification for the steps of the algorithms used to solve Markov models. Providing a unified treatment of Markov chains and Markov decision processes in a single volume, Markov Chains and Decision Processes for Engineers and Managers supplies a highly detailed description of the construction and solution of Markov models that facilitates their application to diverse processes. Organized around Markov chain structure, the book begins with descriptions of Markov chain states, transitions, structure, and models, and then discusses steady-state distributions and passage to a target state in a regular Markov chain. The author treats canonical forms and passage to target states or to classes of target states for reducible Markov chains. He adds an economic dimension by associating rewards with states, thereby creating a Markov chain with rewards, and then adds decisions to create a Markov decision process, enabling an analyst to choose among alternative Markov chains with rewards so as to maximize expected rewards. An introduction to state reduction and hidden Markov chains rounds out the coverage. In a presentation that balances algorithms and applications, the author explains the logical relationships that underpin the formulas or algorithms through informal derivations, and devotes considerable attention to the construction of Markov models.
He constructs simplified Markov models for a wide assortment of processes such as the weather, gambling, diffusion of gases, a waiting line, inventory, component replacement, machine maintenance, selling a stock, a charge account, a career path, patient flow in a hospital, marketing, and a production line. This treatment helps you harness the power of Markov modeling and apply it to your organization’s processes.
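The steady-state distribution mentioned above can be computed by solving the balance equations pi = pi P together with the normalization sum(pi) = 1. A minimal sketch with a made-up regular two-state chain (not an example taken from the book):

```python
import numpy as np

# A made-up regular two-state chain.
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])

# pi = pi P gives n balance equations, one of which is redundant;
# replace the last one with the normalization constraint sum(pi) = 1.
n = len(P)
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)  # -> approximately [0.3333, 0.6667]
```

The same linear-system approach extends to any regular finite chain; for reducible chains, the canonical forms the author discusses are used instead.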

Optimization and Games for Controllable Markov Chains

Optimization and Games for Controllable Markov Chains PDF Author: Julio B. Clempner
Publisher: Springer Nature
ISBN: 3031435753
Category : Technology & Engineering
Languages : en
Pages : 340

Book Description
This book considers a class of ergodic finite controllable Markov chains. The main idea behind the method described in this book is to recast the original discrete optimization problems (or game models) in the space of randomized formulations, where the variables stand in for the distributions (mixed strategies or preferences) of the original discrete (pure) strategies in use. The following suppositions are made: a finite state space, a finite action space, continuity of the probabilities and rewards associated with the actions, and an accessibility requirement. These hypotheses imply the existence of an optimal policy. The best course of action is always stationary: it is either simple (i.e., nonrandomized stationary) or composed of two nonrandomized policies, which is equivalent to randomly selecting one of the two simple policies in each epoch by tossing a biased coin. As a bonus, the optimization procedure only has to repeatedly solve the time-average dynamic programming equation, making it feasible to choose the optimal course of action under the global restriction. In the ergodic case, the state distributions generated by the corresponding transition equations converge exponentially fast to their stationary (final) values. This makes it possible to employ all widely used optimization methods (such as gradient-like procedures, the extra-proximal method, Lagrange multipliers, and Tikhonov regularization), including the related numerical techniques.
The book tackles a range of problems and theoretical Markov models, including controllable and ergodic Markov chains, multi-objective Pareto front solutions, partially observable Markov chains, continuous-time Markov chains, Nash and Stackelberg equilibria, Lyapunov-like functions in Markov chains, best-reply strategies, Bayesian incentive-compatible mechanisms, Bayesian partially observable Markov games, bargaining solutions for the Nash and Kalai-Smorodinsky formulations, the multi-traffic signal-control synchronization problem, Rubinstein's non-cooperative bargaining solutions, and the transfer pricing problem as a bargaining problem.

Dynamic Markov Bridges and Market Microstructure

Dynamic Markov Bridges and Market Microstructure PDF Author: Umut Çetin
Publisher: Springer
ISBN: 1493988352
Category : Mathematics
Languages : en
Pages : 239

Book Description
This book undertakes a detailed construction of Dynamic Markov Bridges using a combination of theory and real-world applications to drive home important concepts and methodologies. In Part I, the theory is developed using tools from stochastic filtering, partial differential equations, Markov processes, and their interplay. Part II is devoted to the applications of the theory developed in Part I to asymmetric information models among financial agents, which include a strategic risk-neutral insider who possesses a private signal concerning the future value of the traded asset, non-strategic noise traders, and competitive risk-neutral market makers. A thorough analysis of optimality conditions for risk-neutral insiders is provided, and the implications of non-Gaussian extensions for equilibrium are discussed. A Markov bridge, first considered by Paul Lévy in the context of Brownian motion, is a stochastic process that evolves from one state to another with both the initial and final states fixed. Markov bridges have many applications as stochastic models of real-world processes, especially within the areas of Economics and Finance. The construction of a Dynamic Markov Bridge, a useful extension of Markov bridge theory, addresses several important questions concerning how financial markets function, among them: how the presence of an insider trader impacts market efficiency; how insider trading on financial markets can be detected; how information is assimilated into market prices; and the optimal pricing policy of a particular market maker. The principles in this book will appeal to probabilists, statisticians, economists, researchers, and graduate students interested in Markov bridges and market microstructure theory.
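Since the description mentions Lévy's original case, a bridge for Brownian motion, here is an illustrative discretized sketch (not code from the book): conditioning a Brownian path B on its endpoint gives the bridge X(t) = B(t) - t * (B(1) - b), which is pinned at 0 at time 0 and at b at time 1.

```python
import numpy as np

# Discretize [0, 1] into n steps, simulate standard Brownian motion B,
# then pin the endpoint at b via X(t) = B(t) - t * (B(1) - b).
rng = np.random.default_rng(1)
n = 1000
b = 1.0
t = np.linspace(0.0, 1.0, n + 1)
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)  # Gaussian increments
B = np.concatenate([[0.0], np.cumsum(dW)])
bridge = B - t * (B[-1] - b)
```

The "dynamic" bridges of the book generalize this static construction, but the pinning of initial and final states is the common idea.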

Finite Markov Chains and Algorithmic Applications

Finite Markov Chains and Algorithmic Applications PDF Author: Olle Häggström
Publisher: Cambridge University Press
ISBN: 9780521890014
Category : Mathematics
Languages : en
Pages : 132

Book Description
Based on a lecture course given at Chalmers University of Technology, this 2002 book is ideal for advanced undergraduate or beginning graduate students. The author first develops the necessary background in probability theory and Markov chains before applying it to study a range of randomized algorithms with important applications in optimization and other problems in computing. Amongst the algorithms covered are the Markov chain Monte Carlo method, simulated annealing, and the recent Propp-Wilson algorithm. This book will appeal not only to mathematicians, but also to students of statistics and computer science. The subject matter is introduced in a clear and concise fashion and the numerous exercises included will help students to deepen their understanding.
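Among the algorithms listed, the Markov chain Monte Carlo method can be sketched in a few lines. The target distribution and proposal below are made-up illustrations, not examples from the book: a Metropolis sampler needs only ratios of unnormalized target weights, and its long-run state frequencies approximate the target.

```python
import numpy as np

# Unnormalized target on states {0, ..., 4}; the sampler never needs the
# normalizing constant, only ratios of weights.
weights = np.array([1.0, 2.0, 4.0, 2.0, 1.0])

def metropolis(weights, steps, rng):
    x = 0
    counts = np.zeros(len(weights))
    for _ in range(steps):
        # Symmetric random-walk proposal on a ring of states.
        y = (x + rng.choice([-1, 1])) % len(weights)
        # Accept with probability min(1, target(y) / target(x)).
        if rng.random() < weights[y] / weights[x]:
            x = y
        counts[x] += 1
    return counts / steps

rng = np.random.default_rng(2)
freq = metropolis(weights, steps=200_000, rng=rng)
```

After many steps, `freq` is close to `weights / weights.sum()`; the Propp-Wilson algorithm the book also covers removes even this "after many steps" caveat by sampling exactly from the stationary distribution.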

University of Michigan Official Publication

University of Michigan Official Publication PDF Author: University of Michigan
Publisher: UM Libraries
ISBN:
Category : Education, Higher
Languages : en
Pages : 448

Book Description
Each number is the catalogue of a specific school or college of the University.

Markov Processes for Stochastic Modeling

Markov Processes for Stochastic Modeling PDF Author: Masaaki Kijima
Publisher: Springer
ISBN: 1489931325
Category : Mathematics
Languages : en
Pages : 345

Book Description
This book presents an algebraic development of the theory of countable state space Markov chains with discrete- and continuous-time parameters. A Markov chain is a stochastic process characterized by the Markov property: the distribution of the future depends only on the current state, not on the whole history. Despite this simple form of dependency, the Markov property has enabled us to develop a rich system of concepts and theorems and to derive many results that are useful in applications. In fact, the areas that can be modeled, with varying degrees of success, by Markov chains are vast and still expanding. The aim of this book is a discussion of the time-dependent behavior, called the transient behavior, of Markov chains. From the practical point of view, when modeling a stochastic system by a Markov chain, there are many instances in which limiting results such as stationary distributions have no meaning. Even when the stationary distribution is of some importance, it is often dangerous to use the stationary result alone without knowing the transient behavior of the Markov chain. Not many books have paid much attention to this topic, despite its obvious importance.
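The transient behavior the book emphasizes is concrete to compute: the distribution after t steps is pi_t = pi_0 P^t, which can differ markedly from the stationary distribution for small t. A minimal sketch with a made-up two-state chain (not an example from the book):

```python
import numpy as np

# A made-up two-state chain started far from equilibrium.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = np.array([0.0, 1.0])  # start in state 1 with certainty

# Record the transient distributions pi_t = pi_0 P^t step by step.
history = [pi]
for _ in range(50):
    pi = pi @ P
    history.append(pi)
```

Here `history[1]` is [0.5, 0.5] while the limit is [5/6, 1/6]; relying on the stationary result alone would misstate the early-time behavior, which is exactly the book's point.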

Applications of Markov Chains in Chemical Engineering

Applications of Markov Chains in Chemical Engineering PDF Author: A. Tamir
Publisher: Elsevier
ISBN: 0080527396
Category : Mathematics
Languages : en
Pages : 617

Book Description
Markov chains make it possible to predict the future state of a system from its present state, ignoring its past history. Surprisingly, despite the widespread use of Markov chains in many areas of science and technology, their applications in chemical engineering have been relatively meager. A possible reason is that books containing material on this subject have been written in such a way that the simplicity of Markov chains has been overshadowed by tedious mathematical derivations. Thus, the major objective of writing this book has been to try to change this situation. There are many advantages, detailed in Chapter 1, of using the discrete Markov-chain model in chemical engineering. Probably the most important advantage is that physical models can be presented in a unified description via a state vector and a one-step transition probability matrix. Consequently, a process is described solely by the probability of the system occupying or not occupying a state. The book has been written in an easy and understandable form, avoiding complex mathematical derivations. The fundamentals of Markov chains are presented in Chapter 2 with examples from the Bible, art, and real-life problems. An extremely wide collection of examples is given, viz., reactions, reactors, reactions and reactors, as well as combined processes, including their solutions and graphical presentations, all of which demonstrate the usefulness of applying Markov chains in chemical engineering.
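The state-vector-plus-transition-matrix description can be sketched on a textbook-style chemical example (a made-up illustration in the spirit of the book, not taken from it): a first-order reaction A -> B viewed as a two-state chain, where a molecule in state A converts with probability p per time step and state B is absorbing.

```python
import numpy as np

# First-order reaction A -> B as a two-state chain: a molecule in state A
# converts with probability p per step; B is absorbing. The state vector s
# holds the fractions of molecules in (A, B) after each step: s_k = s_0 P^k.
p = 0.1
P = np.array([[1.0 - p, p],
              [0.0, 1.0]])

s = np.array([1.0, 0.0])  # everything starts as A
for _ in range(30):
    s = s @ P
```

After k steps the fraction of unreacted A is (1 - p)^k, the discrete analogue of first-order exponential decay; more elaborate reaction and reactor networks just enlarge the state space and the matrix.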