Single and Multi-person Controlled Diffusions
Author: Stanley Roy Pliska
Publisher:
ISBN:
Category : Control theory
Languages : en
Pages : 122
Book Description
The paper is concerned with the optimal control of a one-dimensional stationary diffusion process on a compact interval. The drift and diffusion coefficients depend upon a stationary control, assumed to be a piecewise continuous function of the state. The costs generated by the process are functions of both the control and the sample path of the process. Mandl's concept of a controlled diffusion process is generalized by allowing the controls to be vector-valued, with the set of admissible control actions defined by a piecewise continuous set-valued function on the state space. Both single- and multi-person problems are considered. The main results include necessary and sufficient conditions for a control to be 'optimal' and conditions ensuring the existence of a piecewise continuous optimal control. Applications are given to problems of controlling reservoirs, pollution, queues, investments, welfare, and warfare.
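The setup in the description can be written schematically. This is a generic formulation of a controlled stationary diffusion with long-run average cost, not notation taken from the paper itself; the symbols b, sigma, c, u, and the interval endpoints are illustrative:

```latex
% Controlled one-dimensional diffusion on a compact interval [l, r]
% (illustrative notation; b, \sigma, c, u are not from the source):
dX_t = b\bigl(X_t, u(X_t)\bigr)\,dt
     + \sigma\bigl(X_t, u(X_t)\bigr)\,dW_t ,
\qquad X_t \in [l, r],
% where the stationary (feedback) control u is piecewise continuous
% and satisfies u(x) \in U(x), with U(\cdot) the piecewise continuous
% set-valued map of admissible actions. A typical long-run average
% cost to be minimized over admissible controls is
J(u) = \lim_{T \to \infty} \frac{1}{T}\,
       \mathbb{E}\!\left[\int_0^T c\bigl(X_t, u(X_t)\bigr)\,dt\right].
```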
Controlled Markov Processes
Author: Nicolaas M. van Dijk
Publisher:
ISBN:
Category :
Languages : en
Pages : 264
Book Description
Controlled Markov Processes
Author: N. M. van Dijk
Publisher:
ISBN:
Category : Control theory
Languages : en
Pages : 182
Book Description
Kybernetika
Annual Commencement
Government Reports Announcements
Gaining Momentum: Managing The Diffusion Of Innovations
Author: Joe Tidd
Publisher: World Scientific
ISBN: 1908978511
Category : Business & Economics
Languages : en
Pages : 446
Book Description
Diffusion, or the widespread adoption of innovations, is a critical yet under-researched topic. There is a wide gap between development and successful adoption of an innovation. Therefore, a better understanding of why and how an innovation is adopted can help develop realistic management and business plans. Most books on this topic use a single-discipline approach to explain the diffusion of innovations. This book adopts a multi-disciplinary and managerial process approach to understanding and promoting the adoption of innovations, based on the latest research and practice. It will be of interest to graduates and researchers in marketing, product development and innovation courses.
The Penny Cyclopedia of The Society for the Diffusion of Useful Knowledge
Author: Society for the Diffusion of Useful Knowledge (Great Britain)
Publisher:
ISBN:
Category :
Languages : en
Pages : 516
Book Description
Stochastic Controls
Author: Jiongmin Yong
Publisher: Springer Science & Business Media
ISBN: 9780387987231
Category : Mathematics
Languages : en
Pages : 472
Book Description
As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches for solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal control? There was some research (prior to the 1980s) on the relationship between the two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as the Hamilton-Jacobi-Bellman (HJB) equation.
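For orientation, the objects named in the description can be sketched in generic notation. The symbols b, sigma, f, U, and V below are illustrative placeholders, not notation drawn from the book; this shows only why the deterministic HJB equation is first order while the stochastic one is second order:

```latex
% Controlled state equation (stochastic case), illustrative notation:
%   dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t .
%
% HJB equation for the value function V(t,x) of a cost-minimization
% problem with running cost f. Deterministic case (\sigma \equiv 0),
% a first-order PDE:
-\,\partial_t V
  = \inf_{u \in U}\Bigl\{\, b(t,x,u)\cdot\nabla_x V + f(t,x,u) \,\Bigr\}.
% Stochastic case: the diffusion contributes the Hessian term
% \tfrac12\,\mathrm{tr}(\sigma\sigma^{\top}\nabla_x^2 V),
% making the PDE second order:
-\,\partial_t V
  = \inf_{u \in U}\Bigl\{\,
      \tfrac{1}{2}\,\mathrm{tr}\bigl(\sigma\sigma^{\top}(t,x,u)\,
        \nabla_x^2 V\bigr)
      + b(t,x,u)\cdot\nabla_x V + f(t,x,u)
    \Bigr\}.
```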