Cognitive Human Activity and Plan Recognition for Human-robot Collaboration

Cognitive Human Activity and Plan Recognition for Human-robot Collaboration

Author: Sang Uk Lee (Mechanical engineer)
Publisher:
ISBN:
Category :
Languages : en
Pages : 0

Book Description
With the growth of the robotics field, robots are expected to become an increasing part of our everyday lives. Consequently, there is an emerging need for humans and robots to work together, and human-robot collaboration has become an important topic in domains such as home assistance and manufacturing. For effective collaboration, robots must be able to recognize the human's activity and plan while determining which functions would be helpful to the human. This problem, known as human activity and plan recognition (HAPR), is considered the main bottleneck for successful collaboration, and it becomes even more complex when the inputs are visual, such as RGB-D images. This thesis investigates how to perform efficient and accurate vision-based HAPR for fluent collaboration in real-world applications.

The thesis examines two limitations of state-of-the-art HAPR studies. First, although learning-based, model-free approaches are gaining significant attention owing to recent advances in deep learning, they require a significant amount of training data, which makes recognition inefficient. Second, previous studies recognized human activity and plan separately and sequentially: they recognized the human activity first and the plan afterwards. Such separate, sequential recognition cannot exploit the plan context while recognizing the activity, because the plan is not yet available; since the plan context provides useful information for activity recognition, sequential recognition is inaccurate.

We pose a fundamental question: do humans share the same limitations when recognizing others' activities and plans? To answer it, we introduce a novel problem called cognitive HAPR, which improves the HAPR system by adopting three ideas motivated by how humans perform HAPR. The first idea is symbolic reasoning based on the preconditions-and-effects structure of activities, which humans understand well. For example, suppose a person is getting a bowl. It is intuitive that the person's hand must be empty as a precondition of this activity, and that the person will be holding a bowl as its effect. We propose that such an intuitive preconditions-and-effects structure of activities provides valuable domain knowledge for HAPR. The second idea is the application of commonsense spatial knowledge with qualitative representations. Several cognitive science studies have shown that humans efficiently and effectively perceive their surroundings by abstracting the scene using qualitative representations, which are more compact and effective than quantitative data such as the 6-D poses (x, y, z, roll, pitch, and yaw) of objects. We propose qualitative spatial representation (QSR), a framework that describes the spatial information of objects qualitatively, as a suitable representation tool for HAPR; complex predicates relevant to activities are modeled through QSR statements using intuitive commonsense knowledge, which also provides valuable domain knowledge for HAPR. The third idea is context-aware human activity recognition using the plan context. Several cognitive science studies have shown that humans recognize activity and plan in a combined framework rather than separately and sequentially, and that they employ the plan context when recognizing activity. We propose a combined model for HAPR that captures the Bayesian theory of mind (BToM) from cognitive science.

This thesis presents a cognitive HAPR system, the cognitively motivated plan and activity estimation system (COMPASS), that realizes the three ideas. We evaluate COMPASS in a home care scenario, the activities of daily life (ADL), in which a human and a robot collaborate to complete daily tasks in a household environment. The ADL scenario demonstrates that COMPASS resolves the two limitations of previous HAPR studies. First, as a model-based approach, COMPASS requires significantly less training data than a learning-based approach, making it more efficient; the first two ideas, symbolic reasoning over preconditions and effects and commonsense spatial knowledge with qualitative representations, provide good domain knowledge for model-based recognition with minimal modeling effort. Second, through the combined framework, COMPASS performs context-aware activity recognition using the plan context, making it more accurate than the sequential model.
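
To make the first two ideas concrete, the following is a minimal, purely illustrative sketch; it is not the thesis's implementation, and the names (Activity, GET_BOWL, near, plausible) and thresholds are hypothetical. An activity is modeled by symbolic preconditions and effects, and a qualitative "near" relation stands in for a QSR-style abstraction of quantitative object poses.

from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class Activity:
    name: str
    preconditions: frozenset  # predicates that must hold before the activity
    effects: frozenset        # predicates that hold once the activity is done

# "Getting a bowl" needs an empty hand and ends with the person holding a bowl.
GET_BOWL = Activity(
    name="get_bowl",
    preconditions=frozenset({("hand_empty",)}),
    effects=frozenset({("holding", "bowl")}),
)

def near(a_xyz, b_xyz, threshold=0.15):
    # Qualitative "near" relation abstracted from quantitative 3-D positions.
    return dist(a_xyz, b_xyz) < threshold

def plausible(activity, observed_state):
    # An activity is a plausible explanation only if all of its
    # preconditions are satisfied in the qualitatively abstracted state.
    return activity.preconditions <= observed_state

state = {("hand_empty",)}                                   # toy perception output
hand_near_bowl = near((0.52, 0.10, 0.80), (0.50, 0.12, 0.78))
print(plausible(GET_BOWL, state) and hand_near_bowl)        # True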
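
The third idea can be illustrated in the same hedged spirit with a toy Bayesian calculation; the plans, activities, and probabilities below are invented and are not COMPASS's actual model. The plan prior and the activity likelihood are combined in a single joint posterior, so the plan context reshapes the activity estimate in a way a sequential, activity-first pipeline cannot.

# Toy joint inference over (plan, activity); all numbers are made up.
plan_prior = {"prepare_meal": 0.5, "clean_table": 0.5}
act_given_plan = {                                       # P(activity | plan)
    "prepare_meal": {"get_bowl": 0.7, "wipe_table": 0.3},
    "clean_table":  {"get_bowl": 0.2, "wipe_table": 0.8},
}
obs_likelihood = {"get_bowl": 0.6, "wipe_table": 0.5}    # P(observation | activity)

joint = {
    (p, a): plan_prior[p] * act_given_plan[p][a] * obs_likelihood[a]
    for p in plan_prior for a in obs_likelihood
}
z = sum(joint.values())
posterior = {pa: v / z for pa, v in joint.items()}       # P(plan, activity | obs)

for act in obs_likelihood:                               # activity marginals
    print(act, round(sum(v for (p, a), v in posterior.items() if a == act), 3))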

Cognitive Computing for Human-Robot Interaction

Author: Mamta Mittal
Publisher: Academic Press
ISBN: 0323856470
Category : Computers
Languages : en
Pages : 420

Book Description
Cognitive Computing for Human-Robot Interaction: Principles and Practices explores the efforts that should ultimately enable society to take advantage of the often-heralded potential of robots to provide economical and sustainable computing applications. The book discusses each of these applications, presents working implementations, and combines them within a coherent and original deliberative architecture for human–robot interaction (HRI). Supported by experimental results, it shows how explicit knowledge management promises to be instrumental in building richer and more natural HRI by pushing pervasive, human-level semantics into the robot's deliberative system. The book will be of special interest to academics, postgraduate students, and researchers working in artificial intelligence and machine learning. Key features: introduces several new contributions to the representation and management of humans in autonomous robotic systems; explores the potential of cognitive computing, robots, and HRI to generate a deeper understanding and a better contribution from robots to society; and engages with the potential repercussions of cognitive computing and HRI in the real world.

New Frontiers in Human–Robot Interaction

Author: Kerstin Dautenhahn
Publisher: John Benjamins Publishing
ISBN: 9027283397
Category : Computers
Languages : en
Pages : 340

Book Description
Human–Robot Interaction (HRI) considers how people can interact with robots in order to enable robots to best interact with people. HRI presents many challenges whose solutions require a unique combination of skills from many fields, including computer science, artificial intelligence, the social sciences, ethology, and engineering, and we have specifically aimed this work at such a multidisciplinary audience. This volume presents new and exciting material from researchers working at the frontiers of HRI. The chapters range from the human aspects of interaction, such as how a robot may understand a human, provide feedback, and act as a social being, to experimental studies and field implementations of human–robot collaboration, covering joint action, robots practically and safely helping people in real-world situations, robots assisting people in rehabilitation, and robots acquiring concepts from communication. This volume reflects current trends in this exciting research field.

Computational Human-Robot Interaction

Author: Andrea Thomaz
Publisher:
ISBN: 9781680832082
Category : Technology & Engineering
Languages : en
Pages : 140

Book Description
Computational Human-Robot Interaction provides the reader with a systematic overview of the field of Human-Robot Interaction over the past decade, with a focus on the computational frameworks, algorithms, techniques, and models currently used to enable robots to interact with humans.

Plan, Activity, and Intent Recognition

Author: Gita Sukthankar
Publisher: Newnes
ISBN: 012401710X
Category : Computers
Languages : en
Pages : 423

Book Description
Plan recognition, activity recognition, and intent recognition together combine and unify techniques from user modeling, machine vision, intelligent user interfaces, human-computer interaction, autonomous and multi-agent systems, natural language understanding, and machine learning. Plan, Activity, and Intent Recognition explains the crucial role of these techniques in a wide variety of applications, including: personal agent assistants; computer and network security; opponent modeling in games and simulation systems; coordination in robots and software agents; web e-commerce and collaborative filtering; dialog modeling; video surveillance; and smart homes. In this book, follow the history of this research area and witness exciting new developments in the field made possible by improved sensors, increased computational power, and new application areas. The book combines basic theory on algorithms for plan and activity recognition with results from recent workshops and seminars; explains how to interpret and recognize plans and activities from sensor data; and provides valuable background knowledge, assembling key concepts into one guide for researchers and students studying these disciplines.

Human-Robot Interaction

Author: Gholamreza Anbarjafari
Publisher: BoD – Books on Demand
ISBN: 178923316X
Category : Computers
Languages : en
Pages : 186

Book Description
This book considers the vocal and visual modalities and human-robot interaction applications through three main aspects: social and affective robotics, robot navigation, and risk event recognition. It can be a very good starting point for scientists who are about to begin research in the field of human-robot interaction.

Cognitive Assistant Supported Human-Robot Collaboration

Author: Cecilio Angulo
Publisher: Elsevier
ISBN: 0443221367
Category : Computers
Languages : en
Pages : 226

Book Description
Cognitive Assistant Supported Human-Robot Collaboration covers the design and development of cognitive assistants in the smart factory era, their application domains and challenges, and the current state of the art in assistance systems with collaborative robotics and IoT technologies, standards, platforms, and solutions. The book also provides a sociotechnical view of collaborative work in human-robot teams, investigating specific methods and techniques for analyzing assistance systems, and gives readers a comprehensive overview of how cognitive assistants function in human-robot teams. It introduces fundamental concepts of cognitive assistants and human-robot collaboration; investigates the optimization capabilities of human cyber-physical systems; discusses the planning and implementation of cognitive assistant projects; and explores concepts and design elements of human collaborative workspaces.

Advances in Human-Robot Interaction

Author: Erwin Prassler
Publisher: Springer Science & Business Media
ISBN: 9783540232117
Category : Technology & Engineering
Languages : en
Pages : 434

Book Description
"Advances in Human-Robot Interaction" provides a unique collection of recent research in human-robot interaction. It covers the basic important research areas ranging from multi-modal interfaces, interpretation, interaction, learning, or motion coordination to topics such as physical interaction, systems, and architectures. The book addresses key issues of human-robot interaction concerned with perception, modelling, control, planning and cognition, covering a wide spectrum of applications. This includes interaction and communication with robots in manufacturing environments and the collaboration and co-existence with assistive robots in domestic environments. Among the presented examples are a robotic bartender, a new programming paradigm for a cleaning robot, or an approach to interactive teaching of a robot assistant in manufacturing environment. This carefully edited book reports on contributions from leading German academic institutions and industrial companies brought together within MORPHA, a 4 year project on interaction and communication between humans and anthropomorphic robot assistants.

Human-Robot Interaction

Author: Daisuke Chugo
Publisher: IntechOpen
ISBN: 9789533070513
Category : Technology & Engineering
Languages : en
Pages : 310

Book Description
Human-robot interaction (HRI) is the study of interactions between people (users) and robots. HRI is multidisciplinary, with contributions from human-computer interaction, artificial intelligence, robotics, speech recognition, and the social sciences (psychology, cognitive science, anthropology, and human factors). A great deal of work has been done in human-computer interaction to understand how a human interacts with a computer; however, very little work has been done on understanding how people interact with robots. If robots are to become our companions, such studies will be required more and more.

Modelling Human Motion

Author: Nicoletta Noceti
Publisher: Springer Nature
ISBN: 3030467325
Category : Computers
Languages : en
Pages : 351

Book Description
The new frontiers of robotics research foresee future scenarios where artificial agents will leave the laboratory to progressively take part in the activities of our daily life. This will require robots to have very sophisticated perceptual and action skills in many intelligence-demanding applications, with particular reference to the ability to seamlessly interact with humans. It will be crucial for the next generation of robots to understand their human partners and, at the same time, to be intuitively understood by them. In this context, a deep understanding of human motion is essential for robotics applications, where the ability to detect, represent, and recognize human dynamics and the capability to generate appropriate movements in response set the scene for higher-level tasks.

This book provides a comprehensive overview of this challenging research field, closing the loop between perception and action, and between human studies and robotics. The book is organized in three main parts. The first part focuses on human motion perception, with contributions analyzing the neural substrates of human action understanding, how perception is influenced by motor control, and how it develops over time and is exploited in social contexts. The second part considers motion perception from the computational perspective, presenting cutting-edge solutions from the Computer Vision and Machine Learning research fields that address higher-level perceptual tasks. The third part takes into account the implications for robotics, with chapters on how motor control is achieved in the latest generation of artificial agents and how such technologies have been exploited to favor human-robot interaction.

The book thus considers the complete human-robot cycle, from an examination of how humans perceive motion and act in the world to models for motion perception and control in artificial agents. In this respect, it provides insights into the perception and action loop in humans and machines, joining together aspects that are often addressed in independent investigations. As a consequence, the book positions itself at the intersection of such different disciplines as Robotics, Neuroscience, Cognitive Science, Psychology, Computer Vision, and Machine Learning. By bridging these research domains, it offers a common reference point for researchers interested in human motion for different applications and from different standpoints, spanning Neuroscience, Human Motor Control, Robotics, Human-Robot Interaction, Computer Vision, and Machine Learning. The chapter 'The Importance of the Affective Component of Movement in Action Understanding' is available open access under a CC BY 4.0 license at link.springer.com.