Interprocedural Dynamic Slicing with Applications to Debugging and Testing
Author: Mariam Kamkar
Publisher:
ISBN: 9789178710652
Category : Computer programs
Languages : en
Pages : 190
Book Description
Abstract: "The needs of maintenance and modification demand that large programs be decomposed into manageable parts. Program slicing is one method for such decomposition. A program slice with respect to a specified variable at some program point consists of those parts of the program that may directly or indirectly affect the value of that variable at the particular program point. This is useful for understanding dependences within programs. A static program slice is computed using static data and control flow analysis and is valid for all possible executions of the program. Static slices are often imprecise, i.e., they contain unnecessarily large parts of the program. Dynamic slices, however, are precise but are valid only for a single execution of the program. Interprocedural dynamic slices can be computed for programs with procedures, and these slices consists of all executed call statements which are relevant for the computation of the specified variable at the specified program point. This thesis presents the first technique for interprocedural dynamic slicing which deals with procedures/functions at the abstract level. This technique first generates summary information for each procedure call (or function application), then represents a program as a summary graph of dynamic dependences. A slice on this graph consists of vertices for all procedure calls of the program that affect the value of a given variable at the specified program point. The amount of information recorded by this method is considerably less than what is needed by previous methods for dynamic slicing, since it only depends on the size of the program's execution tree, i.e., the number of executed procedure calls, which is smaller than a trace of all executed statements. The interprocedural dynamic slicing method is applicable in at least two areas, program debugging and data flow testing. Both of these applications can be made more effective when using dynamic dependence information collected during program execution. We conclude that the interprocedural dynamic slicing method is superior to other slicing methods when precise dependence information for a specific set of input data values at the procedural abstraction level is relevant."
Automated and Algorithmic Debugging
Author: Peter A. Fritzson
Publisher: Springer Science & Business Media
ISBN: 9783540574170
Category : Computers
Languages : en
Pages : 392
Book Description
Debugging has always been a costly part of software development, and many attempts have been made to provide automatic computer support for this task. Automated debugging has seen major developments over the last decade. One successful development is algorithmic debugging, which originated in logic programming but was later generalized to concurrent, imperative, and lazy functional languages. Important advances have also been made in knowledge-based program debugging and in approaches to automated debugging based on static and dynamic program slicing, which builds on dataflow and dependence analysis technology. This is the first collected volume of papers on automated debugging; it presents the latest developments, tutorial papers, and surveys.
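Algorithmic debugging, mentioned above, can be sketched in a few lines: record an execution tree of calls, then ask an oracle whether each call's result is correct; a call whose result is wrong while all of its child calls are correct contains the bug. The following is a minimal, assumption-laden Python illustration of that general idea, with a reference implementation standing in for the human oracle; it is not any particular system described in the book.

```python
# Sketch of Shapiro-style algorithmic debugging: record an execution
# tree, then ask an oracle which results are correct. A call whose
# result is wrong while all of its child calls are correct is reported
# as the location of the bug. All names here are illustrative.

def buggy_sum(xs):
    """Intended to sum a list, but the base case is wrong."""
    def go(xs):
        if not xs:
            result, children = 1, []     # BUG: base case should be 0
        else:
            children = [go(xs[1:])]
            result = xs[0] + children[0][1]
        return (list(xs), result, children)
    return go(xs)

def oracle(args, result):
    return result == sum(args)           # reference implementation as oracle

def locate_bug(node):
    args, result, children = node
    if oracle(args, result):
        return None                      # this subtree is fine
    for child in children:
        found = locate_bug(child)
        if found:
            return found                 # bug is deeper in the tree
    return node                          # wrong result, correct children

args, result, _ = locate_bug(buggy_sum([1, 2, 3]))
print(f"buggy call: buggy_sum({args}) returned {result}")
# -> buggy call: buggy_sum([]) returned 1
```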
The Compiler Design Handbook
Author: Y.N. Srikant
Publisher: CRC Press
ISBN: 1420043838
Category : Computers
Languages : en
Pages : 784
Book Description
Today’s embedded devices and sensor networks are becoming more and more sophisticated, requiring more efficient and highly flexible compilers. Engineers are discovering that many of the compilers in use today are ill-suited to meet the demands of more advanced computer architectures. Updated to include the latest techniques, The Compiler Design Handbook, Second Edition offers a unique opportunity for designers and researchers to update their knowledge, refine their skills, and prepare for emerging innovations. The completely revised handbook includes 14 new chapters addressing topics such as worst-case execution time estimation, garbage collection, and energy-aware compilation. The editors take special care to consider the growing proliferation of embedded devices, as well as the need for efficient techniques to debug faulty code. New contributors provide additional insight to chapters on register allocation, software pipelining, instruction scheduling, and type systems. Written by top researchers and designers from around the world, The Compiler Design Handbook, Second Edition gives designers the opportunity to incorporate and develop innovative techniques for optimization and code generation.
Software Engineering - ESEC/FSE '99
Author: Oskar Nierstrasz
Publisher: Springer
ISBN: 3540481664
Category : Computers
Languages : en
Pages : 536
Book Description
For the second time, the European Software Engineering Conference is being held jointly with the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE). Although the two conferences have different origins and traditions, there is a significant overlap in intent and subject matter. Holding the conferences jointly when they are held in Europe helps to make these thematic links more explicit, and encourages researchers and practitioners to attend and submit papers to both events. The ESEC proceedings have traditionally been published by Springer-Verlag, as they are again this year, but by special arrangement, the proceedings will be distributed to members of ACM SIGSOFT, as is usually the case for FSE. ESEC/FSE is being held as a single event, rather than as a pair of collocated events. Submitted papers were therefore evaluated by a single program committee. ESEC/FSE represents a broad range of software engineering topics in (mainly) two continents, and consequently the program committee members were selected to represent a spectrum of both traditional and emerging software engineering topics. A total of 141 papers were submitted from around the globe. Of these, nearly half were classified as research papers, a quarter as experience papers, and the rest as both research and experience papers. Twenty-nine papers from five continents were selected for presentation and inclusion in the proceedings. Due to the large number of industrial experience reports submitted, we have also introduced this year two sessions on short case study presentations.
Tools and Methods for Analysis, Debugging, and Performance Improvement of Equation-Based Models
Author: Martin Sjölund
Publisher: Linköping University Electronic Press
ISBN: 9175190710
Category : Debugging in computer science
Languages : en
Pages : 243
Book Description
Equation-based object-oriented (EOO) modeling languages such as Modelica provide a convenient, declarative method for describing models of cyber-physical systems. Because of the ease of use of EOO languages, large and complex models can be built with limited effort. However, current state-of-the-art tools do not provide the user with enough information when errors appear or simulation results are wrong. It is of paramount importance that such tools give the user enough information to correct errors or understand where the problems that lead to wrong simulation results are located. However, understanding the model translation process of an EOO compiler is a daunting task that requires knowledge not only of the numerical algorithms that the tool executes during simulation, but also of the complex symbolic transformations being performed. As part of this work, methods have been developed and explored where the EOO tool, an enhanced Modelica compiler, records the transformations during the translation process in order to provide better diagnostics, explanations, and analysis. This information is used to generate better error messages during translation. It is also used to provide better debugging for a simulation that produces unexpected results or where numerical methods fail. Meeting deadlines is particularly important for real-time applications. It is usually essential to identify possible bottlenecks and either simplify the model or give hints to the compiler that enable it to generate faster code. When profiling and measuring execution times of parts of the model, the recorded information can also be used to find out why a particular system model executes slowly. Combined with debugging information, it is possible to find out why a given system of equations is slow to solve, which helps in understanding what can be done to simplify the model. A tool with a graphical user interface has been developed to make debugging and performance profiling easier. Both debugging and profiling have been combined into a single view so that performance metrics are mapped to equations, which are mapped to debugging information. The algorithmic part of Modelica was extended with meta-modeling constructs (MetaModelica) for language modeling. In this context a quite general approach to debugging and compilation from (extended) Modelica to C code was developed. That makes it possible to use the same executable format for simulation executables as for compiler bootstrapping when the compiler written in MetaModelica compiles itself. Finally, a method and tool prototype suitable for speeding up simulations has been developed. It works by partitioning the model at appropriate places and compiling a simulation executable for a suitable parallel platform.
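The transformation-recording idea can be made concrete with a small illustrative sketch (not the actual compiler; all names are hypothetical): every symbolic rewrite of an equation is logged together with the rule applied, so a problem discovered later can be traced back through the chain to the original source equation.

```python
# Illustrative sketch only: each symbolic rewrite of an equation is
# recorded with the rule that produced it, so a problem found in the
# transformed system can be traced back to the source equation.

from dataclasses import dataclass, field

@dataclass
class Equation:
    text: str                     # equation as currently written
    source: str                   # position in the original model
    history: list = field(default_factory=list)   # (rule, before) pairs

    def rewrite(self, rule, new_text):
        self.history.append((rule, self.text))
        self.text = new_text
        return self

def explain(eq):
    """Reconstruct the transformation chain for an error message."""
    lines = [f"{eq.text!r}  (from {eq.source})"]
    for rule, before in reversed(eq.history):
        lines.append(f"  <- {rule}: was {before!r}")
    return "\n".join(lines)

eq = Equation("der(x) = -k * x + u", source="Model.mo:12")
eq.rewrite("solve for der(x)", "der(x) = u - k * x")
eq.rewrite("alias elimination: u = 0", "der(x) = -k * x")

# If simulation of this equation later fails, a debugger could show:
print(explain(eq))
```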
Fifth International Workshop on Program Comprehension
Advances in Computers
Author:
Publisher: Academic Press
ISBN: 0080566758
Category : Science
Languages : en
Pages : 325
Book Description
Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in hardware and software and in computer theory, design, and applications. It has also provided contributors with a medium in which they can examine their subjects in greater depth and breadth than that allowed by standard journal articles. As a result, many articles have become standard references that continue to be of significant, lasting value despite the rapid growth taking place in the field.
Distributed Moving Base Driving Simulators
Author: Anders Andersson
Publisher: Linköping University Electronic Press
ISBN: 9176850900
Category :
Languages : en
Pages : 60
Book Description
Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of new emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver in a driving simulator operates it in the same way as they would in actual traffic, the result is a realistic evaluation of whatever is being investigated. Two advantages of a driving simulator are (1) that you can repeat the same situation several times over a short period of time, and (2) that you can study driver reactions during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model through a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation. This thesis investigates whether and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, where the simulation achieves the same degree of realism as with a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model were transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation. The results show that the distributed simulators we have developed work well overall with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if one gradually increases the delays, a driver in the distributed simulator will change their behavior. The impact of communication latency on a distributed simulator also depends on the simulator application, where different usages of the simulator, i.e., different simulator studies, will have different demands. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior. This leads to the need for a methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented to notify test managers when a model behaves strangely or is driven outside its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (an equation-based, object-oriented programming language) for simulating subsystems is also examined. The Modelica implementation has also been extended with requirements management, and a framework is proposed for automatically evaluating the model in a tool.
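As a purely illustrative aside on the latency discussion, the following toy Python sketch runs two lock-stepped "subsystems" that exchange signals through FIFO queues modelling network delay. The plant and controller are invented, not real vehicle dynamics; the run only shows how growing delay first degrades and then destabilizes the closed loop.

```python
# Purely illustrative: two simulator subsystems run in lock step and
# exchange signals through FIFO queues that model network delay. The
# toy plant integrates its input; the controller chases a target.

from collections import deque

def simulate(delay_steps, n_steps=200, dt=0.01, k=8.0, target=1.0):
    x, u = 0.0, 0.0
    # queues pre-filled so each node reads a delayed value from the other
    x_to_ctrl = deque([0.0] * delay_steps)
    u_to_plant = deque([0.0] * delay_steps)
    for _ in range(n_steps):
        x_to_ctrl.append(x)
        u_to_plant.append(u)
        u = k * (target - x_to_ctrl.popleft())   # controller node
        x += dt * u_to_plant.popleft()           # plant node: dx/dt = u
    return x

# With no delay the loop settles near the target; the largest delay
# here destabilizes it, changing the "driving" behaviour entirely.
for d in (0, 5, 20):
    print(f"delay {d:2d} steps -> x after 2 s = {simulate(d):.3f}")
```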
Program Comprehension
Author: IEEE Computer Society
Publisher: Institute of Electrical & Electronics Engineers (IEEE)
ISBN: 9780769511313
Category : Computers
Languages : en
Pages : 342
Book Description
Based on the 9th IEEE International Workshop on Program Comprehension (IWPC 2001), this volume covers such topics as: software quality analysis; architecture recovery; reverse engineering; tools and environments; program comprehension studies; metrics and slicing; and clustering techniques.
Robust Stream Reasoning Under Uncertainty
Author: Daniel de Leng
Publisher: Linköping University Electronic Press
ISBN: 9176850137
Category :
Languages : en
Pages : 234
Book Description
Vast amounts of data are continually being generated by a wide variety of data producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, the ability to make sense of these streams of data through reasoning is of great importance. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in physical environments. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and their refinement an important problem. Many contemporary approaches to stream reasoning focus on the issue of querying data streams in order to generate higher-level information by relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this work, we integrate techniques for logic-based stream reasoning with the adaptive generation of the state streams over which the reasoning is performed. This combination deals with both the challenge of reasoning over uncertain streaming data and the problem of robustly managing streaming data and their refinement. The main contributions of this work are (1) a logic-based temporal reasoning technique based on path checking under uncertainty that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams over which spatio-temporal stream reasoning is performed; and (3) integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system, by combining reasoning over and reasoning about streams, can robustly perform stream reasoning, even when the availability of streaming resources changes.
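Formula progression is one standard building block of logic-based stream reasoning; the following minimal Python sketch progresses a propositional LTL formula through a stream of states, one state at a time. It illustrates the general technique only: it handles neither the uncertainty nor the spatial relations that the thesis adds, and all names are invented.

```python
# Minimal sketch of LTL formula progression over a state stream.
# Formulas are nested tuples; a state is the set of true propositions.
# True/False mean the stream prefix has already decided the formula;
# anything else is a residual formula awaiting future states.

def progress(f, state):
    if isinstance(f, bool):
        return f
    op = f[0]
    if op == "prop":
        return f[1] in state
    if op == "not":
        r = progress(f[1], state)
        return (not r) if isinstance(r, bool) else ("not", r)
    if op == "and":
        a, b = progress(f[1], state), progress(f[2], state)
        if a is False or b is False: return False
        if a is True: return b
        if b is True: return a
        return ("and", a, b)
    if op == "or":
        a, b = progress(f[1], state), progress(f[2], state)
        if a is True or b is True: return True
        if a is False: return b
        if b is False: return a
        return ("or", a, b)
    if op == "next":
        return f[1]
    if op == "always":       # G f  ==  prog(f) and X (G f)
        return progress(("and", f[1], ("next", f)), state)
    if op == "eventually":   # F f  ==  prog(f) or X (F f)
        return progress(("or", f[1], ("next", f)), state)
    raise ValueError(f"unknown operator: {op}")

# Monitor "always (alarm implies eventually ack)" over a short stream.
alarm, ack = ("prop", "alarm"), ("prop", "ack")
f = ("always", ("or", ("not", alarm), ("eventually", ack)))
for state in [{"alarm"}, set(), {"ack"}]:
    f = progress(f, state)
    print(sorted(state), "->", f)
# False at any step would mean the property is already violated; a
# residual formula means the verdict depends on future states.
```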