Abstracts of Papers
Session: Real-Time Systems    
Hierarchical Scheduling with Ada 2005   J. A. Pulido, S. Urueña,
J. Zamorano, T. Vardanega
and J. A. de la Puente
(Universidad Politécnica de Madrid, Spain
and Università di Padova, Italy)
 

Hierarchical scheduling is a basic technique for achieving temporal isolation between applications in high-integrity systems when an integrated approach is chosen over traditional federation. While comparatively heavyweight approaches to hierarchical scheduling have prevailed until now, the new scheduling features of Ada 2005 enable lighter-weight techniques to be used. This is expected to increase the efficiency and flexibility of hierarchical scheduling, thus enabling new ways of developing critical applications in Ada. The paper explores the new opportunities opened by Ada 2005 and proposes some concrete techniques for implementing hierarchical scheduling in the new version of the language.
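One of the Ada 2005 features in question is priority-specific dispatching, which allows different priority bands to run under different dispatching policies. The sketch below is purely illustrative and not taken from the paper; the band boundaries, priorities and task names are invented.

--  Priorities 10 .. 15 are dispatched round-robin; all remaining priorities
--  default to FIFO_Within_Priorities (Ada 2005, RM D.2.2).
pragma Priority_Specific_Dispatching (Round_Robin_Within_Priorities, 10, 15);

procedure Partitioned_Demo is

   task Background_Worker is
      pragma Priority (12);   --  lives in the round-robin band
   end Background_Worker;

   task Control_Loop is
      pragma Priority (20);   --  lives in the FIFO band
   end Control_Loop;

   task body Background_Worker is
   begin
      loop
         delay 0.01;          --  placeholder for background processing
      end loop;
   end Background_Worker;

   task body Control_Loop is
   begin
      loop
         delay 0.005;         --  placeholder for periodic control work
      end loop;
   end Control_Loop;

begin
   null;                      --  the environment task lets both bands run
end Partitioned_Demo;

Each priority band can then be seen as hosting one application of a two-level hierarchy, with the band boundaries acting as the partitioning mechanism.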

 

     

A Comparison of Ada and Real-Time Java for Safety-Critical Applications
 
B. M. Brosgol and A. Wellings
(AdaCore, USA and University of York, UK)
 

Ada has long been used for developing safety-critical systems, and the upcoming Ada 2005 language revision extends this support. For various reasons Java has not been a serious choice in this domain. However, recent work based on the Real-Time Specification for Java promises to make Java technology a credible alternative. This paper discusses and compares Ada and the RTSJ with respect to the requirements for safety-critical systems, in particular how they can serve as the basis for subsets that can be used for developing safety-certified software.

 

   
POSIX Trace Based Behavioural Reflection   F. Valpereiro and L. M. Pinho
(Polytechnic Institute of Porto, Portugal)

Traditional Real-Time Operating Systems (RTOS) are not designed to accommodate application-specific requirements. They address a general case, and the application must co-exist with any limitations imposed by such a design. For modern real-time applications this limits the quality of the services offered to the end-user. Research in this field has shown that it is possible to develop dynamic systems where adaptation is the key to success. However, adaptation requires full knowledge of the system state. To overcome this, we propose a framework to gather data and interact with the operating system, extending the traditional POSIX trace model with a partial reflective model. Such a combination preserves the semantics of the trace mechanism while creating a powerful platform for developing new dynamic systems, with little impact on the system and without complex changes to the kernel source code.

 

Session: Static Analysis    
Static Detection of Access Anomalies in Ada95   B. Burgstaller, J. Blieberger
and R. Mittermayr
(The University of Sydney, Australia,
TU Vienna, Austria and ARC Seibersdorf research GmbH, Austria)
 

In this paper we present data flow frameworks that are able to detect access anomalies in Ada multi-tasking programs. In particular, our approach finds all possible non-sequential accesses to shared non-protected variables. The algorithms employed are very efficient. Because our approach is conservative, it may report false positives.
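As an illustration of the kind of anomaly targeted (the fragment is ours, not the authors'), the following program lets two tasks update a shared, non-protected variable without any synchronisation, so their accesses may interleave non-sequentially:

procedure Race_Example is

   Shared : Integer := 0;        --  non-protected variable used by both tasks

   task Increment;
   task Decrement;

   task body Increment is
   begin
      for I in 1 .. 1_000 loop
         Shared := Shared + 1;   --  unsynchronised read-modify-write
      end loop;
   end Increment;

   task body Decrement is
   begin
      for I in 1 .. 1_000 loop
         Shared := Shared - 1;   --  may interleave with Increment's updates
      end loop;
   end Decrement;

begin
   null;   --  a conservative static analysis flags the accesses to Shared
end Race_Example;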

 

   
One Million (LOC) and Counting: Static Analysis for Errors and Vulnerabilities in the Linux Kernel Source Code   P. T. Breuer and S. Pickin
(Universidad Carlos III de Madrid, Spain)

This article describes an analysis tool aimed at the C code of the Linux kernel, first described as a prototype in this forum in 2004. Its continuing maturation means that it is now capable of treating millions of lines of code in a few hours on very modest platforms. It detects about two uncorrected deadlock situations per thousand C source files, or per million lines of source code, in the Linux kernel, and three accesses to freed memory. In distinction to model-checking techniques, the tool uses a configurable “3-phase” programming logic to perform its analysis. It carries out several different analyses simultaneously.

 

   
Bauhaus - a tool suite for program analysis and reverse engineering   A. Raza, G. Vogel and E. Plödereder
(Universität Stuttgart, Germany)
 

The maintenance and evolution of critical software with high reliability requirements is an extremely demanding, time-consuming and expensive task. Errors introduced by ad-hoc changes might have disastrous effects on the system and must be prevented under all circumstances, which requires an understanding of the details of the source code and the system design. This paper describes Bauhaus, a comprehensive tool suite that supports program understanding and reverse engineering on all layers of abstraction, from source code to architecture.

 

Session: Verification    
SPARK Annotations within Executable UML   D. Curtis
(AWE plc, UK)

The emergence of the Unified Modelling Language (UML) in the software industry has raised the question of whether it can be used to complement existing development techniques for high-integrity systems. Work is in progress to develop a code generator for SPARK Ada from the executable UML (xUML) subset. This paper concentrates on the work completed so far, which enables SPARK annotations to be used within xUML models for a prototype code generator. The code generated by this prototype has been successfully analysed using the SPARK toolset.
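For readers unfamiliar with them, SPARK annotations are stylised "--#" comments that the SPARK Examiner analyses together with the Ada text. The hand-written fragment below is merely indicative and is not output of the prototype generator:

package Counters is

   subtype Counter is Integer range 0 .. 999;

   procedure Increment (X : in out Counter);
   --# derives X from X;
   --# pre  X < Counter'Last;
   --# post X = X~ + 1;
   --  The derives clause gives the data-flow relation, pre/post are proof
   --  contracts, and X~ denotes the initial value of the in out parameter.

end Counters;

package body Counters is

   procedure Increment (X : in out Counter) is
   begin
      X := X + 1;
   end Increment;

end Counters;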

 

   
Runtime Verification of Java Programs for Scenario-Based Specifications   L. Xuandong, W. Linzhang,
Q. Xiaokang, L. Bin, Y. Jiesong,
Z. Jianhua and Z. Guoliang
(Nanjing University, China)
 

In this paper, we use UML sequence diagrams as scenario-based specifications, and give a solution to the runtime verification of Java programs for safety consistency and mandatory consistency. Safety consistency requires that any forbidden scenario described by a given sequence diagram never happens during the execution of the program; mandatory consistency requires that if a reference scenario described by the given sequence diagrams occurs during the execution of the program, it must immediately be followed by a scenario described by the other given sequence diagram. In the solution, we first instrument the program under verification so as to gather the execution traces related to a given scenario-based specification; then we drive the instrumented program with random test cases so as to generate execution traces; finally we check whether the collected traces satisfy the given specification. Our work leads to a testing tool that proceeds in a fully automatic, push-button fashion.

 

Session: Applications    
Secure Execution of Computations in Untrusted Hosts   S. Narayanan, M. Kandemir,
R. Brooks and I. Kolcu
(Pennsylvania State University, USA,
Clemson University, USA and UMIST, UK)
 

The proliferation of distributed computing platforms, at both small and large scales, and of mobile applications makes it important to protect remote hosts (servers) from mobile applications and mobile applications from remote hosts. This paper proposes and evaluates a solution to the latter problem for applications based on linear computations that involve scalar as well as array arithmetic. We demonstrate that, for certain classes of applications, it is possible to use an optimizing compiler to automatically transform code structure and data layout so that an application can safely be executed on an untrusted remote host without being reverse engineered.

 

   
A Systematic Approach to Developing Safe Tele-operated Robots   D. Alonso, P. Sanchez,
B. Alvarez and J. A. Pastor
(Universidad Politécnica de Cartagena, Spain)
 

Tele-operated service robots are used to extend human capabilities in hazardous and/or inaccessible environments. Their use in our society is increasing rapidly, which is why it is vitally important that their design, installation and operation follow the strictest possible process, so that the risk of accidents is minimised. However, there is no process or methodology that guides the full cycle of identification, evaluation, proposal of solutions and reuse of safety requirements, although hard work is being done, especially by the standardisation committees. It is also very difficult even to find examples of safety requirements identification and use in the literature. This paper presents the engineering process we have followed to obtain the safety requirements in one of the robots of the EFTCoR project and the way these requirements have affected the architecture of the system, with a practical example: a crane robot for ship hull blasting.

 

   
Towards developing multi-agent systems in Ada   G. Aranda, J. Palanca,
A. Espinosa, A. Terrasa
and A. García-Fornes
(Technical University of Valencia, Spain)
 

Agent-oriented technology is a rising paradigm for developing quality software in complex domains. Currently, no Ada interface or middleware exists for the development of agent-based applications. In this paper, an Ada binding for developing agent-based and multi-agent applications is proposed. This binding is compatible with an existing open-source agent platform named SPADE.
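Purely as an indication of what an agent-oriented Ada interface can look like, the hypothetical specification below sketches agent life-cycle and messaging operations; it does not reproduce the actual API of the SPADE binding.

with Ada.Strings.Unbounded;  use Ada.Strings.Unbounded;

package Agents is

   type Message is record
      Sender  : Unbounded_String;
      Content : Unbounded_String;
   end record;

   type Agent is abstract tagged limited private;

   --  Life-cycle and messaging operations an agent platform typically offers
   --  (names and profiles are invented for illustration).
   procedure Register (Self : in out Agent; Platform : String) is abstract;
   procedure Send     (Self : in out Agent; To : String; M : Message)
      is abstract;

   --  Behaviour hook that a concrete agent overrides to react to messages.
   procedure On_Message (Self : in out Agent; M : Message) is abstract;

private

   type Agent is abstract tagged limited record
      Name : Unbounded_String;
   end record;

end Agents;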

 

Session: Reliability    
A Software Reliability Model Based on a Geometric Sequence of Failure Rates   S. Wagner and H. Fischer
(Technische Universität München, Germany)

Software reliability models are an important tool in quality management and release planning. There is a large number of different models that often exhibit strengths in different areas. This paper proposes a model that is based on a geometric sequence (or progression) of the failure rates of faults. This property of the failure process was observed in practice, at Siemens among others, and led to the development of the proposed model. The model is described in detail and evaluated using standard criteria. Most importantly, it performs consistently well over several projects in terms of its predictive validity.
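In general terms, and with a parameterisation that need not match the paper's (the symbols below are ours), a geometric sequence of per-fault failure rates and the resulting overall failure intensity can be written as

\lambda_i = \lambda_1 \, q^{\,i-1}, \qquad 0 < q < 1, \quad i = 1, \dots, N,
\qquad\text{so}\qquad
\lambda = \sum_{i=1}^{N} \lambda_i = \lambda_1 \, \frac{1 - q^{N}}{1 - q},

where \lambda_1 is the failure rate of the most frequently triggered fault, q the common ratio, and N the number of faults still present.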

 

   
Adaptive Random Testing Through Iterative Partitioning   T. Y. Chen, D. H. Huang
and Z. Q. Zhou
(Swinburne University of Technology and University of Wollongong, Australia)
 

Random testing (RT) is a fundamental and important software testing technique. Based on the observation that failure-causing inputs tend to be clustered together in the input domain, Adaptive Random Testing (ART) has been proposed to improve the fault-detection capability of RT. ART uses the location information of previously executed test cases to enforce an even spread of random test cases over the entire input domain. There have been several implementations (algorithms) of ART based on different intuitions and principles. Owing to the nature of the principles adopted, these implementations have their own advantages and disadvantages. The majority of them require intensive computation to ensure the generation of evenly spread test cases, and hence incur high overhead. In this paper, we propose the notion of iterative partitioning to reduce the amount of computation while retaining a high fault-detection capability. As a result, the cost-effectiveness of ART is improved.
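The one-dimensional sketch below is a deliberate simplification of the iterative-partitioning idea, not the paper's algorithm: grid cells that contain or neighbour an executed test case are excluded, and the grid is refined whenever no candidate cell remains.

with Ada.Numerics.Float_Random;  use Ada.Numerics.Float_Random;
with Ada.Text_IO;

procedure ART_IP_Sketch is

   --  Simplified 1-D domain [0.0, 1.0); the paper addresses general domains.
   Max_Tests : constant := 20;
   Tests     : array (1 .. Max_Tests) of Float;
   Count     : Natural := 0;
   Gen       : Generator;

   --  Index of the grid cell holding X when the domain is split into Cells.
   function Cell_Of (X : Float; Cells : Positive) return Natural is
      C : constant Natural := Natural (Float'Floor (X * Float (Cells)));
   begin
      return Natural'Min (C, Cells - 1);      --  guard against X = 1.0
   end Cell_Of;

   function Next_Test return Float is
      Cells : Positive := 4;                  --  initial grid resolution
   begin
      loop
         declare
            Blocked    : array (0 .. Cells - 1) of Boolean := (others => False);
            Candidates : Natural := 0;
            Target     : Natural;
            Seen       : Natural := 0;
         begin
            --  Block every cell containing a previous test case, plus its
            --  immediate neighbours, to enforce an even spread.
            for I in 1 .. Count loop
               declare
                  C : constant Natural := Cell_Of (Tests (I), Cells);
               begin
                  Blocked (C) := True;
                  if C > 0 then Blocked (C - 1) := True; end if;
                  if C < Cells - 1 then Blocked (C + 1) := True; end if;
               end;
            end loop;

            for I in Blocked'Range loop
               if not Blocked (I) then Candidates := Candidates + 1; end if;
            end loop;

            if Candidates = 0 then
               Cells := Cells * 2;            --  refine the partition, retry
            else
               --  Pick a random unblocked cell, then a random point in it.
               Target := Natural'Min (Candidates,
                 1 + Natural (Float'Floor (Random (Gen) * Float (Candidates))));
               for I in Blocked'Range loop
                  if not Blocked (I) then
                     Seen := Seen + 1;
                     if Seen = Target then
                        return (Float (I) + Random (Gen)) / Float (Cells);
                     end if;
                  end if;
               end loop;
            end if;
         end;
      end loop;
   end Next_Test;

begin
   Reset (Gen);
   for I in 1 .. Max_Tests loop
      Count := Count + 1;
      Tests (Count) := Next_Test;             --  would be executed as a test
      Ada.Text_IO.Put_Line (Float'Image (Tests (Count)));
   end loop;
end ART_IP_Sketch;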

 

   
Run-Time Detection of Tasking Deadlocks in Real-Time Systems with the Ada 95 Annex of Real-Time Systems   J. Cheng
(Saitama University, Japan)

No existing detection method or tool for Ada 95 programs can detect all types of tasking deadlocks in a program that uses the Ada 95 Real-Time Systems Annex. This paper investigates the synchronization waiting relations in Ada 95 programs that use the annex, extends the Task-Wait-For Graph representation to deal with the synchronization waiting relations defined in the annex, gives necessary and sufficient conditions for the occurrence of tasking deadlocks, and presents a run-time tasking deadlock detector that we have implemented for real-time systems using the annex.

 

Session: Compilers    
Abstract Interface Types in GNAT: Conversions, Discriminants, and C++   J. Miranda and E. Schonberg
(University of Las Palmas de Gran Canaria, Spain and AdaCore, USA)
 

Ada 2005 Abstract Interface Types provide a limited and practical form of multiple inheritance of specifications. In this paper we cover the following aspects of their implementation in the GNAT compiler: interface type conversions, the layout of variable-sized tagged objects with interface progenitors, and the use of the GNAT compiler for interfacing with C++ classes with compatible inheritance trees.
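As background, and independently of the implementation details discussed in the paper, the fragment below shows the Ada 2005 constructs involved: two interface types, a tagged type with both as progenitors, and a class-wide interface view through which calls dispatch.

package Devices is

   type Readable is interface;
   function Read (D : Readable) return Integer is abstract;

   type Writable is interface;
   procedure Write (D : in out Writable; V : Integer) is abstract;

   --  A concrete type with two interface progenitors.
   type Register is new Readable and Writable with record
      Value : Integer := 0;
   end record;

   overriding function  Read  (D : Register) return Integer;
   overriding procedure Write (D : in out Register; V : Integer);

   --  Class-wide interface view: accepts any type implementing Writable.
   procedure Clear (D : in out Writable'Class);

end Devices;

package body Devices is

   overriding function Read (D : Register) return Integer is
   begin
      return D.Value;
   end Read;

   overriding procedure Write (D : in out Register; V : Integer) is
   begin
      D.Value := V;
   end Write;

   procedure Clear (D : in out Writable'Class) is
   begin
      Write (D, 0);   --  dispatching call through the interface
   end Clear;

end Devices;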

 

   
Using Mathematics to Improve Ada Compiled Code   W. D. Maurer
(George Washington University, USA)

We have developed two mathematical techniques which, used together, can increase the speed of compiled Ada code in two ways. We can eliminate most subprogram call overhead, which involves stack pointer adjustment when a subprogram is called and when it returns. We can also eliminate most static scoping overhead, which requires the use of multiple base registers when procedures are nested. In particular, all of this overhead can be eliminated in the absence of recursion. One of our techniques is based on an analogy with a variant of the well-known critical path method. The other is based on a new result in directed graph theory, which has many potential applications in addition to the one presented here.

 

Session: Distributed Systems    
Replication-Aware Transactions: How to roll a transaction over failures   M. Sharifi and H. Salimi
(Iran University of Science and Technology, Iran)

The CORBA standard adopted by the OMG supports reliability through two orthogonal mechanisms: replication (by means of the FT-CORBA standard) and transactions (with the aid of the OTS standard). Replication represents a roll-forward approach in which a failed request is redirected to another replica that is alive. A transaction, on the other hand, represents a roll-back approach in which the system reverts to its last committed state upon any failure. Current research shows that integrating these two approaches is essential in 3-tier systems, wherein replication protects system processes from failures in the middle tier and the transaction concept ensures data consistency in the data tier. All proposed methods for reconciling these two concepts agree that the transactional approach suffers from poor performance due to the use of the two-phase commit protocol. In this paper we introduce a new replication-aware transaction model based on replicated objects. This kind of transaction can jump over the failures that the replicas come across without rolling the whole transaction back (we call this roll-over). Instead, the failed objects are removed from the replica list and re-created somewhere else if needed. Implementation results of our model show better transaction throughput in comparison with known approaches.

 

   
The Arbitrated Real-Time Protocol (AR-TP): A Ravenscar Compliant Communication Protocol for High-Integrity Distributed Systems   S. Urueña, J. Zamorano,
D. Berjón, J. Pulido,
and J. A. de la Puente
(Technical University of Madrid, Spain)
 
A new token-passing algorithm called AR-TP, which avoids the nondeterminism of some networking technologies, is presented. This protocol - based on RT-EP, a research protocol also built on transmission control techniques - allows schedulability analysis of the network, enabling the use of standard Ethernet hardware for hard real-time behavior while adding congestion management. It is specially designed for high-integrity distributed hard real-time systems: it is fully written in Ada and takes advantage of some of the new Ada 2005 features, such as the Ravenscar Profile.
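By way of illustration only (this is not AR-TP source code), the Ravenscar profile can be enforced in Ada 2005 with a single configuration pragma, under which a protocol task such as a token handler follows the usual pattern of a library-level task with delay-until timing:

pragma Profile (Ravenscar);

with Ada.Real_Time;

package Token_Layer is

   task Token_Handler is
      pragma Priority (10);   --  hypothetical static priority
   end Token_Handler;

end Token_Layer;

package body Token_Layer is

   use Ada.Real_Time;

   task body Token_Handler is
      Period : constant Time_Span := Milliseconds (10);
      Next   : Time := Clock;
   begin
      loop
         --  ... arbitration and token forwarding would go here ...
         Next := Next + Period;
         delay until Next;
      end loop;
   end Token_Handler;

end Token_Layer;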

 

   
Interchangeable scheduling policies in real-time middleware for distribution   J. L. Campos, J. J. Gutierrez
and M. G. Harbour
(Universidad de Cantabria, Spain)
 

When a middleware layer is designed for providing semi-transparent distribution facilities to real-time applications, a trade-off must be made between the expressiveness and control capabilities of the real-time parameters used, and the simplicity of usage. Middleware specifications such as RT-CORBA or Ada’s Distributed Systems Annex (DSA) rely on the use of priorities to map the timing requirements of the application, thus restricting the possible scheduling policies. This paper presents a generic technique to express complex scheduling and timing parameters of distributed transactions, allowing real-time middleware implementations to change their scheduling policies for both the processing nodes and the networks. The technique has been tested in an implementation of Ada’s DSA, providing two interchangeable policies: a fixed-priority scheduler, and a complex contract-based flexible scheduler.

 

   


The organizers thank the exhibitors and supporters of the conference.


Springer Verlag published the proceedings of the conference as volume 4006 of Lecture Notes in Computer Science.