An Application of SSP Method to Component-Based Development Process 

Vaidas  Giedrimas 

The Structural Synthesis of Programs (SSP) method is based on the idea that programs can be constructed taking into account only their structural properties. This method has been successfully used in the Priz, XpertPriz and NUT systems, and there are also applications of SSP to the generation of Java-based programs. However, the SSP method has not yet been used to generate software from components. This presentation discusses the main idea of applying SSP to the Component-Based Development (CBD) process. The main contribution of this work is a set of proposed scenarios for applying SSP to the CBD process.

Security for Ad Hoc Sensor Networks by Pre-Loaded Time-Sensitive Keys 

Vadim  Kimlaychuk 

Securing the information in ad hoc networks is a non-trivial task due to the nature of their structure. Radio communication between nodes requires encryption of the messages, and the physical deployment of the nodes in a real environment requires the nodes to be tamper-protected. It is very hard to comply with both demands, because tamper protection is, in most cases, achieved by using asymmetric encryption, which has very high power consumption. On the other hand, symmetric encryption is very sensitive to key disclosure: it is very hard to distribute a key securely, and harder still to keep it secret during operation time.
This article describes the mechanism of key distribution/storage for the ad hoc sensor network built on MICA2/MICA2DOT platform.
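As a rough illustration of the pre-loaded time-sensitive key idea (the function names, slot length and hash-chain construction below are illustrative assumptions, not the scheme described in the article), each node could be pre-loaded with a chain of symmetric keys, each valid only during its own time slot, so that cheap symmetric operations suffice at run time:

```python
import hashlib
import hmac

SLOT_SECONDS = 3600  # hypothetical validity window of one key

def preload_key_chain(master_secret: bytes, slots: int) -> list[bytes]:
    """Derive a chain of per-slot keys from a pre-loaded master secret.

    Keys are loaded before deployment; each key is used only during
    its time slot, limiting what a captured node can reveal.
    """
    keys = []
    k = master_secret
    for slot in range(slots):
        k = hashlib.sha256(k + slot.to_bytes(4, "big")).digest()
        keys.append(k)
    return keys

def current_key(keys: list[bytes], now_seconds: int) -> bytes:
    """Select the key for the current time slot."""
    return keys[(now_seconds // SLOT_SECONDS) % len(keys)]

def mac_message(key: bytes, payload: bytes) -> bytes:
    """Authenticate a sensor message with the slot key (symmetric, cheap)."""
    return hmac.new(key, payload, hashlib.sha256).digest()
```

A node with a synchronized clock picks `current_key` for each outgoing message; a receiver holding the same pre-loaded chain recomputes the MAC to verify it.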

Distribution of Synthesized Programs 

Vahur Kotkas

In the framework of the Structural Synthesis of Programs, programs are automatically composed from existing modules. These modules must be preprogrammed, and the task of the program synthesizer is to locate and call the modules in a proper order so that the user-specified task is completed. In my talk I will look at the possibilities of parallelizing the synthesized programs to enable their execution on distributed platforms such as computer clusters or grids.
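A minimal sketch of the parallelization idea (the module names and the level-by-level scheduling strategy are hypothetical, not the talk's actual approach): since the synthesizer knows each module's inputs and outputs, modules whose inputs are already computed are independent and can be dispatched concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical synthesized program: each module maps a set of required
# inputs to a set of produced outputs via a preprogrammed function.
modules = {
    "read_a": (set(),       {"a"}, lambda env: {"a": 2}),
    "read_b": (set(),       {"b"}, lambda env: {"b": 3}),
    "mul":    ({"a", "b"},  {"p"}, lambda env: {"p": env["a"] * env["b"]}),
    "square": ({"a"},       {"s"}, lambda env: {"s": env["a"] ** 2}),
    "sum":    ({"p", "s"},  {"r"}, lambda env: {"r": env["p"] + env["s"]}),
}

def run_parallel(modules):
    env, pending = {}, dict(modules)
    with ThreadPoolExecutor() as pool:
        while pending:
            # Modules whose inputs are all available do not depend on
            # each other, so they form one parallel execution level.
            ready = [n for n, (ins, _, _) in pending.items()
                     if ins <= env.keys()]
            if not ready:
                raise ValueError("unsatisfiable dependencies")
            futures = [pool.submit(pending[n][2], dict(env)) for n in ready]
            for n, f in zip(ready, futures):
                env.update(f.result())
                del pending[n]
    return env
```

Here `read_a` and `read_b` run in parallel, then `mul` and `square`, then `sum`; on a cluster or grid the thread pool would be replaced by remote workers.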

Test Scripts Generation for Software Black-Box Tests 

Andres  Kull 

The presentation concentrates on the black-box testing of software components that have asynchronous message-based interfaces towards their environment. This kind of software is typical for embedded systems, where software components (tasks, processes, subsystems or hardware components) communicate with each other by means of asynchronous messaging. The same paradigm is also widespread in distributed applications, where the software components are distributed over the network and communicate via asynchronous messaging.

The task of black-box testing (alternatively called behavioural or functional testing) is to verify that the software conforms to its specification. The software is viewed as a black box that transforms inputs into outputs according to the specification, without any information about how the software is implemented.

In black-box testing, the software under test (SUT) is executed against its simulated environment, which acts as a simulator and tester at the same time: it provides input events to the SUT and receives output events from it. In automated black-box testing there exists test code or a script on the tester side that generates predefined input sequences for the SUT and automatically validates the SUT output events against predefined conditions.

Script-based testing has two main weaknesses:

1. Testers are rarely able to write a sufficient amount of test scripts to achieve "good enough" test coverage, because script writing is a time-consuming task whose work effort is about the same as for the SUT implementation.

2. When the SUT changes in the maintenance phase, the test scripts must also be changed to reflect those changes. Maintenance costs increase at the same pace as the amount of scripts grows.

These weaknesses are the main reasons why black-box test automation projects often fail or do not deliver the expected benefits.

The paper introduces the generation of test scripts from a SUT model. The model specifies the SUT external behaviour that can be controlled and observed via its interface. Test script generation makes it possible to produce a sufficient amount of test scripts to reach good enough test coverage. The work effort in the maintenance phase is reduced significantly, because instead of maintaining a huge amount of test scripts, the tester only needs to maintain the SUT model. If the SUT external behaviour changes, it is rather easy to update the model correspondingly and regenerate all the test scripts. The proposed approach removes the main weaknesses of script-based black-box testing described above and therefore significantly increases both tester productivity and test quality.
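As a sketch of how scripts might be derived from such a model (the message names and the transition-coverage strategy below are illustrative assumptions, not the formalism of the paper), the SUT's external behaviour can be modelled as a state machine over asynchronous messages, and one script generated per transition:

```python
from collections import deque

# Hypothetical SUT model: (state, input_msg) -> (next_state, expected_output).
MODEL = {
    ("idle", "CONNECT"):         ("connected", "CONNECT_ACK"),
    ("connected", "DATA"):       ("connected", "DATA_ACK"),
    ("connected", "DISCONNECT"): ("idle", "DISCONNECT_ACK"),
}

def generate_scripts(model, start="idle"):
    """For each transition, emit the shortest input sequence from the
    start state that exercises it, paired with the expected outputs
    (i.e. transition coverage)."""
    # Breadth-first search for a shortest path to each state.
    paths = {start: []}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for (src, msg), (dst, _) in model.items():
            if src == s and dst not in paths:
                paths[dst] = paths[s] + [(src, msg)]
                queue.append(dst)
    scripts = []
    for (src, msg) in model:
        steps = paths[src] + [(src, msg)]
        # A script is a list of (input to send, output to expect).
        scripts.append([(m, model[(s, m)][1]) for s, m in steps])
    return scripts
```

Each generated script can then drive the tester side: send each input, wait for the expected output, and fail on any mismatch. After a model change, all scripts are simply regenerated.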

A lot of theoretical and technical issues must be solved to make the vision defined above work. The presentation outlines the assumptions and limitations that define the exact research scope.

The key research issues discussed in the presentation are the following:

1. Choice of the SUT model presentation formalism, which must be user-friendly, follow standards, have enough expressive power, and at the same time be formal enough for automatic test script generation.

2. Choice of the test data presentation formalism.

3. Choice of test script generation strategies, taking into account criteria such as test coverage, length of test sequences, and ability to reveal faults.

Designing Ontologies by Using UML 

Martin  Linde 

The main problem addressed in this work is to develop a solution for performing preliminary ontology design using UML (mostly UML class diagrams). Currently there are many mature, professional UML tools (Rational Rose, MS Visio, Exigen Business Modeler, etc.), and many IT professionals do system analysis and modelling using these tools. At the same time, knowledge representation, which uses ontologies, is becoming more and more popular. Ontologies are also the main technology for defining metadata in the Semantic Web (a future vision of the World Wide Web). There are some ontology editors that have become quite well known (Protégé, SMORE). There are different languages for designing ontologies, of which the most developed and popular is OWL (Web Ontology Language). The main problem is that considerable investments have been made in UML tools and training, and a large amount of data is stored as UML models. It would therefore be very useful and profitable to perform preliminary ontology design using UML tools, and then, only if necessary, to add features not provided in UML using ontology editors, i.e. Semantic Web tools.

In order to achieve this, an investigation and analysis of the ODM (Ontology Definition Metamodel) has been performed and, based on that, a formal mapping (transformation) between the UML and OWL metamodels has been defined using the model transformation language MOLA. MOLA is a new graphical model transformation language designed at IMCS, University of Latvia. The basic idea of MOLA is to merge traditional structured programming as a control structure with pattern-based transformation rules. The key language element is the graphical loop concept. The main goal of MOLA is to describe model transformations in a natural and easily readable way.

This work presents both a general solution and a concrete solution based on the chosen tools.

Exigen Business Modeler was chosen as the UML tool because it is a universal modelling tool based on metamodels, which means that a modelling language can be defined for it via a metamodel (for example, the UML 2.0 metamodel). Protégé was chosen as the ontology design tool because it is today considered the most advanced tool for ontologies (this opinion comes from many people involved in knowledge representation and Semantic Web research, as well as from the W3C, the World Wide Web Consortium).

Within the frame of this work, software that exports the transformed model to *.owl files has been designed. These files can then be imported into Semantic Web design tools to proceed with ontology design. Since the MOLA interpreter uses a MySQL database as its repository, the software queries the transformed model data from this database, translates the data to OWL syntax and writes it to *.owl text files. This translation software is implemented using MS Visual Studio .NET.
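A miniature illustration of the final translation step (the class names and the tiny subset of OWL emitted are hypothetical, and the actual software reads the transformed model from the MySQL repository rather than from in-memory lists): UML classes map to `owl:Class` elements and generalizations to `rdfs:subClassOf`:

```python
# Minimal OWL/XML header with the standard RDF, RDFS and OWL namespaces.
OWL_HEADER = (
    '<?xml version="1.0"?>\n'
    '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
    '         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"\n'
    '         xmlns:owl="http://www.w3.org/2002/07/owl#">\n'
)

def export_owl(classes, generalizations):
    """classes: list of class names; generalizations: (sub, super) pairs.

    Each UML class becomes an owl:Class; each UML generalization
    becomes an rdfs:subClassOf reference on the subclass.
    """
    parts = [OWL_HEADER]
    supers = dict(generalizations)
    for name in classes:
        parts.append(f'  <owl:Class rdf:ID="{name}">\n')
        if name in supers:
            parts.append(
                f'    <rdfs:subClassOf rdf:resource="#{supers[name]}"/>\n')
        parts.append('  </owl:Class>\n')
    parts.append('</rdf:RDF>\n')
    return "".join(parts)

# Illustrative fragment of a study-process model: a Lecture is a Course.
doc = export_owl(["Course", "Lecture"], [("Lecture", "Course")])
```

The resulting text, written to a *.owl file, is what an editor such as Protégé would import for further ontology design.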

The designed solution has been implemented and tested by designing a study-process conceptual model in UML class diagram notation and by transforming this model into an ontology recorded in OWL. The UML model was made using Exigen Business Modeler, and the transformed ontology was imported into Protégé.


Multiagent Modelling of a Bacterial Cell 

Taivo  Lints 

Agent-based modelling is becoming more and more popular. However, most biology-related multiagent systems (MAS) use agents to represent individuals, and therefore the emergent behaviour appears only at the population level. I propose to use MAS for modelling a single bacterial cell. This proposal is not absolutely unique -- there have been a few similar ideas before -- but such an approach is definitely not widespread yet.

Using MAS for modelling a bacterial cell has several advantages compared to currently widespread approaches:

1) The model is much easier to compose and understand than current biological models, which usually rely on complex mathematical apparatus.

2) The model is more flexible: easier to modify and extend, and also easier to distribute over networked computers for achieving higher simulation speeds through parallelism.

3) As MAS are based on interactions and parallelism, it is easy to achieve emergent behaviour in agent based models. The emergent behaviour is usually the most interesting feature of a simulation as it is not explicitly specified in the model.

As a proof of concept I have developed a small multiagent model of the processes that initiate DNA replication and cell division (based on the Initiator Titration Model from biology, where a protein called DnaA plays the central role). The results from simulations show that this multiagent model gives qualitatively adequate results.
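The flavour of such a simulation can be sketched as follows (a toy model with made-up parameters, not the actual Initiator Titration Model or the proof-of-concept implementation): DnaA "agents" are produced over time, titration sites on the chromosome bind them, and replication initiates only once enough free DnaA accumulates at the origin:

```python
import random

random.seed(1)  # reproducible toy run

def simulate(steps=200, sites=50, threshold=20, production=1):
    """Toy DnaA titration dynamics with illustrative parameters."""
    free_dnaa, bound, initiations = 0, 0, 0
    for _ in range(steps):
        free_dnaa += production          # protein synthesis adds agents
        if bound < sites and free_dnaa > 0 and random.random() < 0.5:
            free_dnaa -= 1               # a titration site binds one agent
            bound += 1
        if free_dnaa >= threshold:       # origin saturated with free DnaA
            initiations += 1             # replication initiates
            sites *= 2                   # replication doubles the sites
            free_dnaa = 0                # origin binding consumes the pool
    return initiations

n = simulate()
```

The qualitative behaviour mirrors the narrative above: while empty titration sites remain, they soak up DnaA and delay initiation; after each initiation the doubled chromosome provides twice as many sites, pushing the next initiation further out.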

Improving Context Awareness of Thin Computing Devices with the Aid of External Devices

Jürgo-Sören Preden

Thin computing devices are the building blocks of any ubiquitous computing system. The concept of ubiquitous computing demands more of such devices than can be achieved using available methods and approaches: these devices are expected to collaborate in a way that enables them to achieve both common and individual goals. The fulfilment of these objectives depends on the context: the environment, the available resources, the peer nodes and the services provided by other computing devices. For a thin computing device with very limited resources, collecting the context information may be expensive in terms of resources, and in some cases collecting the required context information may be impossible.

We are looking at a scenario where the thin computing devices can use a context information service provided by a more powerful computing device. The provided service can be viewed as a GIS (geo-information system) application: it contains a digital map of the target area, all the thin computing devices in the target area contribute information (including the collected sensor data) to the map, and the thin devices can query the application to obtain map and context data.
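A minimal sketch of such a service (the class names and query interface are assumptions for illustration, not the actual KRATT-based application): motes contribute position-tagged readings to the shared map, and a thin device queries a region instead of collecting the context itself:

```python
from dataclasses import dataclass

@dataclass
class Report:
    """One observation contributed by a thin device."""
    node: str
    x: float
    y: float
    reading: float

class MapService:
    """Context service hosted on a more powerful computing device."""

    def __init__(self):
        self.reports = {}  # latest report per node

    def contribute(self, report: Report):
        """A mote pushes its latest observation to the digital map."""
        self.reports[report.node] = report

    def query_region(self, x0, y0, x1, y1):
        """A thin device asks which peers (and readings) lie in a
        rectangular region, offloading the context collection."""
        return [r for r in self.reports.values()
                if x0 <= r.x <= x1 and y0 <= r.y <= y1]

svc = MapService()
svc.contribute(Report("mote1", 1.0, 2.0, 21.5))
svc.contribute(Report("mote2", 8.0, 9.0, 19.0))
nearby = svc.query_region(0, 0, 5, 5)
```

In the laboratory setting the `contribute` calls would arrive over the mote MANET and the queries would return map and context data to the requesting node.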

To research the above (among other) topics, a laboratory for agents and ad hoc networks was created at the Department of Computer Control at Tallinn University of Technology. The laboratory consists of thin computing devices (Berkeley motes that form a multi-hop ad hoc network, MANET), a WLAN network for devices that require higher data throughput, mobile platform(s) with mote MANET and/or WLAN connectivity, and a map application developed with the aid of KRATT (a joint development of the Institute of Technology, University of Tartu, and the Department of Computer Control, Tallinn University of Technology).
The talk presents work in progress and listener feedback is most welcome. 

A Compositional Natural Semantics and Hoare Logic for Low-Level Languages

Ando Saabas

The advent of proof-carrying code has generated significant interest in reasoning about low-level languages. It is widely believed that low-level languages with jumps must be difficult to reason about because they are inherently non-modular. We argue that this is untrue. We take it seriously that, differently from statements of a high-level language, pieces of low-level code are multiple-entry and multiple-exit, and we define a piece of code to consist of either a single labelled instruction or a finite union of pieces of code. Thus we obtain a compositional natural semantics and a matching Hoare logic for a basic low-level language with jumps. In their simplicity and intuitiveness, these are comparable to the standard natural semantics and Hoare logic of the While language. The Hoare logic is sound and complete with respect to the semantics and allows for compilation of proofs from the Hoare logic of While.

Requirements for Integration of Model Components for Modelling Emergent Behaviour 

Raul Savimaa 

The paper concentrates on the possibilities of integrating the UML, Q-model and multi-agent components of a model that describes emergent, time-sensitive operational behaviour in multi-functional human organisations. The paper analyses the requirements for the corresponding model components and software tools in order to permit efficient automated updating of the different model components.

The results of the analysis would support the elaboration of an automated tool aimed at managing and updating model components in UML and the Q-model notation and in a multi-agent simulation environment. Two models are considered: an organisation model that describes the processes and actors of an organisation, and a change model that estimates organisation behaviour during process modifications.
Currently no suitable solution exists for how the components of organisation models and change models can be integrated and coherently updated. Therefore the paper gives some recommendations for solving this problem.

Key words: modelling of organisations, emergent behaviour, model design, UML, agent technology, multi-agent systems, the Q-model, modelling of modifications.


© 2005 Institute of Cybernetics at TUT, All Rights Reserved