
12th Estonian Winter School in Computer Science (EWSCS)

Palmse, Estonia, March 4-9, 2007

under the auspices of the European Educational Forum

STUDENT TALKS AND POSTERS 2007 (abstracts)


Talks

Framework for Fast Prototyping of Secure Computations

Dan Bogdanov
University of Tartu, Inst. of Computer Science, Estonia

There are various organisations that gather sensitive data from individuals. Due to legal restrictions on how the data can be processed, it is hard or even impossible to learn global properties or trends from the collected data. More importantly, if the administrative measures fail, the privacy of the participants cannot be guaranteed.

We are applying secure multi-party computation methods to make data aggregation privacy-preserving and secure. We have designed and implemented a programming framework which allows us to quickly prototype and evaluate privacy-preserving aggregation and data mining algorithms.
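
As a hedged illustration of the kind of primitive such a framework can build on (the abstract does not name a specific protocol), here is a minimal Python sketch of additive secret sharing, where a sum is aggregated without any single party seeing an individual input:

    import random

    P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

    def share(value, n_parties):
        # Split a value into n additive shares that sum to it mod P.
        shares = [random.randrange(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    def aggregate_sum(all_shares):
        # Party i sums the i-th share of every input; combining the
        # partial sums reveals only the total, not the inputs.
        partials = [sum(col) % P for col in zip(*all_shares)]
        return sum(partials) % P

    inputs = [42, 17, 99]                       # sensitive values
    all_shares = [share(v, 3) for v in inputs]  # one row per input
    assert aggregate_sum(all_shares) == sum(inputs)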

On Basic Process Algebra with Interrupt

Silvio Capobianco
Reykjavík University, Dept. of Computer Science, Iceland

Process algebra studies the behaviour of abstract processes by means of algebraic tools. Processes can evolve into other processes, or terminate, as a consequence of an action; in addition, they can be combined in several ways. The study is performed by representing processes as terms over suitable languages, and defining derivation rules and operations on terms. Analysis is usually done modulo some equivalence relation, modelling a condition under which an external observer cannot distinguish between two processes: examples are bisimilarity, meaning that "the same choices are possible at the same times", and complete trace equivalence, meaning that the sequences of actions leading to termination are the same.

In this talk we deal with the process algebra BPAint, obtained by adding to basic process algebra BPA (i.e., the algebra of words over actions and variables, with concatenation and nondeterministic choice) the interrupt operator, which allows one process to suspend the execution of another until its own termination, at which point the latter resumes. A theorem of Aceto, Fokkink, Ingólfsdóttir and Nain states that the equational theory of BPAint modulo bisimilarity has no finite axiomatization. We work with complete trace equivalence instead of the more demanding bisimilarity, and illustrate some cases in which such finite axiomatizations do exist.
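
For reference, the standard equational axioms of BPA (with + for nondeterministic choice and a dot for concatenation) are the following; the cited negative result says that no finite set of equations of this kind can axiomatize BPAint modulo bisimilarity:

    \begin{align*}
      x + y               &= y + x \\
      (x + y) + z         &= x + (y + z) \\
      x + x               &= x \\
      (x + y)\cdot z      &= x\cdot z + y\cdot z \\
      (x\cdot y)\cdot z   &= x\cdot(y\cdot z)
    \end{align*}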

Efficient Parallel SAT Solving in Grids

Antti Hyvärinen
Helsinki University of Technology, Lab. for Theoretical Computer Science, Finland

In addition to data storage and indexing systems, computational grids are used for solving computationally demanding tasks. Because of the inherent communication latencies and high failure probabilities of a loosely coupled, large computational grid, such an environment poses a great challenge for highly data-dependent algorithms. In this work we study solving the propositional satisfiability problem (SAT) by exploiting parallel computation. We develop an implementation of a novel parallelization scheme called scattering for NorduGrid, a production-level grid environment. The result is a dynamic and aggressive algorithm which takes into account the specific requirements posed by the environment. Scattering is analysed with respect to the long communication delays and the occasionally high failure rates of individual jobs. We study different approaches to coping with the communication delays and thus maximizing the effective parallelism in the grid.
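
A minimal Python sketch of the idea behind such search-space partitioning (the simplest cube-style variant; the actual scattering scheme and its grid implementation are more involved):

    from itertools import product

    def scatter(cnf, split_vars):
        # Partition a CNF formula (clauses as lists of DIMACS-style
        # integer literals) into disjoint subproblems by fixing the
        # given variables to every truth assignment.  The original
        # formula is satisfiable iff at least one subproblem is, and
        # the subproblems can be solved on independent grid nodes.
        jobs = []
        for signs in product([1, -1], repeat=len(split_vars)):
            units = [[s * v] for s, v in zip(signs, split_vars)]
            jobs.append(cnf + units)
        return jobs

    # (x1 or x2) and (not x1 or x3), split on x1 -> 2 disjoint jobs
    formula = [[1, 2], [-1, 3]]
    for job in scatter(formula, [1]):
        print(job)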

Generalised Edit Distance

Reina Käärik
University of Tartu, Inst. of Computer Science, Estonia

The simplest way to measure the similarity between two strings is to use edit distance, also known as the Levenshtein distance.

The conventional edit distance considers only a limited number of transformations, all of which have the same weight. In this work we propose an extended edit distance algorithm that works with a larger set of transformations that can have different weights. In addition, we extend the edit distance to work with multibyte encodings such as UTF-8. To achieve greater efficiency, our implementation of the extended edit distance algorithm uses tries. Finally, we consider a version of the algorithm that learns the weights of new transformations from the weights of existing ones.
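
As a rough Python sketch of the underlying dynamic programming (omitting the trie-based optimization mentioned above; the rule set and weights are purely illustrative):

    def gen_edit_distance(s, t, transforms):
        # Weighted edit distance where `transforms` maps (src, dst)
        # string pairs to weights; plain single-character insertions,
        # deletions and substitutions keep weight 1.
        INF = float("inf")
        d = [[INF] * (len(t) + 1) for _ in range(len(s) + 1)]
        d[0][0] = 0.0
        for i in range(len(s) + 1):
            for j in range(len(t) + 1):
                if d[i][j] == INF:
                    continue
                if i < len(s):                 # deletion
                    d[i + 1][j] = min(d[i + 1][j], d[i][j] + 1)
                if j < len(t):                 # insertion
                    d[i][j + 1] = min(d[i][j + 1], d[i][j] + 1)
                if i < len(s) and j < len(t):  # substitution / match
                    cost = 0 if s[i] == t[j] else 1
                    d[i + 1][j + 1] = min(d[i + 1][j + 1],
                                          d[i][j] + cost)
                for (a, b), w in transforms.items():
                    if s[i:i + len(a)] == a and t[j:j + len(b)] == b:
                        d[i + len(a)][j + len(b)] = min(
                            d[i + len(a)][j + len(b)], d[i][j] + w)
        return d[len(s)][len(t)]

    # Illustrative rule: "sch" rewrites to "s" at a low cost
    print(gen_edit_distance("schmidt", "smit", {("sch", "s"): 0.2}))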

Optimal Basic Block Reordering via Hammock Decomposition

Oleg Medvedev
St. Petersburg State University, Fac. of Math. and Mechanics, Russia

Many optimizing compilers use basic block reordering to reduce conditional branch misprediction penalties, to decrease the number of unconditional branches, and to improve instruction cache performance. The method consists of profiling the program on a typical workload and finding a covering of its control-flow graph by a set of vertex-disjoint paths which minimizes the expected jump execution time. The problem is known to be NP-hard and is usually solved heuristically. We present a precise branch-and-bound algorithm which exploits the hammock hierarchy within the control-flow graph to decompose the initial problem into a set of smaller subproblems. A hammock is a subgraph with a single entry node and at most one exit node; the hammock hierarchy can be found in quadratic time. An implementation of the presented algorithm was tested on several real-world C programs and demonstrated acceptable performance.
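
For contrast with the precise algorithm, here is a minimal Python sketch of the classical greedy heuristic for the same covering problem (Pettis-Hansen style; not the authors' branch-and-bound method):

    def cover_with_paths(edges):
        # Cover a profiled CFG with vertex-disjoint paths: process
        # edges by decreasing execution count and join a path tail
        # to a path head whenever this does not close a cycle.
        nodes = {v for s, d, _ in edges for v in (s, d)}
        nxt = {}
        head_of_tail = {v: v for v in nodes}  # path tail -> its head
        tail_of_head = {v: v for v in nodes}  # path head -> its tail
        for s, d, _ in sorted(edges, key=lambda e: -e[2]):
            if s in head_of_tail and d in tail_of_head \
                    and head_of_tail[s] != d:
                h, t = head_of_tail.pop(s), tail_of_head.pop(d)
                nxt[s] = d
                head_of_tail[t] = h
                tail_of_head[h] = t
        paths = []
        for h in tail_of_head:
            path, v = [h], h
            while v in nxt:
                v = nxt[v]
                path.append(v)
            paths.append(path)
        return paths

    # Diamond CFG with a hot A->B->D route and a cold A->C->D route
    print(cover_with_paths([("A", "B", 90), ("B", "D", 85),
                            ("A", "C", 10), ("C", "D", 10)]))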

Type Systems for Optimizing Stack-Based Code

Ando Saabas
Institute of Cybernetics at TUT, Estonia

We give a uniform type-systematic account of a number of optimizations, and of the underlying analyses, for a bytecode-like stack-based low-level language, including analysis algorithms and soundness proofs. Specifically, we treat dead stores, load-pop pairs, duplicating load instructions, and store-load pairs. The load-pop pair and store-load pair elimination optimizations are built on top of bidirectional analyses, facilitating the correct elimination of instruction pairs spanning basic block boundaries. As a result, no assumptions are needed about the input code: it need not be the compiled form of a high-level source program, the stack need not be empty at basic block boundaries, and the code need not even be checked for safety before the analysis. The analysis algorithms and soundness proofs are simple and uniform. (Joint work with Tarmo Uustalu.)
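
As a much weaker but self-contained illustration, a single-pass peephole version of the load-pop and store-load eliminations on a toy instruction list might look as follows (real bytecode requires the cross-block bidirectional analyses of the talk):

    def peephole(code):
        # Two classic stack-code rewrites on a toy instruction list:
        # a load immediately followed by a pop does nothing, and a
        # store immediately followed by a load of the same variable
        # can keep the value on the stack with a dup instead.
        out, i = [], 0
        while i < len(code):
            op = code[i]
            nxt = code[i + 1] if i + 1 < len(code) else None
            if op[0] == "load" and nxt == ("pop",):
                i += 2                       # load-pop pair: drop both
            elif op[0] == "store" and nxt == ("load", op[1]):
                out += [("dup",), ("store", op[1])]
                i += 2                       # store-load pair rewritten
            else:
                out.append(op)
                i += 1
        return out

    prog = [("load", "x"), ("pop",), ("store", "y"), ("load", "y")]
    print(peephole(prog))   # [('dup',), ('store', 'y')]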

Completely Generic As-Path-Sensitive-As-Necessary Multithreaded Analysis

Vesal Vojdani
University of Tartu, Inst. of Computer Science, Estonia

Goblint is a static analyzer for POSIX-threaded C programs, currently focused on detecting data races. In this talk, I will present the underlying methods used in our analyzer to achieve precise trace information in a completely generic way, independent of the specific analysis.

Ideally, we would like to develop a specification framework in which such properties can be expressed by the user. The key mechanism for achieving these goals is a generic method of transforming a simple analysis specification into a path-sensitive analysis. This will be the main focus of the talk.
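
As a hedged, toy-sized illustration of the kind of information a race detector needs (not Goblint's actual algorithm), the following Python sketch computes, per global variable, the locksets under which it is accessed, distinguishing program paths by the set of locks held:

    def locksets(cfg, entry):
        # Tiny flow- and path-sensitive lockset analysis.  `cfg` maps
        # a node to a list of (action, successor) edges, where action
        # is ("lock", m), ("unlock", m) or ("access", var).  Returns,
        # for each variable, the set of locksets under which it is
        # accessed; two disjoint locksets for the same variable hint
        # at a potential data race.
        accesses = {}
        worklist = [(entry, frozenset())]
        seen = set()
        while worklist:
            node, held = worklist.pop()
            if (node, held) in seen:
                continue
            seen.add((node, held))
            for action, succ in cfg.get(node, []):
                h = held
                if action[0] == "lock":
                    h = held | {action[1]}
                elif action[0] == "unlock":
                    h = held - {action[1]}
                elif action[0] == "access":
                    accesses.setdefault(action[1], set()).add(held)
                worklist.append((succ, h))
        return accesses

    cfg = {0: [(("lock", "m"), 1)], 1: [(("access", "g"), 2)],
           2: [(("unlock", "m"), 3)], 3: [(("access", "g"), 4)]}
    print(locksets(cfg, 0))  # g is accessed both with and without m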


Posters

Ontology-Based Development of Rule Model

Diana Bugaite
Vilnius Gediminas Technical University, Dept. of Information Systems, Lithuania

In information systems development research, the business rules approach has attracted considerable attention, and there is already strong motivation behind its application ([1], [2], [3]). Business rules are used in information systems to represent domain knowledge and to maintain rule systems efficiently in a volatile business environment. In information systems, business rules are implemented as information-processing rules. In practice, information-processing rules are implemented by executable rules, such as active DBMS SQL triggers. A number of methods and languages have been proposed for developing rule models (UML with OCL, the Ross method, CDM RuleFrame, the conceptual graphs approach ([1], [2]), etc.). But the common methods and languages are insufficient, or at least inconvenient, for a complete and systematic modelling of all types of rules, since they emphasize only particular aspects and types of rules while neglecting others. Therefore, none of the proposed languages or methods has been accepted as a technology standard yet. The majority of existing methods do not deal with the reuse of knowledge acquired in the analysis of an application domain, nor with the automatic implementation of rules.

Ontologies are used to represent real-world domain knowledge. An ontology defines the basic concepts and their relationships, which comprise the vocabulary of an application domain, together with the axioms constraining the relationships among concepts. Therefore, the knowledge represented by an ontology can be used for generating rules, which are an important and integral part of every conceptual data model. Moreover, an ontology expressed in a formal way can be transformed into a rule model automatically.

In our research we have suggested a method for transforming ontology axioms into a (semi-)formal rule model, which is a part of a conceptual model.

The analysis of related work on knowledge-based information system development using domain ontologies shows that the rules used in an application domain are part of the knowledge represented by the ontology. Rules are captured in the ontology by axioms and are defined using ontology terms.
Axioms have a clearly defined state part, which can be transformed into the action part of a rule expressed in ECA (event-condition-action) form, and sometimes a condition part, which can be transformed into the condition part of such a rule. The event part is not defined in axioms; therefore the ontology was extended with an event model, in which events are defined using ontology terms.
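
A hypothetical Python sketch of this mapping (all names are illustrative, not taken from the prototype):

    from dataclasses import dataclass

    @dataclass
    class Axiom:
        state: str           # state part of the axiom
        condition: str = ""  # optional condition part

    @dataclass
    class ECARule:
        event: str
        condition: str
        action: str

    def axiom_to_eca(axiom, event):
        # The axiom's state part becomes the rule's action part, its
        # condition part (if any) becomes the rule's condition, and
        # the event comes from the separate event model.
        return ECARule(event=event,
                       condition=axiom.condition,
                       action="restore: " + axiom.state)

    print(axiom_to_eca(
        Axiom(state="stock_level >= reorder_point",
              condition="item.is_active"),
        event="after update of stock_level"))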

Formal expressions of ontologies and of conceptual data models were analysed, and formal rules were proposed for the transformation of ontology axioms into a rule model. The obtained rules were applied to transform a particular ontology, represented in a suitable tool (Protégé-2000), into rules represented as active DBMS SQL triggers.

A software prototype was developed within the ongoing research project AGILA, and an experiment on the automatic transformation of ontology axioms into SQL triggers was carried out. The experiment shows that the suggested formal rules can be used to transform ontology axioms described in a formal way into SQL triggers.

The next step in our research should be the refinement of the suggested method and the developed prototype. A full case study employing the proposed concepts and ideas is under development.

1. Valatkaite, I., Vasilecas, O. On Business Rules Approach to the Information Systems Development. In: Linger, H., et al. (eds.): Proc. of ISD'2003, Australia. Kluwer Academic/Plenum Publishers, 2004, pp. 199-208.
2. Valatkaite, I., Vasilecas, O. On Business Rules Automation: the BR-Centric IS Development Framework. In: Eder, J., et al. (eds.): ADBIS 2005. LNCS 3631, Springer, 2005, pp. 350-365.
3. Vasilecas, O., Bugaite, D. Ontology-Based Information Systems Development: the Problem of Automation of Information Processing Rules. In: Neuhold, E., Yakhno, T. (eds.): Proc. of ADVIS'2006. LNCS 4243, Springer, 2006, pp. 187-196.

Agent Control in a World with Non-Markovian Properties

Jurij Chizhov
Riga Technical University, Faculty of CS and IT, Latvia

In agent control tasks, ambiguous states often occur in which the agent should act in different ways (for example, in McCallum's maze). This typically happens in partially observable Markov decision processes (POMDPs), or in worlds where the agent's sensors do not provide full information about the current state.

The most common way of solving this problem is to equip the original agent control algorithm with additional memory, which keeps previously visited states. The chain of states can then be interpreted as one complex state, which preserves the uniqueness of the state-action pair. The poster shows implementations of this idea for several classical algorithms.
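
A minimal sketch of the memory idea (hypothetical code, not the poster's implementation): tabular Q-learning in which the agent's state is the tuple of its last k observations, so that perceptually aliased states become distinguishable. The environment object `env` is an assumption of the sketch:

    import random
    from collections import defaultdict, deque

    def q_learn_with_memory(env, k=2, episodes=500,
                            alpha=0.1, gamma=0.95, eps=0.1):
        # Tabular Q-learning whose "state" is the tuple of the last
        # k observations.  `env` is assumed to expose reset() -> obs,
        # step(a) -> (obs, reward, done) and a list `actions`.
        Q = defaultdict(float)
        for _ in range(episodes):
            hist = deque([env.reset()], maxlen=k)
            done = False
            while not done:
                s = tuple(hist)
                if random.random() < eps:        # explore
                    a = random.choice(env.actions)
                else:                            # exploit
                    a = max(env.actions, key=lambda x: Q[(s, x)])
                obs, reward, done = env.step(a)
                hist.append(obs)
                s2 = tuple(hist)
                best = max(Q[(s2, x)] for x in env.actions)
                Q[(s, a)] += alpha * (reward + gamma * best - Q[(s, a)])
        return Q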

At the same time, the question arises of how the limited memory should be optimally prepared (reserved) for each kind of algorithm.

Context-Aware Application Assembly for Mobile Environments

Oleg Davidyuk
Oulu University, Dept. of Electrical and Information Engineering, Finland

The growing number of mobile devices with communication facilities and the recent development of new networking technologies have been accompanied by the use of component-based models in which applications are constructed from relatively small and independent components running across different networks.

Such applications are more flexible, which helps to predict their performance under various conditions. For example, the impact of congestion on a network link can be avoided if the communicating components are deployed on the same host, or if the components migrate to another part of the network. Moreover, such applications are easier to replace and adapt without affecting the entire system, which increases their manageability. The component-based approach is supported by a number of commercial frameworks, such as CORBA [1], WCTME [2], .NET [3] and EJB [4], all of which focus on a static model of constructing component-based applications at design time. However, a statically formed application cannot adapt to variations in resource availability and client demand, which is critical for achieving flexibility in distributed systems. This problem is solved if the constraints of the application and of the environment influence deployment decisions, and the allocation of application components is performed at run time.

The decisions on how and where to deploy application components are made by an allocation algorithm that takes into account multiple resource and application constraints and allows fast and low-cost adaptation of mobile applications. For example, the algorithm deploys heavily communicating components close to each other in order to save bandwidth and maximize the performance of the whole system.
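
A toy Python sketch of this objective (a simple greedy allocator under illustrative assumptions, far simpler than the actual algorithm):

    def allocate(components, hosts, traffic, dist):
        # Greedy sketch: `components` maps name -> load, `hosts` maps
        # name -> capacity, `traffic` is a list of (c1, c2, volume)
        # tuples and dist(h1, h2) is the network cost between hosts
        # (0 when h1 == h2).  Each component goes to the feasible
        # host minimizing traffic cost to already placed components,
        # so heavily communicating components gravitate together.
        placement, free = {}, dict(hosts)

        def cost(c, h):
            total = 0
            for a, b, vol in traffic:
                other = b if a == c else a if b == c else None
                if other in placement:
                    total += vol * dist(h, placement[other])
            return total

        order = sorted(components, key=lambda c: -sum(
            vol for a, b, vol in traffic if c in (a, b)))
        for c in order:
            feasible = [h for h in free if free[h] >= components[c]]
            h = min(feasible, key=lambda x: cost(c, x))
            placement[c] = h
            free[h] -= components[c]
        return placement

    print(allocate({"ui": 1, "db": 2, "log": 1},
                   {"h1": 3, "h2": 2},
                   [("ui", "db", 100), ("db", "log", 1)],
                   lambda a, b: 0 if a == b else 1))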

My research focuses on the development of this allocation algorithm and of middleware for adaptive mobile applications. The middleware supports composing applications from distributed components and dynamically adapts them to the changing characteristics of the environment and to the user's needs.

During the student session I would like to report on my experience of using the WCTME middleware for application assembly, and on performance results of the recently developed allocation algorithm that supports application assembly.

[1] The Object Management Group’s CORBA website. www.corba.org, 2006.
[2] The IBM WCTME product website. http://www-306.ibm.com/software/wireless/wctme/, 2006.
[3] The Microsoft .NET homepage. http://www.microsoft.com/net/default.mspx, 2006.
[4] Sun Microsystems. Enterprise JavaBeans Technology. http://java.sun.com/products/ejb/, 2006.

Results of Experimental Analysis of String Algorithms on Compressed Texts

Denis Kulagin
St. Petersburg State Univ. of Inform. Technologies, Mechanics and Optics, Dept. of Computer Technol., Russia

The following experimental results are based on the article by Yury Lifshits, "Solving Classical String Problems on Compressed Texts". The current work contains an implementation of the algorithm presented in that paper and a series of experiments on various kinds of data. During the implementation work I created a straight-line program generator, following W. Rytter, "Application of Lempel-Ziv Factorization to the Approximation of Grammar-Based Compression". The results show that the algorithm saves considerable memory when processing highly compressible data. Hence, it could be effectively applied to pattern matching over compressed biological data.
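
For context, a straight-line program is a grammar in which every nonterminal has exactly one production, of the form X -> YZ or X -> a. The following Python sketch shows how a few rules can describe an exponentially long text, which is what makes algorithms on compressed representations attractive:

    def slp_length(rules, symbol, memo=None):
        # Length of the text generated by `symbol` in a straight-line
        # program: `rules` maps a nonterminal to a pair (Y, Z); any
        # symbol without a rule is a single terminal character.
        # Memoization keeps the computation linear in grammar size
        # even when the generated text is exponentially long.
        if memo is None:
            memo = {}
        if symbol not in rules:
            return 1
        if symbol not in memo:
            y, z = rules[symbol]
            memo[symbol] = (slp_length(rules, y, memo)
                            + slp_length(rules, z, memo))
        return memo[symbol]

    # 31 rules describe the text a^(2^31) of over two billion chars
    rules = {"X0": ("a", "a")}
    rules.update({f"X{i}": (f"X{i-1}", f"X{i-1}")
                  for i in range(1, 31)})
    print(slp_length(rules, "X30"))  # 2147483648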

Adaptivity of Systems

Taivo Lints
Tallinn University of Technology, Dept. of Computer Control, Estonia

My research goal is to find out the properties and processes that make systems adaptive. Adaptation in general is a process through which a system restores, maintains or increases its fitness when its outer and inner environments change, or increases its fitness in an unchanging environment. It is likely that the properties and processes causing adaptivity in various systems are similar (when described at a suitable level of abstraction), or at least form a small number of different classes. Several notable generalizations have already been made: for example, the concept of feedback and its importance in system adjustment was proposed by cybernetics, and the ideas of evolution theory have been extended beyond their original biological context. However, as far as I know, there still exists no well-systematized scientific overview of adaptation in different kinds of systems and of the underlying processes of adaptation, in specific cases and in general. The creation of such a source would be very helpful for many different disciplines.

In the presentation I will give a general overview of the concept of adaptivity, present some examples of adaptation in various systems, and describe my research goals in more detail; finally, I would be extremely happy to get all kinds of comments and suggestions on the topic from the audience.

Related keywords: adaptivity, learning, flexibility, elasticity, plasticity, context-dependency/-dependence, context-awareness, self-organization, (self-)adjustability, (self-)reconfigurability, feedback, resilience, evolution, conformance, ...

Using Emulation for System Model Analysis

Uljana Reinsalu
Tallinn University of Technology, Dept. of Computer Engineering, Estonia

The increasing complexity of modern on-chip systems has raised many design-related issues. Simulation is widely used to analyze model properties during the system design process. This paper presents an approach to building an environment for simulation speedup in hardware. Emulation is one of the ways to speed up simulation. The maximum gain in simulation performance can be achieved by moving all the required modules into hardware. This is based on the assumption that a system specification, described in a hardware description language (VHDL, Verilog, SystemC), can be partitioned in such a way that styles corresponding to different abstraction levels can be modelled using different simulators/emulators. Three such styles can be distinguished in the first order: RTL, BL, and software. Synchronization of modules working in parallel is one of the key issues, because the efficiency of the whole system depends on it. The most interesting challenge is the communication between these sub-modules. These challenges are future work; this paper addresses only the approach to speeding up simulation. Such simulations are needed to analyze system properties in early design phases, e.g., for architectural exploration.

Classification of Sleep Stages Using Heart Rate Data

Audrius Varoneckas
Vytautas Magnus University, Faculty of Informatics, Kaunas, Lithuania

For the diagnosis of sleep-related disorders and diseases it is important to establish the sleep structure of a patient. Contemporary methods are based on polysomnography; however, this method is extremely time-consuming and expensive. The results of recent investigations show a dependence between certain characteristics of heart rate and sleep stages, and recording heart rate is comparatively cheap. Heart rate is defined by the moments of systolic peaks in the ECG and is represented as a sequence of RR intervals (the time intervals between peaks). Records of RR sequences recently published on the Internet (http://www.pri.kmu.lt/datbank/) have been used as experimental data.
Knowledge about sleep stages must be extracted from the RR time series. The most frequently used characterizations of stationary time series are the mean value, the standard deviation and the correlation function/spectral density. To analyze heart rate variability we have applied standard methods, e.g. Fourier transforms, as well as methods of non-linear dynamics. The parameters used in this research are: the RR mean value, the RR standard deviation, the relative weights of the very low, low and high frequency bands, the sum of the relative weights of the low and high frequencies, the ratio of the weights of the low and high frequencies, approximate entropy, and detrended fluctuation analysis.
Visualization of several time series by means of multidimensional scaling is implemented for a visual heuristic analysis of the correspondence between the estimated parameters and sleep stages. Well-known classification methods, including discriminant analysis, artificial neural networks (ANN) and support vector machines (SVM), have been used for the classification of sleep stages from heart rate data.
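
A small Python sketch of the time-domain part of such a feature extraction (hypothetical code; the spectral weights, approximate entropy and detrended fluctuation analysis are omitted):

    import math

    def rr_features(rr):
        # Time-domain features of an RR-interval epoch (in seconds):
        # the RR mean value and RR standard deviation listed above.
        n = len(rr)
        mean = sum(rr) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in rr) / n)
        return {"rr_mean": mean, "rr_sd": sd}

    def epoch_features(rr, size=30):
        # Split an RR sequence into fixed-size epochs; each feature
        # vector is then fed to a classifier (LDA, ANN, SVM, ...).
        return [rr_features(rr[i:i + size])
                for i in range(0, len(rr) - size + 1, size)]

    print(epoch_features([0.8, 0.82, 0.79, 0.85] * 20, size=30)[0])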

http://www.cs.ioc.ee/yik/schools/win2007/
