Student Talks 2008 (abstracts)
Bringing privacy-preserving data processing into the real world
Dan Bogdanov
University of Tartu, Inst. of Computer Science, Estonia
Several methods have been proposed for preserving the privacy of databases containing personal data, yet the ones actually used in practice are usually based on organizational measures. We decided to approach the problem from a theoretical standpoint to determine how well we could protect privacy using methods from modern computer science.
We applied the well-known combination of secret-sharing and secure multi-party computation to develop an efficient, yet provably secure privacy-preserving computation method. After designing and implementing the method we have moved our focus to making it suitable for use in real-world systems. Our goal is to make the development of privacy-preserving information systems as easy as possible without sacrificing the performance and features of the underlying platform. To achieve this we have to choose a suitable model of computation and combine it with our theoretical solution.
In my talk I will introduce Sharemind, our framework for creating applications which preserve the privacy of their input data. I will share some of the decisions we made along with the alternatives we considered. I will also discuss some intriguing optimization possibilities which we found to be natural for our solution.
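As a minimal illustration of the secret-sharing building block mentioned above (this is not Sharemind's actual protocol suite, which also provides multi-party protocols for multiplication and other operations), the following Python sketch shows how a value can be additively shared among three parties and how addition can then be carried out locally on the shares:

    # Additive secret sharing over Z_{2^32}: an illustrative sketch only.
    import secrets

    MOD = 2 ** 32

    def share(value, parties=3):
        """Split `value` into `parties` additive shares modulo MOD."""
        shares = [secrets.randbelow(MOD) for _ in range(parties - 1)]
        last = (value - sum(shares)) % MOD
        return shares + [last]

    def reconstruct(shares):
        """Recover the secret by summing all shares modulo MOD."""
        return sum(shares) % MOD

    # Each party adds the shares it holds; reconstructing the resulting
    # shares yields the sum of the secrets without revealing the inputs.
    a_shares = share(1200)
    b_shares = share(34)
    sum_shares = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]
    assert reconstruct(sum_shares) == 1234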
Establishing parallel processes in a membrane system
Andrey Breslav
St. Petersburg State University of IT, Mechanics and Optics, Russia
A membrane system (also called a P system in honor of G. Paun, who proposed this approach in 2000) is a system of nested compartments (called membranes), each of which contains a multiset of objects from some alphabet and is associated with a set of rules describing its evolution. Such models are used to describe a large variety of systems: from living cells and tissues to languages and reactive systems. We will discuss an approach to establishing parallel processes of computation inside a P system. This includes synthesizing rules to run different computations on the same system simultaneously, extracting the results of each computation from the overall result of the system's work, and synchronizing parallel processes using synchronization techniques widely used in parallel programming.
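The following toy Python sketch, offered only as an illustration and not as part of the talk's construction, shows one maximally parallel evolution step of a single membrane holding a multiset of objects; nested membranes, communication and membrane dissolution are omitted:

    # One evolution step of a single membrane with rules multiset -> multiset.
    from collections import Counter

    def applicable(lhs, contents):
        return all(contents[obj] >= n for obj, n in lhs.items())

    def step(contents, rules):
        """Apply rules maximally in parallel (greedy approximation);
        objects produced in this step are not consumable until the next step."""
        produced = Counter()
        changed = True
        while changed:
            changed = False
            for lhs, rhs in rules:
                if applicable(lhs, contents):
                    contents -= Counter(lhs)
                    produced += Counter(rhs)
                    changed = True
        return contents + produced

    # Example: rules a -> bb and bb -> c acting on the multiset {a:2, b:1}.
    rules = [({'a': 1}, {'b': 2}), ({'b': 2}, {'c': 1})]
    print(step(Counter({'a': 2, 'b': 1}), rules))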
Light conceptual ETL process modeling for statistical observation templates evolution
Alexander Dolnik
St. Petersburg State University, Fac. of Math. and Mechanics, Russia
ETL process evolution is investigated. A model-driven approach to the problem of template and ETL process evolution is developed. We argue that the ETL process evolution problem is mainly a problem of too low an abstraction level, so defining the ETL process in terms of a conceptual model is a principal step towards effective ETL evolution. Our approach appears to be scalable, robust and simpler to use than existing ETL evolution frameworks and tools.
Modular interpretive decompilation of low-level code to Prolog
Miguel Gomez-Zamalloa Gil
Universidad Complutense de Madrid, Departamento de Sistemas Informáticos y Computación, Spain
Decompiling low-level code to high-level intermediate representations (e.g., Prolog) is necessary to develop analyzers, model checkers, etc. which reason about properties of low-level languages (e.g., bytecode, .NET). Interpretive (de)compilation is carried out by partially evaluating an interpreter for the low-level language (written in the high-level language) w.r.t. the code to be decompiled. There have been several proofs of concept that such an interpretive approach is feasible, but important issues remain open when it comes to decompiling a real language: does the approach scale to large programs? Are decompiled programs comparable to those obtained by ad-hoc decompilers? This work addresses these issues by presenting, to the best of our knowledge, the first modular scheme to enable interpretive decompilation of low-level code to a high-level representation. A distinguishing feature of our scheme is its flexibility to control the structure of the decompiled program in a completely automatic way. For this, we introduce two notions of optimality. The first requires that each method be decompiled just once. The second requires that each program point be traversed just once during decompilation. We report on a prototype implementation and demonstrate the impact of our modular approach and the optimality issues on a series of realistic (low-level) Java bytecode benchmarks.
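As a hedged illustration of the interpretive (de)compilation idea itself (the first Futamura projection), and not of the modular scheme presented in the talk, the following Python sketch shows a tiny stack-machine interpreter and the residual program that partial evaluation w.r.t. a fixed instruction sequence would leave behind:

    # Specializing an interpreter w.r.t. a fixed program removes the
    # dispatch loop and leaves only the residual computation.
    def interp(code, arg):
        """A tiny interpreter: PUSH n, LOAD, ADD, MUL."""
        stack = []
        for instr in code:
            if instr[0] == 'PUSH':
                stack.append(instr[1])
            elif instr[0] == 'LOAD':
                stack.append(arg)
            elif instr[0] == 'ADD':
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif instr[0] == 'MUL':
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # The program to be "decompiled": f(x) = 3 * x + 1.
    prog = [('PUSH', 3), ('LOAD',), ('MUL',), ('PUSH', 1), ('ADD',)]

    # The residual program a partial evaluator would produce:
    def residual(arg):
        return 3 * arg + 1

    assert interp(prog, 5) == residual(5)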
A computational model for World Wide Argument Web
Adrian Groza
Technical University of Cluj-Napoca, Dept. of Computer Science, Romania
We are in the age when we can envisage an infrastructure (the World Wide Argument Web) which is native to the Internet and which enhances software agents with the ability to debate, raise arguments, or analyse ideas in order to disseminate information effectively to increasingly knowledge-driven, but lost, human agents.
A fundamental difference between human and agent societies is that humans manifest a great amount of subjectivity in their decision to accept or reject an argument. Supporting such heterogeneity, by weighing arguments with respect to both their intrinsic and extrinsic aspects, is certainly not easy, but probably fundamental for deploying argumentation systems in realistic applications. In this respect, there is a gap between the existing formal (logic-based) argumentation systems and their practical application. The informal (argumentation-scheme-based) approaches focus on facilitating interaction with the user, but lack computational models. The current trend is to develop hybrid approaches that combine the advantages of the formal and informal ideas. The newly proposed extended Argument Interchange Format ontology offers both opportunities.
Quite aware of the difficulties that lie ahead of such a task, in this research we embark on the road of developing a reasonably flexible tool that allows expressive representation of, and reasoning about, the features required by a human-comprehensible negotiating agent. For the logic-based inference the tool uses a variant of defeasible reasoning, whilst for the computational model of argumentation schemes a planning-based formalism is proposed.
For the logic-based inference, the Extended Temporal Defeasible Logic targets knowledge representation and argumentative reasoning in open environments, combining defeasible logic with temporal reasoning and argumentation with levels of certainty. The implemented system TeDeLo (available at http://cs-gw.utcluj.ro/~adrian/odr.html) can reason about concepts like dynamic rules and priorities, hidden rules, or rollback activation.
For the argumentation-scheme-based part, finding the argumentation line supporting a claim is treated as an AI planning problem and is meant to be agent-driven. By using argumentation schemes we intend to maintain a high level of abstraction in order to easily accommodate human intervention. The schemes are formalized in PDDL within a framework compatible with the emerging semantic web.
The framework is convenient for automated mediation models that deal with successive encounters, which are required for agents to succeed in real application domains such as Online Dispute Resolution.
Situation awareness of computing agents
Jürgo Preden
Tallinn University of Technology, Dept. of Computer Control, Estonia
Situation awareness (SA) as a discipline emerged in the 1980s. At the time it was applied to studying human situation awareness. The need for SA arose from the need to organize the flow of the great amounts of data that modern systems are able to provide to their operators. We have come to a point where we face the same problems with modern computer systems (including software-enabled devices and embedded systems): these systems must also process great amounts of information. The concept of SA for computing systems is introduced in order to organize this information, express the state of a system and improve the decision-making process of the device. SA in computing systems is formed based on past and present interactions, both with other computing systems and with the physical world, and on the properties of the computing system itself. The talk describes the concept of SA in computing systems, ways of acquiring SA and ways of expressing SA. Two open questions, which I am not able to answer, arise: how much SA is enough, and how much SA is required for a given computing system?
Preserving sharing in the partial evaluation of lazy functional programs
Salvador Tamarit Muñoz
Universitat Politècnica de València, Departament de Sistemes Informàtics i Computació, Spain
The goal of partial evaluation is the specialization of programs w.r.t. part of their input data. Although this technique is well known in the context of functional languages, current approaches are either overly restrictive or destroy sharing during the specialization process, which is unacceptable from a performance point of view.
In this work, we present the basis of a new partial evaluation scheme for first-order lazy functional programs that preserves sharing through the specialization process and still allows the unfolding of arbitrary function calls.
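To illustrate why sharing matters (this is only an assumption-level sketch, not the paper's formalism), the following Python fragment models a lazy binding with a memoizing thunk and shows how naive unfolding of the binding duplicates the computation:

    # In a lazy language, `let x = f(n) in x + x` evaluates f(n) once;
    # naively inlining the binding during specialization evaluates it twice.
    calls = 0

    def f(n):
        global calls
        calls += 1
        return n * n

    def thunk(fun, arg):
        cache = []
        def force():
            if not cache:
                cache.append(fun(arg))
            return cache[0]
        return force

    # Sharing preserved: the thunk is forced twice but evaluated once.
    x = thunk(f, 10)
    assert x() + x() == 200 and calls == 1

    # Sharing destroyed by naive unfolding: f is evaluated twice.
    calls = 0
    assert f(10) + f(10) == 200 and calls == 2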
Automatic code generation for safety critical embedded systems: developments in the Gene-Auto project
Andres Toom
Institute of Cybernetics at TUT, Estonia
Model Driven Engineering (MDE) and Automatic Code Generation (ACG) have been very successful in the systems engineering domain, and a variety of different formalisms exist. However, such tools have often been designed in favour of expressiveness rather than safety. One of the most popular toolsets in industry is Matlab/Simulink/Stateflow, which offers a lot of modelling power but lacks clean semantics and robustness. Examples of formalisms that were specifically developed for the safety-critical domain are the languages belonging to the synchronous language family, such as Esterel, Lustre and Signal.
The purpose of the Gene-Auto project is, on the one hand, to define a modelling language which covers a safe subset of Simulink/Stateflow and Scicos and fits the needs of its industrial partners, and, on the other hand, to implement an ACG from this modelling language to a subset of the C language. The ACG is developed and will be qualified according to the strict requirements of safety-critical industries. Gene-Auto develops and aims to qualify some parts of the toolset using novel formal technologies.
This talk gives an overview of the main developments in the Gene-Auto project, concentrating on the Stateflow part (a StateCharts-like visual language). Semantic complexities of the Stateflow language and two alternative code generation schemes will be presented.
An abstract domain for precise inter-procedural analysis of addresses
Vesal Vojdani
University of Tartu, Inst. of Computer Science, Estonia
We present a new domain for analyzing must-equalities between addresses. The domain is a smooth combination of Herbrand and affine equalities, which enables us to precisely describe field accesses and array indexing. While the full combination of uninterpreted functions with affine equalities results in intractable assertion checking algorithms, our modest approach allows us to construct a precise inter-procedural analysis of address must-equalities that runs in polynomial time. We indicate how this analysis can be applied to verify mutual exclusion of data accesses in concurrent programs with per-element locking schemes.
Posters
Search system with difficulty ranking
Dmitry Kuranov
St. Petersburg State University, Fac. of Applied Math. and Process Control, Russia
Very often, different people entering the same search phrase expect the system to return different documents. Consider, for example, a university library: when a student comes to the terminal and enters a search phrase, he would like to get some easy documents, whereas a professor would like to retrieve more sophisticated ones. To solve this problem I consider several approaches.
Let us assign each text some numerical properties, such as scientific vs. not scientific, popular vs. not popular, interesting vs. not interesting, and so on. To obtain these characteristics we can invite an expert or use a ranking system. We denote the scientific property by S(d), popularity by P(d) and interestingness by I(d), where d is a document from the collection. Then, to evaluate the difficulty of a document, we can use a function F(d) = F(S(d), P(d), I(d), ...), for example a linear one: F(d) = a*S(d) + b*P(d) + c*I(d).
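A minimal sketch of the linear difficulty function above; the weights and property scores here are made up purely for illustration:

    # Linear difficulty from expert-rated properties (hypothetical values).
    def difficulty(S, P, I, a=0.5, b=0.3, c=0.2):
        return a * S + b * P + c * I

    print(difficulty(S=0.9, P=0.2, I=0.4))  # a scientific, not very popular text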
The other approach tells us that there is no fundamental difference between scientific literature and fiction. A human assimilates a text in two steps:
- Orientation stage
- Assimilation stage
Two types of relations appear while reading a text:
- Relations between the document and reality, which consider the text as an image of reality
- Relations between the document and the reader
The meaning of the text manifests itself as a transformation of the recipient's knowledge base.
Let us denote by U the universe which includes all meanings, whether or not they have been revealed by mankind. Individuals have possibly intersecting knowledge bases ui (where ui is a subset of U). The process of perception can then be represented by a function F: U -> U determined by the text. So the structure of a text appears to a recipient through the correlation of the text with the recipient's knowledge base.
I suggest a new methodology of classifying documents, where document difficulty is calculated from the difficulty of its words. In this setting we cannot use experts to assign difficulty values to words, so we need an automatic tool that calculates them. Word difficulty is evaluated at the clustering stage and depends on the frequency with which the word appears in a collection of documents: the more frequently a word appears in the collection, the lower its difficulty typically is.
One way to evaluate the difficulty of a document is to take it equal to the maximum logarithm of word difficulty, which gives an upper (supremum) estimate of the document difficulty; taking the minimum logarithm gives a lower (infimum) estimate. The best results, however, are achieved by using the average logarithm of word difficulty. We thus obtain a search system which can sort its output according to document difficulty, helping the user retrieve documents matching his knowledge level.
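The following Python sketch, with a toy corpus and a simple inverse-frequency difficulty measure (both illustrative assumptions, not the poster's exact choices), shows the three estimates described above:

    # Frequency-based word difficulty and max/min/average document estimates.
    import math
    from collections import Counter

    corpus = [
        "the cat sat on the mat",
        "the spectral theorem for compact operators",
        "the cat chased the mouse",
    ]

    freq = Counter(w for doc in corpus for w in doc.split())
    total = sum(freq.values())

    def word_difficulty(w):
        # Rarer words are assumed to be more difficult.
        return total / freq[w]

    def doc_difficulty(doc, mode="avg"):
        logs = [math.log(word_difficulty(w)) for w in doc.split()]
        if mode == "max":   # supremum estimate
            return max(logs)
        if mode == "min":   # infimum estimate
            return min(logs)
        return sum(logs) / len(logs)  # average, reported to work best

    for doc in corpus:
        print(round(doc_difficulty(doc), 2), doc)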
Defining adaptivity
Taivo Lints
Tallinn University of Technology, Dept. of Computer Control, Estonia
The ability of a system to adapt is a feature of interest in a wide range of disciplines (biology, anthropology, business studies, engineering, to name just a few). However, there does not seem to be any clear definition of what exactly it means for a system to "adapt". Or, in the few cases where such a definition exists, it is usually specific to the given field of study and not very suitable for wider generalization. This poster takes a closer look at how adaptivity should be defined in a general setting (or whether this should be done at all), and what definition(s) would be most fruitful in the process of trying to make software-intensive systems more adaptive.
Blackboard architecture application in product life cycle task
Darja Rudenko
Riga Technical University, Faculty of CS and IT, Latvia
Every product has a life cycle. The product life cycle concept suggests that a product passes through four stages of evolution: introduction, growth, maturity and decline. As the product evolves and passes through these four stages, profit is affected, and different strategies have to be employed to ensure that the product is a success within its market; there is therefore a need to recognize in time which of the stages the product is in at the moment.
The blackboard architecture can be successfully used for the recognition of the stages in product life cycle. The blackboard architecture is not a new technology and has a lot of benefits; therefore it can be widely used.
This poster presents an original approach to applying the blackboard architecture to the product life cycle task.
The blackboard methodology is a strategy for solving complex system tasks using different knowledge sources that communicate by means of a common information field.
The blackboard architecture offers a centralized system, and it is known that centralization allows the system to be robust without complicating its parts: simple parts together make up a robust system.
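A minimal Python sketch of the blackboard pattern, with hypothetical life-cycle rules standing in for real knowledge sources (the rules and thresholds are illustrative assumptions, not the poster's model):

    # Independent knowledge sources read the shared blackboard and post
    # their conclusions back to it.
    blackboard = {"sales_growth": -0.05, "months_on_market": 40}

    def growth_expert(bb):
        if "stage" not in bb and bb.get("sales_growth", 0) > 0.2:
            bb["stage"] = "growth"

    def decline_expert(bb):
        if "stage" not in bb and bb.get("sales_growth", 0) < 0 \
                and bb.get("months_on_market", 0) > 24:
            bb["stage"] = "decline"

    knowledge_sources = [growth_expert, decline_expert]

    # A simple control loop: let each knowledge source inspect the
    # common information field once.
    for ks in knowledge_sources:
        ks(blackboard)

    print(blackboard.get("stage"))  # -> "decline"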
Ambient light illumination model for landscapes
Alexander Smal
St. Petersburg State University of IT, Mechanics and Optics, Russia
Ambient light is an illumination effect caused by the diffusion of light rays in the atmosphere. This effect can be simulated using different approaches, but most of them suffer from a performance slowdown when executed on large scenes (about one million triangles). We discuss existing methods for ambient light computation and select the fastest. As a result, we present a fast heuristic ambient light algorithm for landscapes.
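A generic sketch of a heightfield sky-visibility heuristic of the kind commonly used for landscape ambient lighting; it is offered only as an illustration and is not the algorithm presented in the poster:

    # Approximate sky visibility at a heightfield point from horizon angles.
    import math

    def ambient(heights, x, y, directions=8, max_dist=16):
        """Return 1.0 for a fully lit point, values near 0.0 for occluded ones."""
        h0 = heights[y][x]
        occlusion = 0.0
        for d in range(directions):
            ang = 2 * math.pi * d / directions
            dx, dy = math.cos(ang), math.sin(ang)
            horizon = 0.0
            for step in range(1, max_dist + 1):
                sx, sy = int(x + dx * step), int(y + dy * step)
                if 0 <= sx < len(heights[0]) and 0 <= sy < len(heights):
                    horizon = max(horizon, (heights[sy][sx] - h0) / step)
            occlusion += math.atan(horizon) / (math.pi / 2)
        return 1.0 - occlusion / directions

    # Example on a tiny synthetic terrain: the pit at the centre is darker
    # than the peak at the corner.
    terrain = [[abs(x - 8) + abs(y - 8) for x in range(17)] for y in range(17)]
    print(ambient(terrain, 8, 8), ambient(terrain, 0, 0))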