14th Estonian Winter School in Computer Science (EWSCS)
XIV Eesti Arvutiteaduse Talvekool

Palmse, Estonia, March 1-6, 2009

Student Talks 2009 (abstracts)

A high-level language for developing privacy-aware applications

Dan Bogdanov
Cybernetica AS, Estonia

A major obstacle in the deployment of privacy-preserving information systems is the lack of practically proven methods. There are a number of proposed techniques, most of which are too specific to be used in applications other than the designated one. Another problem is the technical complexity of such systems.

We present a programming language and execution environment capable of executing a wide range of algorithms, supporting applications from auctions, voting and simple statistical analysis to complex data mining tasks. For all applications running in this environment we provide a set of privacy guarantees sufficient for general purposes. If these guarantees are acceptable for an application, its developers can concentrate on implementing functionality and take privacy for granted.

Secure multi-party protocols working on $Z_{2^{32}}$

Aleksei Gorny
University of Tartu, Inst. of Computer Science, Estonia

Traditionally, secure multi-party computation protocols have been designed to operate on shared data from finite fields. However, when considering the efficiency of software implementations, there exists a motivation to use the ring $Z_{2^{32}}$ and an additive sharing scheme instead.

Sven Laur and Dan Bogdanov have developed a number of efficient three-party protocols that work in the mentioned setting and are information-theoretically secure against a semi-honest threshold adversary who can corrupt a single party.
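In this setting a secret is split into uniformly random shares that sum to it modulo $2^{32}$, so addition of shared values needs no communication at all. The following Python sketch (our own illustration of additive sharing, not the actual protocol implementation) shows the idea:

```python
import random

MOD = 2**32  # we work in the ring Z_{2^32}

def share(secret, n=3, rng=random):
    # additive sharing: n-1 uniformly random shares, the last one fixes the sum
    shares = [rng.randrange(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    # a party holding all shares recovers the secret
    return sum(shares) % MOD

def add_shares(a, b):
    # addition of two shared values is purely local: party i adds its shares
    return [(x + y) % MOD for x, y in zip(a, b)]
```

Multiplication of shared values, by contrast, requires interaction between the parties, which is where the actual protocol work lies.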

In this talk, I present novel approaches for extending these protocols to achieve stronger security guarantees. I construct a family of protocols secure against $n$-out-of-$(2n+1)$ corrupted participants and discuss ideas on how to guard against malicious behaviour.

This is partly joint work with Jan Willemson.

Average-case complexity of randomized computations with bounded error

Dmitry Itsykson
St. Petersburg Department of the Steklov Mathematical Institute, Russia

We study the class AvgBPP, which consists of distributional problems that can be solved in average polynomial time (in the sense of Levin's average-case complexity) by randomized algorithms with bounded error. We prove that there exists a distributional problem that is complete for AvgBPP under polynomial-time samplable distributions. Since we use deterministic reductions, the existence of a deterministic algorithm with average polynomial running time for our problem would imply AvgP = AvgBPP. Note that, while it is easy to construct a promise problem that is complete for promise-BPP, it is unknown whether BPP contains complete languages. We also prove a time hierarchy theorem for AvgBPP (a time hierarchy theorem is also unknown for BPP). We compare average-case classes with their classical (worst-case) counterparts and show that the inclusions are proper.
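For reference, one standard formulation of Levin's notion (the exact variant used in the talk may differ): an algorithm $A$ runs in average polynomial time with respect to an ensemble of distributions $\{D_n\}$ if

```latex
\exists \varepsilon > 0 : \quad
\mathbb{E}_{x \sim D_n}\!\left[ t_A(x)^{\varepsilon} \right] = O(n),
```

where $t_A(x)$ denotes the running time of $A$ on input $x$. The exponent $\varepsilon$ absorbs rare inputs on which the algorithm runs superpolynomially long.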

The work is available as technical report TR08-073 of the Electronic Colloquium on Computational Complexity.

Parallel algorithms for simulation of heterogeneous complex networks

Ilya Kolykhmatov
St. Petersburg State University of IT, Mechanics and Optics, Russia

Load balancing is a popular approach to task assignment in distributed computer systems such as grid computing clusters, web servers and routers. In the traditional PS (processor sharing) model, the degree of inefficiency grows linearly with the number of servers. If the backend scheduler is changed to SRPT (shortest remaining processing time), the degree of inefficiency depends more on the heterogeneity of the servers and less on the system parallelism. In this talk, I'll show that the degree of inefficiency in load balancing designs is highly dependent on the scheduling discipline used at each of the backend servers.
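The PS-vs-SRPT contrast can already be seen for a single server serving a batch of jobs that arrive together. The following Python sketch (a toy model of the two disciplines, not the simulation framework from the talk) computes the mean flow time under each:

```python
def srpt_batch(sizes):
    # with all jobs arriving at t=0, SRPT reduces to shortest-job-first
    t, flows = 0.0, []
    for s in sorted(sizes):
        t += s
        flows.append(t)            # completion time = flow time here
    return sum(flows) / len(flows)

def ps_batch(sizes):
    # processor sharing: all active jobs share the server rate equally
    sizes = sorted(sizes)
    n, t, prev, flows = len(sizes), 0.0, 0.0, []
    for i, s in enumerate(sizes):
        t += (s - prev) * (n - i)  # n-i jobs each still need (s - prev) work
        flows.append(t)
        prev = s
    return sum(flows) / len(flows)
```

Both disciplines are work-conserving, so the last job finishes at the same time; SRPT merely reorders completions to favour short jobs, which lowers the mean.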

A Hardware-Software Codesign Toolkit

Oleg Medvedev
St. Petersburg State University, Fac. of Math. and Mechanics, Russia

Hardware-software codesign is a well-known approach to the efficient implementation of embedded systems: we not only optimize software for fixed hardware, but also adjust the hardware to the software. This task can be accomplished manually by accelerating some functions with application-specific integrated circuits.
A smarter way is to design and implement an application-specific processor together with its software development toolkit, including an assembler and a compiler. Two toolkits automate the latter process --- CoWare Processor Designer and Target Chess/Checkers.
Both allow a processor to be described in an instruction-set-centric hardware description language (LISA and nML, respectively) and generate a VHDL implementation and a compiler backend for the processor.

We present another language for this purpose.
It is a true hardware description language that is more high-level than VHDL. It is based on reliable message passing and has several levels of abstraction that allow, for example, pipelining and control flow to be expressed in an elegant way.
The highest level allows a hardware device to be described together with an assembler syntax and a binary coding format for the device (if any).
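The message-passing pipeline abstraction can be mimicked in software: stages are concurrent processes connected by reliable channels. A minimal Python sketch (our own illustration of the idea, not the language itself), with queues standing in for the channels:

```python
from queue import Queue
from threading import Thread

def stage(fn, inq, outq):
    # one pipeline stage: receive a message, transform it, pass it on
    while True:
        item = inq.get()
        if item is None:           # end-of-stream token
            outq.put(None)
            break
        outq.put(fn(item))

def pipeline(fns, items):
    # chain stages with queues playing the role of reliable channels
    queues = [Queue() for _ in range(len(fns) + 1)]
    threads = [Thread(target=stage, args=(f, queues[i], queues[i + 1]))
               for i, f in enumerate(fns)]
    for t in threads:
        t.start()
    for x in items:
        queues[0].put(x)
    queues[0].put(None)
    out = []
    while (y := queues[-1].get()) is not None:
        out.append(y)
    for t in threads:
        t.join()
    return out
```

In a hardware description, each such stage would plausibly map to a pipeline stage and each channel to handshaking logic between stages.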

We give excerpts from a (synthesizable) RISC processor in our language. We also present a high-level hardware fast Fourier transform implementation and the results of comparing it with a handcoded VHDL implementation and with the reference Xilinx LogiCore FFT implementation.
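For reference, the transform such an FFT core computes is the standard radix-2 Cooley-Tukey decomposition; a compact Python software model (unrelated to the hardware implementation itself):

```python
import cmath

def fft(x):
    # recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

A hardware implementation pipelines exactly these butterfly operations, which is what makes the FFT a natural benchmark for a pipelining-oriented HDL.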

The latter approach allows a processor and its toolkit to be generated from a single description, but the description must still be created manually, and the software that the automatically generated compiler compiles to efficient code must be created separately. There is thus a temptation to eliminate this additional effort and consider an approach in which one implements a program that solves a problem, and a processor together with object code for it is produced automatically, implementing the program efficiently in some sense (throughput, latency, size of the processor plus size of the code, energy consumption, etc.).

Thus the next goal of our project is to extend our language with enough expressive power to describe complex algorithms that can be efficiently partitioned into reconfigurable hardware and binary code for it.
Note that this symmetric view does not aim to automate compiler retargeting for arbitrary languages in any way, but only to create hardware, and low-level code for it, from a description of an algorithm.

The talk is based on joint work with Dmitri Boulytchev, Saint-Petersburg State University.

An empirical study of the structure of the shortest path tree

Irmantas Radavičius
Vilnius University, Faculty of Mathematics and Informatics, Lithuania

We consider a complete graph on $n$ vertices. The edges of the graph are assigned random positive weights $X_1, X_2, \ldots, X_m$. Here $m = \binom{n}{2}$. We assume that these random variables are independent and have a common probability distribution with density function $f(x)$, $x \geq 0$. Given a vertex $v$, let $T$ denote the shortest path tree with root $v$. Let $T_1, T_2, \ldots$ denote the trees that are obtained from $T$ after removal of the root $v$. Let $N_1 \geq N_2 \geq N_3 \geq \ldots$ denote the (ordered) sequence of sizes of these trees. We study statistical properties of this sequence for various densities $f$ and large $n$.
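Such experiments are straightforward to reproduce. A small Python sketch (our own illustration, with exponential weights as one concrete choice of density $f$) builds the shortest path tree with Dijkstra's algorithm and extracts the ordered subtree sizes $N_1 \geq N_2 \geq \ldots$:

```python
import heapq
import random
from collections import Counter

def shortest_path_tree(n, seed=0):
    # complete graph on n vertices, i.i.d. Exp(1) edge weights, root 0
    rng = random.Random(seed)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.expovariate(1.0)
    dist = [float('inf')] * n
    dist[0] = 0.0
    parent = [None] * n
    done = [False] * n
    pq = [(0.0, 0)]
    while pq:                      # Dijkstra's algorithm
        d, u = heapq.heappop(pq)
        if done[u]:
            continue
        done[u] = True
        for v in range(n):
            if v != u and d + w[u][v] < dist[v]:
                dist[v] = d + w[u][v]
                parent[v] = u
                heapq.heappush(pq, (dist[v], v))
    return parent

def subtree_sizes(parent):
    # sizes of the trees hanging off the root, in decreasing order
    c = Counter()
    for v in range(1, len(parent)):
        u = v
        while parent[u] != 0:      # climb to the child of the root
            u = parent[u]
        c[u] += 1
    return sorted(c.values(), reverse=True)
```

Averaging these size vectors over many random seeds gives the empirical statistics the talk studies.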

Analysing concurrent programs with interrupts

Martin Schwarz
Technische Universität München, Fakultät für Informatik, Germany

I have recently begun work on the analysis of concurrent programs with irregular control flow. The application of interest is embedded systems in the automobile industry. In this setting, we have a static number of threads and a very disciplined programming style, which makes this domain well-suited for static analysis. However, the challenge is to deal precisely with complicated scheduling and synchronization primitives, including interrupts, prioritized work-queues, and waiting/suspension-constructs.

Initially, we are adapting techniques from the analysis of POSIX-threaded C to certify the absence of data races in applications written under the OSEK specification, a widely used standard in the European automobile industry. However, the precise analysis of complicated flow will probably require that we go beyond the current framework.

Region Analysis for Certifying Absence of Races

Vesal Vojdani
Technische Universität München, Fakultät für Informatik, Germany

We propose a method for certifying the absence of data-races in programs with shared global arrays and dynamic memory allocation. In particular, we target locking schemes involving arrays of linked lists. Our approach is based on a careful analysis of indexed regions together with a must analysis of static global addresses and finite sets of symbolic lock expressions.
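A toy instance of such a locking scheme, sketched in Python (the analysis itself targets C code such as Linux drivers; this merely illustrates the pattern): lock `locks[i]` protects exactly the list `buckets[i]`, and the analysis must verify that every access to `buckets[i]` happens under the matching lock.

```python
import threading

NBUCKETS = 16
locks = [threading.Lock() for _ in range(NBUCKETS)]
buckets = [[] for _ in range(NBUCKETS)]   # array of (linked) lists

def bucket_of(key):
    return key % NBUCKETS                 # toy hash function

def insert(key, val):
    i = bucket_of(key)
    with locks[i]:                        # locks[i] guards buckets[i] only
        buckets[i].append((key, val))

def lookup(key):
    i = bucket_of(key)
    with locks[i]:
        for k, v in buckets[i]:
            if k == key:
                return v
    return None
```

The indexed correlation between the lock array and the data array is precisely what a region analysis must track to certify race freedom.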

We have implemented the analysis in the concurrent program analyzer Goblint, and practically evaluated it on Linux device drivers.

(Joint work with Helmut Seidl; submitted to CAV 2009)

Last changed November 26, 2014 19:55 EET by local organizers, ewscs09(at)cs.ioc.ee
EWSCS'09 page: //cs.ioc.ee/ewscs/2009/