Hearton Hotel Kyoto
Higashi no Toin Dori Oike Agaru
Nakagyo Ku, Kyoto, Japan 604-0836
Tel +81-75-222-1300
Fax +81-75-222-1313

To book a room, please send an e-mail to ifipwg21-kyoto@e-side.co.jp by August 24, 2007. After this date, the availability of rooms is not guaranteed. Do mention the necessary information (see the sample below). Please also send a message to the secretary (jg@comlab.ox.ac.uk) to indicate your intention to come.
Sample: Scott Macdonald, e-side, Canada; 9/9 in, 9/15 out, 6 nights; twin (single use), non-smoking, vegetarian
No credit card number or advance payment is needed, but please do not cancel your reservation at the last minute.
Cancellation policy: up to 24 August, no penalty; after 24 August but at least 8 days before the meeting, you pay 30% of the cost; 7 days or less before the meeting, you pay 100%.
The costs should be paid in cash to the local host during the meeting. Alcoholic drinks are paid for personally.
Lunch is organized on Friday at noon. If you intend to skip a scheduled meal and do not want to pay for it, please indicate this in your registration, not afterwards.
The hotel has provided a map.
There will be an excursion on Thursday afternoon (see below), and a banquet on Thursday evening.
Between the two stations you will find the river down which we will be coming.
Location        Longitude  Latitude  (deg/min/sec)
Togetsu bridge  135.6776   35.01287  (35 0'46.32"N, 135 40'39.35"E)
Train Station   135.6702   35.01672  (35 1'00.19"N, 135 40'12.56"E)
Upper Station   135.6075   35.01324  (35 0'47.68"N, 135 36'26.83"E)
The fares are 600 yen (train) and 3900 yen (boat or raft) per person. We plan to hire a medium-size bus to move efficiently. I do not yet know the cost of the bus, which will be divided evenly among all participants.
I tried to edit a KML file for Google Earth, but failed. So please operate Google Earth yourself.
And for those who will fly down to the meeting site following Google Earth like Superman, the coordinates of the destination (Hearton Hotel) are 35 0'42.5"N, 135 45'38.0"E.
It may be surprising that variations of the maximum segment sum (MSS) problem, a textbook example for the squiggolists, are still active topics for algorithm designers. I would like to examine the new developments from the viewpoint of relational program calculation.
It turns out that, while the classical MSS problem is solved by the Greedy Theorem, applying the Thinning Theorem yields a linear-time algorithm for MSS with an upper bound on length. To derive an algorithm for the maximum segment density problem, on the other hand, we need a global notion of thinning which is stronger than the Thinning Theorem.
The concepts of left-negative and right-screw segments emerge from the search for monotonicity conditions. The efficiency of the resulting algorithms crucially relies on exploiting properties of the set of partial solutions and designing efficient data structures for them.
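For reference, the classical (unconstrained) MSS problem mentioned above admits the well-known linear-time scan: keep, for each position, the maximum sum of a segment ending there. This is a sketch of that standard algorithm only, not the relational derivation discussed in the talk.

```python
def mss(xs):
    # Classical linear-time MSS: ending_here is the maximum sum of a
    # segment ending at the current position; the answer is the largest
    # value it ever takes.  The empty segment is allowed, so the result
    # is at least 0.
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best
```

The constrained variants in the talk (upper bound on length, segment density) need more than this single running value, which is where thinning and careful data structures come in.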
Jeremy Gibbons, David Lester and Richard Bird recently published a so-called Functional Pearl on an effective way of enumerating the rationals in "Calkin-Wilf" order. They remarked that it is "not obvious" how to do the same in "Stern-Brocot" order. We show, with one algorithm, how to do both. The work is a continuation of our research on improving mathematical method reported on at the last meeting.
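The Calkin-Wilf enumeration from the cited pearl can be written as iteration of Newman's successor formula, successor(q) = 1 / (2*floor(q) + 1 - q). This sketch reproduces only that published enumeration, not the talk's single algorithm covering both orders.

```python
from fractions import Fraction
from math import floor

def cw_next(q):
    # Newman's successor formula: the next rational in Calkin-Wilf order.
    return 1 / (2 * Fraction(floor(q)) + 1 - q)

def calkin_wilf(n):
    # First n rationals in Calkin-Wilf order, starting from 1:
    # 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
    q, out = Fraction(1), []
    for _ in range(n):
        out.append(q)
        q = cw_next(q)
    return out
```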
The torch (flashlight/bridge) problem is a problem of getting a number of people across a bridge with a limited capacity. A solution to the problem when the capacity of the bridge is 2 is well known. I will present a solution for the more general problem where the capacity is an input parameter. The solution has worst-case complexity O(N^{2}) where N is the number of people. Before embarking on the problem, I was told that it is believed to be "hard" and I do not know of any other published solution. The problem is indeed surprisingly difficult to solve, but it is not "hard" in any technical sense. I strongly suspect that my solution can be improved. The conjectured improvements lead to interesting algorithmic challenges.
(with help from Diethard Michaelis and Arjan Mooij)
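As a reference point for the general problem, the well-known capacity-2 solution can be sketched as follows: with crossing times sorted ascending, the two slowest people cross either right after the two fastest, or shuttled individually by the fastest. This is the standard textbook algorithm, not the talk's general-capacity solution.

```python
def torch2(times):
    # Minimum total crossing time when the bridge capacity is 2.
    ts = sorted(times)
    total = 0
    while len(ts) > 3:
        a, b = ts[0], ts[1]      # two fastest
        m, n = ts[-2], ts[-1]    # two slowest
        # Either the two fastest cross first and shuttle the torch back,
        # or the fastest escorts each of the two slowest individually.
        total += min(a + 2 * b + n, 2 * a + m + n)
        ts = ts[:-2]             # the two slowest are across
    if len(ts) == 3:
        total += ts[0] + ts[1] + ts[2]
    elif len(ts) == 2:
        total += ts[1]
    elif len(ts) == 1:
        total += ts[0]
    return total
```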
(Joint work with T.Uustalu)
It has been shown that list homomorphisms, a class of recursive functions suitable for parallel computation on lists, can be automatically derived from a pair of sequential programs based on the third homomorphism theorem and weak inversion. In this talk, we show that it is possible to generalize the framework from lists to trees. We give a tree version of the third homomorphism theorem, and propose a calculation method for systematic derivation of tree homomorphisms that are suitable for divide-and-conquer parallel computation.
(Joint work with Akimasa Morihata.)
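The shape that makes list homomorphisms suitable for parallel computation is hom(xs ++ ys) = op(hom(xs), hom(ys)) for an associative op with unit e. This is a sketch of the list case that the talk generalizes to trees; the names are illustrative.

```python
def hom(op, f, e, xs):
    # A list homomorphism evaluated divide-and-conquer style: split the
    # list, solve the halves (in parallel, in principle), combine with op.
    # op must be associative with unit e for the result to be well defined.
    if not xs:
        return e
    if len(xs) == 1:
        return f(xs[0])
    mid = len(xs) // 2
    return op(hom(op, f, e, xs[:mid]), hom(op, f, e, xs[mid:]))
```

The third homomorphism theorem says that any function computable both as a foldr and as a foldl can be put into this form; the talk's contribution is the analogous result and derivation method for trees.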
Feature-Oriented Software Development (FOSD) provides a multitude of formalisms, methods, languages, and tools for building variable, customizable, and extensible software. Along different lines of research, different ideas of what a feature is have been developed. Although the existing approaches have similar goals, their representations and formalizations have not yet been integrated into a common framework. We present a feature algebra as a foundation of FOSD. The algebra captures the key ideas and provides a common ground for current and future research in this field, in which alternative options can also be explored.
Exercises in mathematics are often solved using a standard procedure, such as solving a system of linear equations by subtracting equations from top to bottom, and then substituting variables from bottom to top. Students have to practice such procedural skills: they have to learn how to apply a particular strategy to an exercise.

E-learning systems offer excellent possibilities for practicing procedural skills. The first explanations and motivation for a procedure that solves a particular kind of problem are probably best taught in a classroom, or studied in a book, but the subsequent practice can often be done behind a computer. There exist many e-learning systems and intelligent tutoring systems that support practicing procedural skills. The tools vary widely in breadth, depth, user interface, etc., but, unfortunately, almost all of them lack sophisticated techniques for providing immediate feedback. If feedback mechanisms are present, they are hard-coded in the tools, often even with the exercises. This situation hampers the usage of e-learning systems for practicing mathematical skills.

This paper introduces a formalism for specifying strategies for solving exercises. It shows how a strategy can be viewed as a language in which sentences consist of transformation steps. Furthermore, it discusses how we can use advanced techniques from computer science, such as term rewriting, strategies, error-correcting parsers, and parser combinators, to provide feedback at each intermediate step from the start towards the solution of an exercise. Our goal is to obtain e-learning systems that give immediate and useful feedback.
(Joint work with Arthur van Leeuwen and Wouter Pasman.)
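The idea of "a strategy as a language whose sentences consist of transformation steps" can be rendered in miniature as follows. The combinator names and the tiny combinator set are illustrative, not the paper's formalism, which is considerably richer.

```python
# A toy strategy language: a strategy denotes a set of step sequences.
def step(x):       return ("step", x)    # a single transformation step
def seq(s, t):     return ("seq", s, t)  # do s, then t
def choice(s, t):  return ("alt", s, t)  # do either s or t

def sentences(s):
    """All step sequences a finite strategy admits.  A tutoring tool can
    check a student's steps for membership in this language, or use it
    to suggest which steps are valid next."""
    tag = s[0]
    if tag == "step":
        return [[s[1]]]
    if tag == "seq":
        return [u + v for u in sentences(s[1]) for v in sentences(s[2])]
    return sentences(s[1]) + sentences(s[2])  # choice
```

For instance, the strategy "first step a or step b, then step c" admits exactly the sentences ac and bc; feedback at an intermediate step amounts to parsing the student's partial sequence against this language.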
In this presentation I will give an overview of my thesis. The main result of my thesis is that we can capture the Visitor pattern as an extensible software component that is parametrizable on both the shape of a concrete visitor and the decomposition strategy used to define functions over the concrete visitors. This contrasts with the traditional presentation, where visitors are presented as a design pattern and both of these parametrization aspects are early design choices fixed in the implementation, basically precluding any form of reuse. Furthermore, it is shown how we can use DGP techniques to write generic functions over visitors defined using our software component. My work builds on some type-theoretic results on encodings of datatypes, the relation between different encodings and different recursion patterns, and the fact that the Visitor design pattern and encodings of datatypes are closely related.
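The close relation between the Visitor pattern and encodings of datatypes can be shown in miniature: below, an expression "datatype" exposes only an accept method taking one function per constructor, so a visitor is just that tuple of functions. This is an illustration of the connection, not the thesis's extensible visitor component.

```python
# A tiny expression type in visitor/encoding style: the structure is only
# observable through accept, which plays the role of a fold.
class Lit:
    def __init__(self, n):
        self.n = n
    def accept(self, lit, add):
        return lit(self.n)

class Add:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def accept(self, lit, add):
        return add(self.x.accept(lit, add), self.y.accept(lit, add))

def eval_expr(e):
    # The "evaluation visitor": interpret Lit as its value and Add as +.
    return e.accept(lambda n: n, lambda a, b: a + b)
```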
We develop virtual machines and compilers for a multi-level language, which supports multi-stage specialization by composing program fragments with quotation mechanisms. We consider two styles of virtual machines---ones equipped with special instructions for code generation and ones without---and show that the latter kind can more easily deal with low-level code generation, which avoids the overhead of (run-time) compilation by manipulating instruction sequences, rather than source-level terms, as data. The virtual machines and accompanying compilers are derived by program transformation, extending Ager et al.'s derivation of virtual machines from evaluators.
(Joint work with Masashi Iwaki)
We show that cyclic structures, i.e., finite or possibly infinite structures with back-pointers, unwindable into possibly infinite structures, can be elegantly represented as nested datatypes. This representation is free of the various deficiencies characterizing the more naive representation as mixed-variant datatypes. It is inspired by the representation of lambda-terms as a nested datatype via the de Bruijn notation.
(Joint work with N.Ghani, M.Hamana, T.Uustalu)
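The de Bruijn-style idea behind the representation can be sketched as follows: a cyclic list is built from nil/cons/rec/var, where var(k) points back through k enclosing rec binders, and unwinding yields a possibly infinite stream. The nested-datatype typing itself cannot be expressed in Python; this untyped sketch only illustrates the term structure and the unwinding, not the authors' definitions.

```python
from itertools import islice

# Constructors of a de Bruijn-style cyclic list.
def nil():       return ("nil",)
def cons(x, t):  return ("cons", x, t)
def rec(t):      return ("rec", t)     # binds a back-pointer target
def var(k):      return ("var", k)     # points back k rec binders out

def unwind(t, env=()):
    """Generate the (possibly infinite) unwinding of a closed cyclic list.
    env is the stack of enclosing rec terms, innermost first."""
    while True:
        tag = t[0]
        if tag == "nil":
            return
        if tag == "cons":
            yield t[1]
            t = t[2]
        elif tag == "rec":
            env = (t,) + env          # var(0) now refers back here
            t = t[1]
        else:                         # var(k): jump back to its binder
            k = t[1]
            t, env = env[k], env[k + 1:]
```

For example, rec(cons(1, cons(2, var(0)))) unwinds to the infinite stream 1, 2, 1, 2, ...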
Specware is the "flagship" system of Kestrel Institute, developed over a number of years as a system that supports the correct-by-construction paradigm. An earlier system, KIDS (Kestrel Interactive Development System, Doug Smith, 1990), was designed for, and excelled in, transformational programming in the small. Specware was designed for system development in the large; unfortunately, the initial designers paid scant attention to the convenience of doing the small steps. Recently, Kestrel Institute has begun to address this issue.
The talk is a progress report, discussing an improved notion of refinement between specs based on a new notion of the "observable" ops of a spec, and newly added transformational capabilities. This relatively small effort has already achieved a marked improvement in the practical usability of Specware.
(Joint work with Alessandro Coglio and Stephen Westfold, Kestrel Institute)
Agda is a functional language for programming with dependent types. In this tutorial/demo I present a subset of the notation, starting from basic lambda calculus, through simple datatypes, datatypes with strong invariants, the Curry-Howard isomorphism and equational reasoning. The strong type system means that theorems can be expressed as types and proofs can be expressed as programs, with the Agda implementation acting as a proof checker.
Recently, Garcia, Järvi et al. presented "An extended comparative study of language support for generic programming" [JFP 2007], in which Haskell scores best of the eight languages compared (C++, SML, OCaml, Haskell, Eiffel, Java, C#, Cecil). There are numerous (more than eight) approaches to generic programming in Haskell alone, and so far no clear winner has emerged. This talk presents work in progress on comparing libraries for generic programming in Haskell with respect to implementation techniques, expressiveness and efficiency.
(Joint work with Johan Jeuring et al.)
Program calculation is a methodology for easy construction of efficient algorithms, where efficient algorithms are systematically derived from naive but evidently correct algorithms by calculational laws. In this presentation, I propose a new calculational law for combinatorial optimization, which is not only helpful for deriving dynamic programming algorithms for a wide class of problems, but also suitable for automatic implementation. I show a derivation of an algorithm for resource-constrained shortest path problems, and explain how my calculational law works.
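Dynamic programming algorithms for problems like resource-constrained shortest paths typically keep, at each state, only the Pareto-optimal (cost, resource) partial solutions. The helper below sketches that "thinning" step; it is an illustration of the general idea, not the talk's calculational law.

```python
def thin(pairs):
    # Keep only Pareto-optimal (cost, resource) pairs: a pair survives
    # iff no other pair is at least as good in both components.
    # Sorting makes a single left-to-right scan sufficient: a pair is
    # kept exactly when its resource beats every cheaper pair's.
    best = float("inf")
    frontier = []
    for c, r in sorted(pairs):
        if r < best:
            frontier.append((c, r))
            best = r
    return frontier
```

Because dominated partial solutions can never be extended into an optimal complete solution, thinning after each extension step keeps the state sets small without losing the optimum.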
We have been using algebraic axiom systems for a variety of application areas over the last few years. Recently, it turned out that these lend themselves easily to mechanization via existing theorem provers, notably Prover9/Mace4 by W. McCune:

http://www.cs.unm.edu/~mccune/mace4/

The axioms can be typed in directly, without extra preparation, and the system is able to prove relevant consequences fully automatically at surprising speed, although guidance through lemmas may help in proving more complicated results. A repertory of the topics that have already been treated in this way is available under

http://www.informatik.uni-augsburg.de/~hoefnepe/kleene_db/index.html

In this talk we exemplify the technique with part of the axiom system from the presentation "An Algebra for Feature-Oriented Software Development" and several theorems from term rewriting and refinement calculus. Further examples can be found in the following papers:

P. Höfner, G. Struth: Automated Reasoning in Kleene Algebra. In F. Pfenning (ed.): CADE 2007. LNAI 4603. Springer 2007, 279-294.
P. Höfner, G. Struth: Can Refinement be Automated? In E. Boiten, J. Derrick, G. Smith (eds.): International Refinement Workshop - Refine 2007. ENTCS, 2007.

I propose a semiring formulation for reasoning about an agent's changing beliefs: a dynamic epistemic semiring (DES). A DES is a modal semiring extended with epistemic-action operators. I concentrate on the revision operator by proposing an axiomatisation, developing a basic calculus and deriving the classical AGM revision axioms in the algebra.
Rabin's (distributed, probabilistic) mutual exclusion algorithm addresses the liveness of resource sharing by guaranteeing a probabilistic lower bound on the chance that a process competing for a shared resource will actually get it. At its core is a clever and unexpected mathematical "gem" around which the details of concurrency and overlapping executions are set: the gem is not obvious, but is nevertheless easily proved; the details of organising the concurrency, on the other hand, are obvious but not so easily proved. In fact there was an error in Rabin's original presentation.
"Horses for Courses" means choosing the right tool for the job. Proof of Rabin's algorithm seems so difficult, in full rigour, that McIver decided to split it up into "layers", one of which --the most intricate-- would be handled by a probabilistic extension of Cohen's Separation-and-Reduction technique, itself based on (non-probabilistic) Kleene Algebra. (Study of that algebra is the connection with WG2.1's interests.)
The talk summarises the history of the problem, and the proposed solution strategy, and sketches what the progress so far has delivered. There turn out to be interesting links between pKA and other Kleene-style algebras invented for a wholly different purpose.
Large specifications (today also referred to as "models") exhibit the same problem as large programs: they need to be developed, ideally in a systematic derivation process. We discuss means for such a "model evolution process" based on the formalism of Especs (comparable to Statecharts) and generalized refinement morphisms. To this end we also employ some kind of modal operators. The concepts can also be used to assist other development aspects, such as the weaving of global policies into local specifications. The approach treats discrete models as well as continuous or hybrid models.
(Joint work with Doug Smith and Dusko Pavlovic)
Attachment: kyo12morgan.zip (5449.1 K, 17 Sep 2007, uploaded by JohanJeuring)