Note: The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
[pdf] Olivia Nakayima, Mostafa I. Soliman, Kazunori Ueda, and Samir A. Elsagheer, Impact of Network Complexity, Mobility Models, and Wireless Technologies on SDN-DTN Performance in Internet of Vehicles (IoV) Environments. International Journal of Computer Networks and Applications (IJCNA), Vol.11, No.4, pp.449-467, 2024, DOI: 10.22247/ijcna/2024/28.
The Internet of Vehicles (IoV) is a rapidly advancing field that necessitates highly efficient data transmission, particularly for services like video streaming, which require seamless delivery of audio and video content. However, the dynamic nature and high mobility inherent in IoV present significant challenges, impacting key metrics such as initial start time, stall count, and stall length. Existing research has addressed some optimization factors, yet comprehensive solutions that consider network complexities, node mobility patterns, and varying wireless technologies are still needed. This study proposes a model combining Software-Defined Networking (SDN) and Delay-Tolerant Networking (DTN) concepts to address these challenges. By exploring the impact of network complexities, node mobility patterns, and wireless technologies (including 802.11g and 802.11ax) on video streaming performance in both intra- and inter-IoV networks, the study identifies critical factors contributing to optimal performance. The proposed SDN-DTN based model dynamically adapts to changing network conditions, thereby minimizing initial start times, reducing stall counts, and shortening stall lengths compared to existing methods. This approach offers valuable insights into designing robust and adaptive IoV infrastructure, emphasizing the importance of addressing complex network dynamics for seamless video streaming. The findings highlight the necessity of integrating advanced networking technologies to overcome the inherent challenges of IoV, ensuring a more resilient and efficient data transmission framework.
[pdf] [slides] Naoki Yamamoto and Kazunori Ueda, Grammar-based Pattern Matching and Type Checking for Difference Data Structures. In Proceedings of the 26th International Symposium on Principles and Practice of Declarative Programming (PPDP '24), ACM, 2024, Article No. 13, pp.1-13, DOI: 10.1145/3678232.3678243.
We propose to extend the concept of difference lists to general data structures and call them difference data structures. Difference lists are well known in logic programming as a data structure that realizes constant-time concatenation by retaining a reference to the tail as well as the head of the list. In recent years, they have also been called list segments and appear as a typical example in separation logic, a logic for handling heaps and pointers. Difference data structures can be utilized to discuss various important concepts in programming languages, including functions, evaluation contexts, and continuations, in a unified setting. To handle such structures succinctly and safely, we base this work on a graph rewriting language. We propose a typing method for graph rewriting languages based on graph grammar, which covers structures beyond algebraic data types, to allow users to define difference data types for two major applications: (i) users can describe powerful pattern matching with user-defined shapes, and (ii) the implementation can statically type-check operations on difference data structures efficiently. Our method introduces the concept of difference at the meta-level, i.e., difference data types are automatically derived from the base data type defined by graph grammar, so that we can treat the base structure and difference structure uniformly without type conversion.
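The constant-time concatenation of difference lists mentioned above can be sketched informally outside the paper's graph-rewriting setting. The following toy Python illustration (all names are ours, not the paper's) uses the classic functional encoding: a difference list is a function from "the rest of the list" to the whole list, so concatenation is just function composition, with the concrete list materialized once at the end.

```python
# Toy functional analogue of difference lists (not the paper's graph encoding).
# A difference list is a function: tail -> whole list.

def dlist(*xs):
    """Build a difference list holding xs."""
    return lambda tail: list(xs) + tail

def dappend(d1, d2):
    """Constant-time concatenation: compose the two functions."""
    return lambda tail: d1(d2(tail))

def to_list(d):
    """Materialize the list by plugging in the empty tail."""
    return d([])

# Appends are O(1) closures; only to_list walks the elements once.
d = dappend(dlist(1, 2), dappend(dlist(3), dlist(4, 5)))
assert to_list(d) == [1, 2, 3, 4, 5]
```

In the paper, the analogous "hole for the tail" is a distinguished free link of a graph, which is what the derived difference data types track.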
[pdf] [slides] Haruto Mishina and Kazunori Ueda, Introducing Quantification into a Hierarchical Graph Rewriting Language. In Proc. 34th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2024), LNCS 14919, Springer-Verlag, 2024, pp.220-239, DOI: 10.1007/978-3-031-71294-4_13.
LMNtal is a programming and modeling language based on hierarchical graph rewriting that uses logical variables to represent connectivity and membranes to represent hierarchy. On the theoretical side, it allows logical interpretation based on intuitionistic linear logic; on the practical side, its full-fledged implementation supports a graph-based parallel model checker and has been used to model diverse applications including various computational models. This paper discusses how we extend LMNtal to QLMNtal (LMNtal with Quantification) to further enhance the usefulness of hierarchical graph rewriting for high-level modeling by introducing quantifiers into rewriting as well as matching. These quantifiers allow us to express universal quantification, cardinality and non-existence in an integrated manner. Unlike other attempts to introduce quantifiers into graph rewriting, QLMNtal has term-based syntax, whose semantics is smoothly integrated into the small-step semantics of the base language LMNtal. The proposed constructs allow combined and nested use of quantifiers within individual rewrite rules.
[slides] Kazunori Ueda, Gentle Introduction to LMNtal: Language Design and Implementation. Tutorial presented at the 17th International Conference on Graph Transformation (ICGT 2024), Enschede, the Netherlands, July 2024.
LMNtal (pronounced 'elemental') is a programming and modeling language based on hierarchical graph rewriting that uses (i) immutable variables (interpreted as point-to-point links) to represent connectivity and (ii) membranes to represent hierarchy and locality. It was initially designed to be a substrate language of diverse computational models, especially those addressing concurrency, mobility, and multiset rewriting, including Interaction Nets, Constraint Handling Rules and Bigraphs. LMNtal provides a graphical view of programs and computation, but unlike many other graph rewriting languages based on the (dominant) algebraic approach, it is formulated in the standard style of programming languages, i.e., term-based abstract syntax and structural operational semantics, to retain its affinity with other programming paradigms. LMNtal is sometimes called a logic programming language because it allows interpretation based on intuitionistic linear logic. On the practical side, the language comes with a full-fledged, compiler-based, open-source implementation, which supports both (i) concurrent execution of rewrite rules and (ii) graph-based parallel model checking that scales up to 10^9 states. In addition, the tool LaViT supports visualization of graphs and of the state space explored in model checking, both of which turn out to be extremely useful in understanding key properties of programs. The tutorial will provide a gentle introduction (with a number of examples and demonstrations) to various aspects of the language and its implementation.
[pdf] Olivia Nakayima, Mostafa I. Soliman, Kazunori Ueda, and Samir A. Elsagheer, Enhancing Bundle Delivery Efficiency in Mobile Ad-hoc Networks with a Multi-protocol Delay-Tolerant Network. Proceedings of the 21st ACM International Conference on Computing Frontiers (CF'24), ACM, 2024, pp.306-309, DOI: 10.1145/3649153.3649178.
Mobile Ad-hoc Networks (MANETs) are challenged by highly dynamic network conditions like delayed or frequently disrupted communication. Traditional MANETs typically rely on a single Delay-Tolerant Network (DTN) routing protocol across all nodes, which can limit efficiency in unfavourable network conditions. To address this, our research introduces a dynamic approach, employing multiple DTN routing protocols concurrently within a MANET. Each node can adaptively operate on a distinct DTN routing protocol based on the prevailing network conditions. This strategy harnesses the strengths of each protocol while mitigating their limitations, resulting in a notable enhancement in the delivery ratio, as evidenced by simulations of different network scenarios. Comparatively, this multi-protocol approach outperforms single-protocol MANETs, as simulated by the Opportunistic Network Environment (ONE) simulator. The findings emphasize that a multi-protocol strategy enhances adaptability, robustness, and overall performance in MANETs, offering a transformative approach to wireless communication network management and operation.
[pdf] Olivia Nakayima, Mostafa I. Soliman, Kazunori Ueda, and Samir A. Elsagheer Mohamed, Combining Software-Defined and Delay-Tolerant Networking Concepts With Deep Reinforcement Learning Technology to Enhance Vehicular Networks. IEEE Open Journal of Vehicular Technology, Vol.5, 2024, pp.721-736, DOI: 10.1109/OJVT.2024.3396637.
Ensuring reliable data transmission in all Vehicular Ad-hoc Network (VANET) segments is paramount in modern vehicular communications. Vehicular operations face unpredictable network conditions which affect routing protocol adaptiveness. Several solutions have addressed those challenges, but each has noted shortcomings. This work proposes a centralised-controller multi-agent (CCMA) algorithm based on Software-Defined Networking (SDN) and Delay-Tolerant Networking (DTN) principles, to enhance VANET performance using Reinforcement Learning (RL). This algorithm is trained and validated with a simulation environment modelling the network nodes, routing protocols and buffer schedules. It optimally deploys DTN routing protocols (Spray and Wait, Epidemic, and PRoPHETv2) and buffer schedules (Random, Defer, Earliest Deadline First, First In First Out, Largest/Smallest Bundle First) based on network state information (i.e., traffic pattern, buffer size variance, node and link uptime, bundle Time To Live (TTL), link loss and capacity). These are implemented in three environment types: Advanced Technological Regions, Limited Resource Regions and Opportunistic Communication Regions. The study assesses the performance of the multi-protocol approach using the metrics TTL, buffer management, link quality, delivery ratio, latency and overhead scores for optimal network performance. Comparative analysis with single-protocol VANETs (simulated using the Opportunistic Network Environment (ONE)) demonstrates the improved performance of the proposed algorithm in all VANET scenarios.
[pdf] Hussaini Aliyu Idris, Kazunori Ueda, Bassem Mokhtar, Samir A. Elsagheer Mohamed, Machine Learning Based Misbehavior Detection System for False Data Injection Attack in Internet of Vehicles Using Neighbor Public Transport Vehicle Approach. International Journal of Computer Networks and Applications (IJCNA), Vol.11, No.2, pp.159-176, 2024, DOI: 10.22247/ijcna/2024/224442.
The integration of the Internet of Vehicles (IoV) into the Intelligent Transportation System (ITS) has significantly improved its operations, leading to a reduction in road traffic accidents, efficient traffic control, and a decrease in carbon emissions for a more sustainable environment aligned with the Sustainable Development Goals (SDGs). However, the adoption of IoV networks introduces privacy and security challenges. Although cryptographic techniques such as public-key infrastructure (PKI) proposed by standardization bodies like IEEE and ETSI provide protection against outsider attackers, they fail to address the threat posed by insider attackers. To overcome this limitation, researchers have proposed data-centric machine learning-based misbehavior detection frameworks that focus on identifying and mitigating insider attacks. However, existing approaches primarily rely on the Basic Safety Message (BSM) data received from a single vehicle, which allows attackers to manipulate the BSM data without being detected. In this paper, we present a novel data-centric misbehavior detection framework specifically designed to detect false data injection attacks in IoV networks. Our approach leverages neighboring public transportation vehicles (NPTVs) to enhance the detection capabilities. By incorporating the BSM data from NPTVs, we demonstrate the effectiveness of our proposed framework in different scenarios using deep learning, decision tree, and random forest algorithms. Through extensive evaluation, we achieved precision, recall, F1-Score, and accuracy rates of up to 99%, showcasing the superior performance of our approach.
[pdf] Hussaini Aliyu Idris, Kazunori Ueda, Bassem Mokhtar, Samir A. Elsagheer Mohamed, Novel Intelligent BSM Falsification Attack Detection System Using Trusted Neighbor Vehicle Approach in IoV. International Journal of Computing, Vol.23, No.1, 2024, pp.116-125, DOI: 10.47839/ijc.23.1.3443.
The proliferation of cyberattacks has emerged as a significant obstacle for advancing technologies such as the Internet of Things (IoT) and Internet of Vehicles (IoV) in recent times. Notably, cryptographic security measures have been implemented in IoV to counteract these cyberattacks. However, these security measures are inadequate when it comes to thwarting internal attackers within the network, as these attackers possess the necessary security credentials for authenticating basic safety messages (BSMs). The research community has made substantial contributions by proposing misbehavior detection systems (MDS) based on data-centric machine learning to identify and prevent internal attackers within IoV. Nevertheless, the existing MDSs in the literature rely on BSMs received from a single vehicle, thereby enabling internal attackers to manipulate their falsified BSMs and evade detection, resulting in a high incidence of false alarms. In this study, we introduce a new intelligent system for detecting falsified BSMs, employing a trusted neighbor vehicle approach (NIBFADS-UTVA). Our approach demonstrates exceptional effectiveness, achieving an accuracy, precision, recall, and F1-Score all exceeding 99%.
[pdf] Norhan Elsayed Amer, Walid Gomaa, Keiji Kimura, Kazunori Ueda and Ahmed El-Mahdy, On the optimality of quantum circuit initial mapping using reinforcement learning. EPJ Quantum Technology, Vol.11, Article No. 19, 2024, 19 pp., DOI: 10.1140/epjqt/s40507-024-00225-1.
Quantum circuit optimization is an inevitable task with the current noisy quantum backends. This task is considered non-trivial due to the varying circuits' complexities in addition to hardware-specific noise, topology, and limited connectivity. The currently available methods either rely on heuristics for circuit optimization tasks or on reinforcement learning with complex, unscalable neural networks such as transformers. In this paper, we are concerned with optimizing the initial logical-to-physical mapping selection. Specifically, we investigate whether a reinforcement learning agent with a simple, scalable neural network is capable of finding a near-optimal logical-to-physical mapping, one that minimizes the number of additional CNOT gates, only from a fixed-length feature vector. To answer this question, we train a Maskable Proximal Policy Optimization agent to progressively take steps towards a near-optimal logical-to-physical mapping on a 20-qubit hardware architecture. Our results show that our agent coupled with a simple routing evaluation is capable of outperforming other available reinforcement learning and heuristics approaches on 12 out of 19 test benchmarks, achieving geometric mean improvements of 2.2% and 15% over the best available related work and two heuristics approaches, respectively. Additionally, our neural network model scales linearly as the number of qubits increases.
[pdf] Jin Sano and Kazunori Ueda, Implementing the λ_{GT} Language: A Functional Language with Graphs as First-Class Data. In Proc. 16th International Conference on Graph Transformation (ICGT 2023), LNCS 13961, Springer-Verlag, 2023, pp.263-277, DOI: 10.1007/978-3-031-36709-0_14.
Several important data structures in programming are beyond trees; for example, difference lists, doubly-linked lists, skip lists, threaded trees, and leaf-linked trees. They can be abstracted into graphs (or more precisely, port hypergraphs). In existing imperative programming languages, these structures are handled with destructive assignments to heaps as opposed to a purely functional programming style. These low-level operations are prone to errors and not straightforward to verify. On the other hand, purely functional languages allow handling data structures declaratively with type safety. However, existing purely functional languages are incapable of dealing with data structures other than trees succinctly. In earlier work, we proposed a new purely functional language, λ_{GT}, that handles graphs as immutable, first-class data structures with pattern matching and designed a new type system for the language. This paper presents a prototype implementation of the language constructed in about 500 lines of OCaml code. We believe this serves as a reference interpreter for further investigation, including the design of full-fledged languages based on λ_{GT}.
[pdf] Ryo Suzuki, Kazunori Ueda and Shigekazu Sakai: Feature Enhancements and Dissemination Activities of Siv3D, a Framework for Information Visualization and Interaction. Computer Software, Vol.40, No.3, 2023, pp.50-72, DOI: 10.11309/jssst.40.3_50.
Siv3D, a C++ framework for information visualization and interaction, has undergone more than 5,500 feature additions and improvements in its public repository since it became open-source software in 2016. These include various new features and API redesigns informed by recent advances in hardware, development tools, and language specifications, as well as by analysis of user use cases, together with support for C++17/C++20. This paper presents the design decisions and changes made during the five years since Siv3D became OSS, along with its dissemination activities, and shares insights useful for the design and operation of a modern C++ framework for information visualization and interaction.
[pdf] Jin Sano, Naoki Yamamoto and Kazunori Ueda: Type Checking Data Structures More Complex Than Trees. Journal of Information Processing, Vol.31, 2023, pp.112-130. DOI: 10.2197/ipsjjip.31.112.
Graphs are a generalized concept that encompasses more complex data structures than trees, such as difference lists, doubly-linked lists, skip lists, and leaf-linked trees. Normally, these structures are handled with destructive assignments to heaps, which is opposed to a purely functional programming style and makes verification difficult. We propose a new purely functional language, λ_{GT}, that handles graphs as immutable, first-class data structures with a pattern matching mechanism based on Graph Transformation and developed a new type system, F_{GT}, for the language. Our approach is in contrast with the analysis of pointer manipulation programs using separation logic, shape analysis, etc. in that (i) we do not consider destructive operations but pattern matchings over graphs provided by the new higher-level language that abstract pointers and heaps away and that (ii) we pursue what properties can be established automatically using a rather simple typing framework.
[pdf] Norhan Elsayed Amer, Walid Gomaa, Keiji Kimura, Kazunori Ueda and Ahmed El-Mahdy: On the Learnability of Quantum State Fidelity. EPJ Quantum Technology, Vol.9, Article No. 31, 2022, 23 pp., DOI: 10.1140/epjqt/s40507-022-00149-8.
Current quantum processing technology is generally noisy with a limited number of qubits, stressing the importance of quantum state fidelity estimation. The complexity of this problem is mainly due to accounting not only for single-gate and readout errors but also for the interactions among them. Existing methods generally rely on either reconstructing the given circuit's state and the ideal state and computing the distance between them, or forcing the system to be on a specific state. Both rely on conducting circuit measurements, in which computational efficiency is traded off against the obtained fidelity details, requiring an exponential number of experiments for full information. This paper poses the question: Is the mapping between a given quantum circuit and its state fidelity learnable? If learnable, this would be a step towards an alternative approach that relies on machine learning, providing much more efficient computation. To answer this question, we propose three deep learning models for 1-, 3-, and 5-qubit circuits and experiment on real quantum processors: the ibmq armonk (1-qubit), ibmq lima (5-qubit) and ibmq quito (5-qubit) backends, respectively. Our models achieved a mean correlation factor of 0.74, 0.67 and 0.66 for 1-, 3-, and 5-qubit random circuits, respectively, against the exponential state tomography method. Additionally, our 5-qubit model outperforms a simple baseline state fidelity estimation method on three quantum benchmarks. Our method, trained on random circuits only, achieved a mean correlation factor of 0.968 while the baseline method achieved 0.738. Furthermore, we investigate the effect of dynamic noise on state fidelity estimation. The correlation factor substantially improved to 0.82 and 0.74 for the 3- and 5-qubit models, respectively. The results show that machine learning is promising for predicting state fidelity from circuit representation, and this work may be considered a step towards efficient end-to-end learning.
[pdf] Naoki Yamamoto and Kazunori Ueda: Engineering Grammar-based Type Checking for Graph Rewriting Languages. IEEE Access, Vol.10, 2022, pp.114612-114628. DOI: 10.1109/ACCESS.2022.3217913.
The ability to handle evolving graph structures is important both for programming languages and modeling languages. Of various languages that adopt graphs as primary data structures, a graph rewriting language LMNtal provides features of both (concurrent) programming languages and modeling languages, and its implementation unifies ordinary program execution and model checking functionalities. Unlike pointer manipulation in imperative languages, LMNtal allows us to manipulate graph structures in such a way that the well-formedness of graphs is guaranteed by the language itself. However, since the shapes of graphs can be complex and diverse compared to algebraic data structures such as lists and trees, it is a non-obvious and important task to formulate types of graphs to verify individual programs. With this motivation, this paper discusses LMNtal ShapeType, a type checking framework that applies the basic idea of Structured Gamma to a concrete graph rewriting language. Types are defined by generative grammars written as LMNtal rules, and type checking of LMNtal programs can accordingly be done by exploiting the model checking features of LMNtal itself. We gave a full implementation of type checking exploiting the features of the LMNtal meta-interpreter and confirmed that it works for practical operations on various graph structures, including single-step and multi-step operations on non-algebraic data structures and data structures with numerical shape constraints.
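The shape-checking idea described above can be illustrated informally: a type names a family of graphs generated by a grammar, and checking amounts to verifying that a concrete structure could have been generated by it. The following toy Python sketch (our own construction, much simpler than ShapeType and with no LMNtal involved) checks the local invariants that characterize a well-formed doubly-linked list, i.e., a shape that a grammar like `Dll ::= empty | node · Dll` with matched prev/next links would generate.

```python
# Toy shape check for doubly-linked lists (illustrative only; not ShapeType).

class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def link(a, b):
    """Wire a -> b in both directions."""
    a.next, b.prev = b, a

def is_wellformed_dll(head):
    """A heap of nodes has the doubly-linked-list shape iff, walking from
    the head, every back pointer matches the node we came from and the
    walk terminates (no cycle)."""
    seen = set()
    node, prev = head, None
    while node is not None:
        if id(node) in seen:        # a cycle violates the list shape
            return False
        seen.add(id(node))
        if node.prev is not prev:   # back pointer must match predecessor
            return False
        prev, node = node, node.next
    return True

a, b, c = Node(1), Node(2), Node(3)
link(a, b); link(b, c)
assert is_wellformed_dll(a)
b.prev = c                          # corrupt one back pointer
assert not is_wellformed_dll(a)
```

The point of the grammar-based framework is that such checks need not be hand-written per structure: they are derived from the type's defining rules and discharged by model checking.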
[pdf] Alimujiang Yasen and Kazunori Ueda: Revisiting Graph Types in HyperLMNtal: A Modeling Language for Hypergraph Rewriting. IEEE Access, Vol.9, 2021, pp.133449-133460. DOI: 10.1109/ACCESS.2021.3112903.
Hypergraphs are a highly expressive data structure for modeling and programming, for which high-level language constructs are yet to be established. HyperLMNtal is a modeling language based on hypergraph rewriting. Rewrite rules can copy and remove subgraphs identified by graph types, which serve as a wildcard construct that enables powerful handling of subgraphs. HyperLMNtal has featured several graph types over the years, enabling it to encode various computational models. Important applications of graph types include the modeling of formal systems involving name binding, including the lambda-calculus. However, the concept of graph types for hypergraphs has not been studied in sufficient detail, and our recent work revealed that the encoding of a unification algorithm modulo alpha-equivalence requires further evolution of graph types. This paper describes the motivation for and redesign of a graph type for handling subgraphs appearing in the above applications and conducts experiments to show performance improvements. We believe that the idea of graph types could be useful in programming and modeling languages in general and is worth further investigation and deployment.
[pdf] Naoki Yamamoto and Kazunori Ueda: Engineering Grammar-based Type Checking for Graph Rewriting Languages. In Proc. Twelfth International Workshop on Graph Computation Models (GCM 2021), June 2021, pp.93-114.
The ability to handle evolving graph structures is important both for programming languages and modeling languages. Of various languages that adopt graphs as primary data structures, a graph rewriting language LMNtal provides features of both (concurrent) programming languages and modeling languages, and its implementation unifies ordinary program execution and model checking functionalities. Unlike pointer manipulation in imperative languages, LMNtal allows us to manipulate graph structures in such a way that the well-formedness of graphs is an invariant guaranteed by the language itself. However, since the shapes of graphs can be complex and diverse compared to algebraic data structures such as lists and trees, it is a non-obvious and important task to formulate types of graphs to verify individual programs. With this motivation, this paper discusses LMNtal ShapeType, a type checking framework that applies the basic idea of Structured Gamma to a concrete graph rewriting language. Types are defined as generative grammars written as LMNtal rules, and type checking of LMNtal programs can be done by exploiting the model checking features of LMNtal itself. We gave a full implementation of type checking using the features of the LMNtal meta-interpreter.
[pdf] [slides] Yunosuke Yamada, Masashi Sato and Kazunori Ueda: Constraint-based Modeling and Symbolic Simulation of Hybrid Systems with HydLa and HyLaGI. In Proc. 9th International Workshop on Model-Based Design of Cyber-Physical Systems (CyPhy 2019), LNCS 11971, Springer-Verlag, 2020, pp.153-178, DOI: 10.1007/978-3-030-41131-2_8.
Hybrid systems are dynamical systems that include both continuous and discrete changes. Modeling and simulation of hybrid systems can be challenging due to various subtleties in their behavior. The declarative modeling language HydLa aims at concise description of hybrid systems by means of constraints and constraint hierarchies. HyLaGI, a publicly available symbolic simulator of HydLa, features error-free computation with symbolic parameters. Based on symbolic computation, HyLaGI provides various useful functionalities including nondeterministic execution, handling of infinitesimal quantities, and construction of hybrid automata. Nondeterministic execution in the framework of constraint programming enables us to solve inverse problems by automatic parameter search. This paper introduces these features by means of example programs. Another objective of the paper is to discuss our experiences with HydLa programming, which is unique in that its data and control structures are both based on constraint technologies. We discuss its expressive power and our experiences of modeling hybrid systems using constraint hierarchies.
[pdf] Taichi Tomioka, Yutaro Tsunekawa and Kazunori Ueda: Introducing Symmetry to Graph Rewriting Systems with Process Abstraction. In Proc. 12th International Conference on Graph Transformation (ICGT 2019), LNCS 11629, Springer-Verlag, 2019, pp.3-20, DOI: 10.1007/978-3-030-23611-3_1.
Symmetry reduction in model checking is a technique for reducing state spaces by exploiting the inherent symmetry of models, i.e., the interchangeability of their subcomponents. Model abstraction, which abstracts away the details of models, often strengthens the symmetry of the models. Graph rewriting systems allow us to express models in such a way that inherent symmetry manifests itself with graph isomorphism of states. In graph rewriting, the synergistic effect of symmetry reduction and model abstraction is obtained under graph isomorphism. This paper proposes a method for abstracting programs described in a hierarchical graph rewriting language LMNtal. The method automatically finds and abstracts away subgraphs of a graph rewriting system that are irrelevant to the results of model checking. The whole framework is developed within the syntax and the formal semantics of the modeling language LMNtal without introducing new domains or languages. We show that the proposed abstraction method combined with symmetry reduction reduces state spaces while preserving the soundness of model checking. We implemented the method on SLIM, an implementation of LMNtal with an LTL model checker, tested it with various concurrent algorithms, and confirmed that it automatically reduces the number of states by successfully extracting the symmetry of models.
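The interchangeability of subcomponents exploited above can be made concrete with a small counting example. The following toy Python sketch (our own illustration; no LMNtal or SLIM involved) models k identical processes, each in one of three local states: since permuting symmetric processes yields equivalent global states, picking a canonical representative (here, the sorted tuple) collapses the raw state space from 3^k states to the number of multisets.

```python
# Toy illustration of symmetry reduction by canonicalization.

from itertools import product
from math import comb

def raw_states(k):
    """All global states of k processes, each in local state 0, 1, or 2."""
    return set(product(range(3), repeat=k))

def reduced_states(k):
    """Quotient by process permutation: sort each state's components."""
    return {tuple(sorted(s)) for s in raw_states(k)}

k = 5
assert len(raw_states(k)) == 3 ** k               # 243 raw states
assert len(reduced_states(k)) == comb(k + 2, 2)   # 21 multisets of size k over 3 symbols
```

In graph rewriting the same quotient comes for free: symmetric states are isomorphic graphs, which is why abstraction that strengthens symmetry directly shrinks the explored state space.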
[pdf] Muhammad Hataba, Ahmed El-Mahdy and Kazunori Ueda: Generation of Efficient Obfuscated Code Through Just-In-Time Compilation. IEICE Transactions on Information and Systems, Vol.E102-D, No.3, 2019, pp.645-649, DOI: 10.1587/transinf.2018EDL8180.
Nowadays, computing technology is going through a major paradigm shift. Local processing platforms are being replaced by physically out-of-reach yet more powerful and scalable environments such as cloud computing platforms. Previously, we introduced the OJIT system as a novel approach for obfuscating remotely executed programs, making them difficult for adversaries to reverse-engineer. The system exploited JIT compilation technology to randomly and dynamically transform the code, making it constantly changing, thereby complicating the execution state. This work proposes a new design, iOJIT, as an enhanced approach that patches the old system's shortcomings and potentially provides more effective obfuscation. Here, we present an analytic study of the obfuscation techniques on the generated code and the cost of applying such transformations in terms of execution time and performance overhead. Based upon this profiling study, we implemented a new algorithm to choose which obfuscation techniques to apply for "efficient" obfuscation according to our metrics, i.e., less prone to security attacks. Another goal was to study the system performance with different applications. Therefore, we applied our system on a cloud platform running different standard benchmarks from the SPEC suite.
[pdf] Alimujiang Yasen and Kazunori Ueda, Name Binding is Easy with Hypergraphs. IEICE Transactions on Information and Systems, Vol.E101-D, No.4, 2018, pp.1126-1140, DOI: 10.1587/transinf.2017EDP7257.
We develop a technique for representing variable names and name binding, a mechanism for associating a name with an entity in many formal systems including logic, programming languages and mathematics. The idea is to use a general form of graph links (or edges) called hyperlinks to represent variables, graph nodes as constructors of the formal systems, and a graph type called hlground to define substitutions. Our technique is based on simple notions of graph theory in which graph types ensure correct substitutions and keep bound variables distinct. We encode strong reduction of the untyped lambda-calculus to introduce our technique. Then we encode a more complex formal system called System F_{<:}, a polymorphic lambda-calculus with subtyping that has been one of the important theoretical foundations of functional programming languages. The advantage of our technique is that the representation of terms, the definition of substitutions, and the implementation of formal systems are all straightforward. We formalized the graph type hlground, proved that it ensures correct substitutions in the lambda-calculus, and implemented hlground in HyperLMNtal, a modeling language based on hypergraph rewriting. Experiments were conducted to test this technique. With this technique, one can implement formal systems simply by following the steps of their definitions as described in papers.
[pdf] Yutaro Tsunekawa, Taichi Tomioka and Kazunori Ueda: Implementation of LMNtal Model Checkers: a Metaprogramming Approach. The Journal of Object Technology, Vol.17, No.1, 2018, pp.1:1-28, DOI: 10.5381/jot.2018.17.1.a1.
LMNtal is a modeling language based on hierarchical graph rewriting, and its implementation SLIM features state space search and an LTL model checker. Several variations and extensions of SLIM have been developed, and all of them achieve their functionalities by modifying SLIM, which is written in C. If a model checker is implemented in the modeling language itself, it should be easy to develop prototypes of various model checkers without changing the base implementation of the model checker. This approach is called metaprogramming, and it has been taken extensively in the Lisp and Prolog communities.
[pdf] Alimujiang Yasen and Kazunori Ueda: Unification of Hypergraph Lambda-Terms. In Proc. Second IFIP International Conference on Topics in Theoretical Computer Science (TTCS 2017), LNCS 10608, Springer-Verlag, 2017, pp.106-124, DOI:10.1007/978-3-319-68953-1_9.
We developed a technique for modeling formal systems involving name binding in a modeling language based on hypergraph rewriting. A hypergraph consists of graph nodes, edges with two endpoints, and edges with multiple endpoints. The idea is that hypergraphs allow us to represent terms containing bindings and that our notion of a graph type keeps bound variables distinct throughout rewriting steps.
[pdf] Kazunori Ueda: Logic/Constraint Programming and Concurrency: The Hard-Won Lessons of the Fifth Generation Computer Project. Science of Computer Programming, Vol.164 (2018), pp.3-17, http://dx.doi.org/10.1016/j.scico.2017.06.002.
The technical goal of the Fifth Generation Computer Systems (FGCS) project (1982-1993) was to develop Parallel Inference technologies, namely systematized technologies for realizing knowledge information processing on top of parallel computer architecture. The Logic Programming paradigm was adopted as the central working hypothesis of the project. At the same time, building a large-scale Parallel Inference Machine (PIM) meant developing a novel form of general-purpose computing technologies that are powerful enough to express various parallel algorithms and to describe a full operating system of PIM. Accordingly, the research goal of the Kernel Language was set to designing a concurrent and parallel programming language under the working hypothesis of Logic Programming. The aim of this article is to describe the design process of the Kernel Language (KL1) in the context of related programming models in the 1980s, the essence of Concurrent Logic Programming and Constraint-Based Concurrency, and how the technologies we developed in those days evolved after their conception.
[pdf] Hiroaki Akutsu, Takahiro Yamamoto, Kazunori Ueda and Hideo Saito: MEC: Network Optimized Multi-stage Erasure Coding for Scalable Storage Systems. In Proc. 22nd IEEE Pacific Rim International Symposium on Dependable Computing (PRDC 2017), Christchurch, New Zealand, January 2017, IEEE Computer Society, 2017, pp. 292-300, DOI:10.1109/PRDC.2017.54.
In scalable storage systems, there are two kinds of methods for data redundancy: mirroring and parity. Each has its pros and cons. Mirroring creates a large amount of redundancy data, resulting in less usable space. Write performance degrades proportionally to the redundancy level due to an increase in communication. Parity-based methods partition data into multiple pieces, add parity information, and distribute the pieces of data and parity information. Parity-based methods are not often used with memory class media that are faster than the network, because distributing the data across servers results in low read performance. This research aims to establish an efficient data protection method that can be applied to fast, memory class media. We propose a new parity-based method called Multi-stage Erasure Coding (MEC), which creates two different erasure codes: one at the data transmission source server, and the other at the destination server. We show that our method reduces the space required to achieve redundancy while achieving high performance by making the amount of write communication independent of the redundancy level. We built a prototype program using MEC on a commodity cluster server. We show that compared with conventional parity-based methods with redundancy level 2, read I/O throughput is over one order of magnitude higher thanks to local reads and that write I/O throughput is almost the same due to network bottleneck.
[pdf] Yutaro Tsunekawa, Taichi Tomioka and Kazunori Ueda: Implementation of LMNtal Model Checkers: a Metaprogramming Approach. First Workshop on Meta-Programming Techniques and Reflection (META'16), Amsterdam, Oct. 2016.
LMNtal is a modeling language based on hierarchical graph rewriting, and its implementation SLIM features an LTL model checker. Several variations and extensions of the SLIM model checker have been developed, and all of them achieve their functionalities by modifying SLIM. If a model checker is implemented in the modeling language itself, it should be easy to develop prototypes of various model checkers without changing the implementation of the model checker. This approach is called metaprogramming, and it has been taken conventionally in Lisp and Prolog. In this paper, we define first-class rewrite rules to enable metaprogramming in LMNtal. In addition, we design APIs to operate on states of LMNtal programs. These features allow us to implement diverse variants of a model checker without changing SLIM. We demonstrate it by implementing an LTL model checker and a CTL model checker in LMNtal. The resulting framework should be sufficient to develop metacircular interpreters that treat program states flexibly.
[pdf] Kenichi Betsuno, Shota Matsumoto and Kazunori Ueda: Symbolic Analysis of Hybrid Systems Involving Numerous Discrete Changes Using Loop Detection. In Proc. Sixth Workshop on Design, Modeling and Evaluation of Cyber Physical Systems (CyPhy'16), Pittsburgh, USA, October 2016, LNCS 10107, Springer-Verlag, 2017, pp.17-30, DOI:10.1007/978-3-319-51738-4_2.
Hybrid systems are dynamical systems that include both continuous and discrete changes. Some hybrid systems involve a large or infinite number of discrete changes within an infinitesimal-width region of phase space. Systems with sliding mode are typical examples of such hybrid systems. It is difficult to analyze such hybrid systems through ordinary numerical simulation, since the time required for simulation increases in proportion to the number of discrete changes. In this paper, we propose a method to symbolically analyze such models involving numerous discrete changes by detecting loops and checking loop invariants of the model's behavior. The method handles parameterized hybrid systems and checks inclusion of parameterized states focusing on the values of a switching function that dominates the dynamics of sliding mode. We implemented the main part of the method in our symbolic hybrid system simulator HyLaGI, and conducted analysis of example models.
[pdf] Shota Matsumoto and Kazunori Ueda: Symbolic Simulation of Parametrized Hybrid Systems with Affine Arithmetic. In Proc. 23rd International Symposium on Temporal Representation and Reasoning (TIME 2016), Copenhagen, IEEE Computer Society, Oct. 2016, pp.4-11, DOI:10.1109/TIME.2016.8.
The purpose of this research is to develop a highly reliable simulator of hybrid systems, i.e., systems involving both discrete change and continuous evolution. In particular, we aim at rigorous simulation of parametrized hybrid systems, which enables not only the analysis of a model's possible behavior but also the design of parameters that realize desired properties. Simulators with interval arithmetic can reliably compute a reachable set of states, but preserving the dependency of uncertain quantities in models is still challenging. In this paper, we discuss a simulation method that is based on symbolic computation and cooperates with the interval Newton method and affine arithmetic, which is able to preserve first-order dependency of uncertain quantities. We implemented the algorithm on the symbolic simulator we have been developing and evaluated the performance of the method with example models.
[pdf] Hiroaki Akutsu, Kazunori Ueda, Takeru Chiba, Tomohiro Kawaguchi and Norio Shimozono: Reliability and Failure Impact Analysis of Distributed Storage Systems with Dynamic Refuging. IEICE Transactions on Information and Systems, Vol.E99-D, No.9, 2016, pp.2259-2268, DOI:10.1587/transinf.2016EDP7139.
In recent data centers, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems using more than a thousand low-cost large-capacity drives. Some large-scale storage systems protect data by erasure coding to prevent data loss. As the redundancy level of erasure coding is increased, the probability of data loss will decrease, but an increase in normal data write operations and additional storage for coding will be incurred. We therefore need to achieve high reliability at the lowest possible redundancy level. There are two concerns regarding reliability in large-scale storage systems: (i) as the number of drives increases, systems are more subject to multiple drive failures and (ii) distributing stripes among many drives can speed up the rebuild time but increase the risk of data loss due to multiple drive failures. If data loss occurs by multiple drive failures, it affects many users of the storage system. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method which we call Dynamic Refuging. Dynamic Refuging rebuilds failed blocks from those with the lowest redundancy and strategically selects blocks to read for repairing lost data. We modeled the dynamic change of the amount of storage at each redundancy level caused by multiple drive failures, and performed reliability analysis with Monte Carlo simulation using realistic drive failure characteristics. We showed a failure impact model and a method for localizing the failure. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, the proposed technique turned out to scale well, and the probability of data loss decreased by two orders of magnitude for systems with a thousand drives compared to normal RAID. The appropriate distribution level could localize the failure.
[pdf] Alimujiang Yasen and Kazunori Ueda: Hypergraph Representation of Lambda-Terms. In Proc. 10th International Symposium on Theoretical Aspects of Software Engineering, IEEE Computer Society, 2016, pp.113-116, DOI:10.1109/TASE.2016.25.
Substitution in the lambda-calculus is a subtle operation. In a formal description, Barendregt's variable convention is assumed to avoid variable capture, but such an assumption is not well suited for implementation on computers. We introduce graph representation and manipulation of lambda-terms, in which bound and free variables are encoded by using hyperlinks with different attributes. A graph type called hlground is generalized to identify the scope of a term n in substitution m[x := n], which enables bound variables and free variables to have suitable behavior during the substitution. Our representation of lambda-terms is readable, and the definition of substitution in this technique is free from any side conditions on the freeness and freshness of variables.
[pdf] [Presentation video Part I, Part II, Part III] Kazunori Ueda: Logic/Constraint Programming and Concurrency: The Hard-Won Lessons of the Fifth Generation Computer Project. In Proc. 13th International Symposium on Functional and Logic Programming (FLOPS 2016), Oleg Kiselyov and Andy King (eds.), LNCS 9613, Springer-Verlag, 2016, pp.1-11, DOI:10.1007/978-3-319-29604-3_1.
The technical goal of the Fifth Generation Computer Systems (FGCS) project (1982-1993) was to develop Parallel Inference technologies, namely systematized technologies for realizing knowledge information processing on top of parallel computer architecture. The Logic Programming paradigm was adopted as the central working hypothesis of the project. At the same time, building a large-scale Parallel Inference Machine (PIM) meant developing a novel form of general-purpose computing technologies that are powerful enough to express various parallel algorithms and to describe a full operating system of PIM. Accordingly, the research goal of the Kernel Language was set to designing a concurrent and parallel programming language under the working hypothesis of Logic Programming. The aim of this article is to describe the design process of the Kernel Language (KL1) in the context of related programming models in the 1980s, the essence of Concurrent Logic Programming and Constraint-Based Concurrency, and how the technologies we developed in those days evolved after their conception.
[pdf] 宮原和大 and Kazunori Ueda: An Efficient Graph Canonicalization Method for Graph Rewriting Systems. Computer Software, Vol.33, No.1 (2016), pp.126-149, DOI: 10.11309/jssst.33.1_126.
Model checking with graph rewriting systems as the modeling formalism offers high expressive power and the ability to absorb symmetries, but it must frequently decide graph isomorphism when managing the graph structures generated during checking. Graph canonicalization computes a unique representation of a graph such that isomorphic graphs share the same representation; isomorphism can then be decided by comparing unique representations, which is effective when testing isomorphism among many graphs. Efficient graph canonicalization is therefore an important issue for graph rewriting systems. This paper proposes an optimization of McKay's graph canonicalization algorithm for colored simple graphs that reuses information about parts of the graph structure that do not change under graph rewriting. To enable this reuse, we formalize which parts of the graph structure are reflected in the information computed during the execution of McKay's algorithm, and construct a method that canonicalizes a new graph while distinguishing the information that must be recomputed from the information that need not be. We experimentally evaluated the effect of the proposed optimization and confirmed that, in many cases, a new canonicalization can be performed in time sublinear in the number of vertices.
[pdf]
Shota Matsumoto, Fumihiko Kono, Teruya Kobayashi and Kazunori Ueda:
HyLaGI: Symbolic Implementation of a Hybrid Constraint Language,
Electronic Notes in Theoretical Computer Science, Vol.317 (2015),
pp.109-115.
(Also in Proc. NSV 2015: 8th International Workshop on Numerical Software
Verification, Seattle, USA, April 13, 2015, pp.70-75.)
A modeling language for hybrid systems, HydLa, and its implementation HyLaGI are described. HydLa is a constraint-based language that can handle uncertainties of models smoothly. HyLaGI calculates trajectories by symbolic formula manipulation to exclude errors resulting from floating-point arithmetic. HyLaGI features a nondeterministic simulation algorithm, so it can calculate all possible qualitatively different trajectories of models with uncertainties.
[pdf] Hiroaki Akutsu, Kazunori Ueda, Takeru Chiba, Tomohiro Kawaguchi and Norio Shimozono: Reliability Analysis of Highly Redundant Distributed Storage Systems with Dynamic Refuging, in Proc. 23rd Euromicro International Conference on Parallel, Distributed and Network-based Processing, Turku, Finland, March 4, 2015, pp.261-268, DOI:10.1109/PDP.2015.32.
In recent data centers, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems using more than a thousand low-cost large-capacity drives. Some large-scale storage systems protect data by erasure coding to prevent data loss. As the redundancy level of erasure coding is increased, the probability of data loss will decrease, but the increase in normal data write operation and additional storage for coding will be incurred. We therefore need to achieve high reliability at the lowest possible redundancy level. There are two concerns regarding reliability in large-scale storage systems: (i) as the number of drives increases, systems are more subject to multiple drive failures and (ii) distributing stripes among many drives can speed up the rebuild time but increase the risk of data loss due to multiple drive failures. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method which we call Dynamic Refuging. Dynamic Refuging rebuilds failed storage areas from those with the lowest redundancy and strategically selects blocks to read for repairing lost data. We modeled the dynamically changing amount of storage at each redundancy level due to multiple drive failures, and performed reliability analysis with Monte Carlo simulation using realistic drive failure characteristics. When stripes with redundancy level 3 were sufficiently distributed and rebuilt by Dynamic Refuging, we found that the probability of data loss decreased by two orders of magnitude for systems with 384 or more drives compared to normal RAID. This technique turned out to scale well, and a system with 1536 inexpensive drives attained lower data loss probability than RAID 6 with 16 enterprise-class drives.
[pdf] Daisuke Ishii and Kazunori Ueda: Guaranteed-Accuracy Computation of Reachable Sets of Nonlinear Hybrid Systems. Keisoku to Seigyo (Measurement and Control), Vol.53, No.12 (2014), pp.1086-1092.
[pdf] 安田竜, 吉田健人 and Kazunori Ueda: Reducing the Number of Generated States and Improving Performance in LMNtal Parallel Model Checking. Transactions of the Japanese Society for Artificial Intelligence, Vol.29, No.1 (2014), pp.182-187.
SLIM is an LMNtal runtime. LMNtal is a programming and modeling language based on hierarchical graph rewriting. SLIM features automata-based LTL model checking, which is one of the methods to solve accepting cycle search problems. The parallel search algorithms OWCTY and MAP used by SLIM generate a large number of states for problems having accepting cycles. Moreover, their performance falls seriously for particular problems. We propose a new algorithm that combines MAP and Nested DFS to remove states for problems including accepting cycles. We evaluated the algorithm experimentally and confirmed improvements both in performance and scalability.
[pdf] Shota Matsumoto and Kazunori Ueda: Hyrose: A Symbolic Execution Simulator for the Hybrid Constraint Language HydLa. Computer Software, Vol.30, No.4 (2013), pp.18-35.
Hybrid systems are dynamical systems whose states change continuously over time and whose states or systems of equations change discretely; they have applications in physics and many other fields. HydLa is a declarative language that describes specifications of hybrid systems by constraints, and we have developed Hyrose, a symbolic execution simulator for HydLa. Hyrose is the only publicly available implementation of HydLa, and it features simulation of systems containing parameters, search based on automatic case analysis, and bounded model checking. This paper first discusses the basic design and functionality of Hyrose and the issues that had to be clarified in its implementation. It then discusses, as an example of the usefulness of symbolic execution, the analysis of a system whose behavior changes quantitatively and qualitatively depending on parameters.
[pdf] Kazunori Ueda: Towards a Substrate Framework of Computation. In Concurrent Objects and Beyond, Gul Agha et al. (eds.), LNCS 8665, Springer-Verlag, 2014, pp.341-366, DOI:10.1007/978-3-662-44471-9_15.
A grand challenge in computing is to establish a substrate computational model that encompasses diverse forms of non-sequential computation. This paper demonstrates how a hypergraph rewriting framework nicely integrates various forms and ingredients of concurrent computation and how simple static analyses help the understanding and optimization of programs. Hypergraph rewriting treats processes and messages in a unified manner, and treats message sending and parameter passing as symmetric reaction between two entities. Specifically, we show how fine-grained strong reduction of the lambda-calculus can be concisely encoded into hypergraph rewriting with a small set of primitive operations.
[pdf] Kazunori Ueda, Shota Matsumoto, Akira Takeguchi, Hiroshi Hosobe, Daisuke Ishii: HydLa: A High-Level Language for Hybrid Systems. In Proc. Second Workshop on Logics for System Analysis (LfSA 2012, affiliated with CAV 2012), July 2012, pp.3-17.
We have been working on the design and implementation of HydLa, a high-level modeling language for hybrid systems. Its objectives and features include a constraint-based (as opposed to automata-based) formalism, proper handling of uncertainties, and nondeterministic execution. The talk will describe an overview of the language with live demonstration of our prototype implementation.
[pdf] Kazunori Ueda and Seiji Ogawa: HyperLMNtal: An Extension of a Hierarchical Graph Rewriting Model. Künstliche Intelligenz, Vol.26, No.1 (2012), pp.27-36. DOI:10.1007/s13218-011-0162-3.
LMNtal is a language model based
on hierarchical graph rewriting that uses point-to-point links to
represent connectivity and membranes to represent hierarchy. LMNtal
was designed to be a substrate language of various computational
models, especially those addressing concurrency, mobility and multiset
rewriting.
Although point-to-point links and membranes could be used together to
represent multipoint connectivity, our experiences with LMNtal told us
that hyperlinks would be an important and useful extension to
the language.
We have accordingly expanded LMNtal into a hierarchical hypergraph
rewriting language model, HyperLMNtal. HyperLMNtal enabled
concise description of computational models involving flexible and
diverse forms of references between data; in particular, it enabled
efficient encoding of a constraint processing language CHR in terms of
both performance and computational complexity.
This paper describes the design and implementation of HyperLMNtal as a
case study of language evolution.
[pdf] 後町将人, Taisuke Hori and Kazunori Ueda: Evolution of the LMNtal Runtime into a Parallel Model Checker. Computer Software, Vol.28, No.4 (2011), pp.137-157.
Model checking has been attracting attention as a system verification technique that exhaustively searches the behavior of a state transition system for defects. LMNtal, a language for describing state transition systems based on hierarchical graph rewriting, has evolved into a model checker by extending its runtime SLIM. Hierarchical graphs are a powerful data structure with a symmetry-absorbing mechanism that is important in state space search, but model checking is nevertheless prone to state space explosion and requires state management that is efficient in both time and space. To address this problem and enhance SLIM's usefulness as a model checker, we extended it into a parallel model checker for shared-memory environments and developed optimization techniques for state management, thereby enlarging the scale and improving the speed of model checking with LMNtal as the modeling language.
[pdf] Daisuke Ishii, Kazunori Ueda, Hiroshi Hosobe: An Interval-based SAT Modulo ODE Solver for Model Checking Nonlinear Hybrid Systems. Int. J. Softw. Tools. Technol. Transfer, Vol.13, No.5 (2011), pp.449-461, DOI:10.1007/s10009-011-0193-y.
This paper presents a bounded model checking (BMC) tool called hydlogic for hybrid systems. It translates a reachability problem of a nonlinear hybrid system into a predicate logic formula involving arithmetic constraints, and checks the satisfiability of the formula based on a satisfiability modulo theories (SMT) method. We tightly integrate (i) an incremental SAT solver to enumerate the possible sets of constraints and (ii) an interval-based solver for hybrid constraint systems (HCSs) to solve the constraints described in the formulas. The HCS solver verifies rigorously the occurrence of a discrete change by enclosing continuous states that may cause the discrete change by a set of boxes. We adopt the existence property of a unique solution in the boxes computed by the HCS solver as (i) a proof of the reachability of a model and (ii) a guide in the over-approximation refinement procedure. Our hydlogic implementation successfully handled several examples including those with nonlinear constraints.
[pdf] 渋谷俊, 高田賢士郎, Hiroshi Hosobe and Kazunori Ueda: An Execution Algorithm for Implementations of the Hybrid System Modeling Language HydLa. Computer Software, Vol.28, No.3 (2011), pp.167-172.
A hybrid system is a system whose state changes continuously over time and whose state or equations change discretely. HydLa is a hybrid system modeling language based on the notion of constraints, and it aims at supporting system verification through simulation with guaranteed accuracy. It was previously unclear how to simulate HydLa programs in which multiple conditions hold at the same time or in which multiple conditions hold in a chain. This paper describes an algorithm that correctly executes the discrete and continuous changes of HydLa programs.
[pdf] 川端聡基, 小林史佳 and Kazunori Ueda: Accelerating the OWCTY Model Checking Algorithm Using Properties of Strongly Connected Components. Transactions of the Japanese Society for Artificial Intelligence, Vol.26, No.2 (2011), pp.341-346.
Model checking is a verification method based on exhaustive search. Automata-based LTL model checking is one of the methods to solve accepting cycle search problems. Model checking is prone to state-space explosion, and we may expect that parallel processing would be a promising approach. However, the optimal sequential algorithm is P-complete and is difficult to parallelize. Alternative parallel algorithms have been proposed, and OWCTY reversed is one of them. OWCTY reversed is known to be a stable and fast algorithm for models without bugs, but it does not use the characteristics of the automata used in LTL model checking. We propose a new algorithm named SCCOWCTY that exploits the SCCs (strongly connected components) of property automata. The algorithm removes states that are judged not to form accepting cycles faster than OWCTY reversed. We compared the two algorithms experimentally using DiVinE, and confirmed improvements both in performance and scalability.
[pdf] Kazunori Ueda, Hiroshi Hosobe and Daisuke Ishii: Declarative Semantics of the Hybrid Constraint Language HydLa. Computer Software, Vol.28, No.1 (2011), pp.306-311.
A hybrid system is a system whose state changes continuously over time and whose state or system of equations itself changes discretely. We have been designing and implementing HydLa, a hybrid system modeling language based on the notion of constraints, from the viewpoints of handling uncertain values and integrating simulation and verification. HydLa adopts constraint hierarchies, which make it easy to give neither too many nor too few constraints, and makes maximal use of the notations of mathematics and logic; however, the interaction of its various language features makes the formulation of its semantics nontrivial. This paper formulates and examines the declarative semantics of HydLa, and discusses the properties and consequences derived from it using concrete examples.
[pdf] Takayuki Ayano, Taisuke Hori, Hiroki Iwasawa, Seiji Ogawa and Kazunori Ueda: LMNtal Model Checking with an Integrated Development Environment. Computer Software, Vol.27, No.4 (2010), pp.197-214.
LMNtal was designed and has evolved as a unifying computational model based on hierarchical graph rewriting. Aiming at new applications of LMNtal in verification and search, we have built a model checker with LMNtal as the modeling language and have been accumulating experience in modeling and verification. This paper introduces LMNtalEditor, an integrated development environment with state space search and verification visualization, together with examples of model descriptions, execution, and visualization. LMNtalEditor features visualization and browsing of the whole state space and of counterexamples, as well as search for states satisfying given conditions. By running diverse examples, including concurrent computation and AI search problems, on LMNtalEditor, we confirmed that the environment plays an important role in understanding models and counterexamples and greatly improves the efficiency of verification. Through experiments translating descriptions in modeling languages such as Promela, MSR, and Coloured Petri Nets into LMNtal, we extended the coverage of LMNtal's expressive power to the field of verification.
[pdf] Daisuke Ishii, Kazunori Ueda, Hiroshi Hosobe: An Interval-based SAT Modulo ODE Solver for Model Checking Nonlinear Hybrid Systems. In Proc. Workshop on Verified Software: Theory, Tools, and Experiments (VSTTE'09), 2009.
This paper presents a bounded model checking (BMC) tool called hydlogic for hybrid systems. It translates a reachability problem of a nonlinear hybrid system into a predicate logic formula involving arithmetic constraints, and checks the satisfiability of the formula based on a satisfiability modulo theories (SMT) method. We tightly integrate (i) an incremental SAT solver to enumerate the possible sets of constraints and (ii) an interval-based solver for hybrid constraint systems (HCSs) to solve the constraints described in the formulas. The HCS solver verifies the occurrence of a discrete change by enclosing continuous states that may cause the discrete change by a set of boxes. We adopt the existence property of a unique solution in the boxes computed by the HCS solver as (i) a proof of the reachability of a model, and (ii) a guide in the over-approximation refinement procedure. Our hydlogic implementation successfully handled several examples including those with nonlinear constraints.
[pdf] Daisuke Ishii, Kazunori Ueda, Hiroshi Hosobe and Alexandre Goldsztejn, Interval-based solving of hybrid constraint systems. In Proc. Third IFAC Conference on Analysis and Design of Hybrid Systems (ADHS'09), 2009, pp.144-149.
An approach for reliable modeling, simulation and verification of hybrid systems is interval arithmetic, which guarantees that the resulting intervals do not exceed a specified size. Interval-based computation on hybrid systems is often difficult, especially when the systems are described by nonlinear ordinary differential equations (ODEs) and nonlinear algebraic equations. We formulate the problem of detecting a discrete change in hybrid systems as a hybrid constraint system (HCS), consisting of a flow constraint on trajectories (i.e. continuous functions over time) and a guard constraint on states causing discrete changes. We also implemented a technique for solving HCSs by coordinating (i) interval-based solving of nonlinear ODEs, and (ii) a constraint programming technique for reducing interval enclosures of solutions. The proposed technique reliably solves HCSs with nonlinear constraints. Our technique employs the interval Newton method to accelerate the reduction of interval enclosures, while guaranteeing that the enclosure contains a solution.
[pdf] Kazunori Ueda, Takayuki Ayano, Taisuke Hori, Hiroki Iwasawa and Seiji Ogawa, Hierarchical Graph Rewriting as a Unifying Tool for Analyzing and Understanding Nondeterministic Systems. In Proc. Sixth International Colloquium on Theoretical Aspects of Computing (ICTAC 2009), LNCS 5684, Springer, 2009, pp.349-355.
We have designed and implemented LMNtal (pronounced "elemental"), a language based on hierarchical graph rewriting that allows us to encode diverse computational models involving concurrency, mobility and multiset rewriting. Towards its novel applications, the system has recently evolved into a model checker that employs LMNtal as the modeling language and PLTL as the specification language. The strengths of our LMNtal model checker are its powerful data structure, the highly nondeterministic computation it can express, and virtually no discrepancy between the programming and modeling languages. Models expressed in Promela, MSR, and Coloured Petri Nets can be easily encoded into LMNtal. The visualizer of the LMNtal IDE turned out to be extremely useful in understanding models by state space browsing. The LMNtal IDE has been used to run and visualize diverse examples taken from the fields of model checking, concurrency and AI search.
[pdf] Kei Ohmura and Kazunori Ueda, c-sat: A Parallel SAT Solver for Clusters. In Proc. Twelfth International Conference on Theory and Applications of Satisfiability Testing (SAT 2009), LNCS 5584, Springer, 2009, pp.524-537.
Parallelizing modern SAT solvers for clusters such as Beowulf is an important challenge both in terms of performance scalability and stability. This paper describes a SAT Solver c-sat, a parallelization of MiniSat using MPI. It employs a layered master-worker architecture, where the masters handle lemma exchange, deletion of redundant lemmas and the dynamic partitioning of search trees, while the workers do search using different decision heuristics and random number seeds. With careful tuning, c-sat showed good speedup over MiniSat with reasonably small communication overhead on various clusters. On an eight-node cluster with two Dual-Core Opterons on each node (32 PEs), c-sat ran at least 23 times faster than MiniSat using 31 PEs (geometric mean; at least 31 times for satisfiable problems) for 189 large-scale problems from SAT Competition and two SAT-Races.
[pdf]
Kazunori Ueda, LMNtal as a Hierarchical Logic Programming Language.
Theoretical Computer Science, Vol.410, No.46 (2009), pp.4784-4800.
doi:10.1016/j.tcs.2009.07.043
LMNtal (pronounced "elemental") is a simple language model based on hierarchical graph rewriting that uses logical variables to represent connectivity and membranes to represent hierarchy. LMNtal is an outcome of the attempt to unify constraint-based concurrency and Constraint Handling Rules (CHR), the two notable extensions to concurrent logic programming. LMNtal is intended to be a substrate language of various computational models, especially those addressing concurrency, mobility and multiset rewriting. Although the principal objective of LMNtal was to provide a unifying computational model, it is of interest to equip the formalism with a precise logical interpretation. In this paper, we show that it is possible to give LMNtal a simple logical interpretation based on intuitionistic linear logic and a flattening technique. This enables us to call LMNtal a hierarchical, concurrent linear logic language.
[pdf] Daisuke Ishii, Kazunori Ueda and Hiroshi Hosobe: An Interval-Based Constraint Propagation Method for Reliable Simulation of Hybrid Systems. IPSJ Transactions on Mathematical Modeling and Its Applications, Vol.1, No.1 (2008), pp.149-159.
Hybrid systems, which consist of discrete changes and continuous changes over time, have been attracting attention as a model for describing and simulating problems in physics and many other fields. In simulating hybrid systems, computation errors are unavoidable when handling continuous states, so exact trajectories cannot be obtained. The proposed method integrates a solving technique based on interval analysis of ordinary differential equations with a solving technique using interval constraint propagation for nonlinear problems, making it possible to efficiently compute complete interval enclosures of hybrid trajectories with specified accuracy. We implemented the proposed method and evaluated its effectiveness experimentally. The proposed method enables reliable simulation of hybrid systems with guaranteed solution accuracy.
[pdf] Kazunori Ueda: Logic/Constraint Programming and Concurrent Computation. Computer Software, Vol.25, No.3 (2008), pp.49-54. (Expository article)
[pdf] Kazunori Ueda, Encoding the Pure Lambda Calculus into Hierarchical Graph Rewriting. In Proc. 19th International Conference on Rewriting Techniques and Applications (RTA 2008), LNCS 5117, Springer, 2008, pp.392-408.
Fine-grained reformulation of the lambda calculus is expected to solve several difficulties with the notion of substitutions--definition, implementation and cost properties. However, previous attempts including those using explicit substitutions and those using Interaction Nets were not ideally simple when it came to the encoding of the pure (as opposed to weak) lambda calculus. This paper presents a novel, fine-grained, and highly asynchronous encoding of the pure lambda calculus using LMNtal, a hierarchical graph rewriting language, and discusses its properties. The major strength of the encoding is that it is significantly simpler than previous encodings, making it promising as an alternative formulation, rather than just the encoding, of the pure lambda calculus. The membrane construct of LMNtal plays an essential role in encoding colored tokens and operations on them. The encoding has been tested using the publicly available LMNtal implementation.
[pdf] 村山敬，工藤晋太郎，櫻井健，水野謙，加藤紀夫，上田和紀： An Implementation of the Hierarchical Graph Rewriting Language LMNtal. Computer Software, Vol.25, No.2 (2008), pp.47-77.
LMNtal is a language model based on hierarchical graph rewriting; with its facilities for representing and manipulating connectivity via links and hierarchy via membranes, it allows concise description of programs that handle dynamic data structures and multiset rewriting. LMNtal has fine-grained concurrency whose unit is the application of a rewrite rule, so a correct and efficient implementation scheme is nontrivial. We therefore developed a language implementation in Java and established an implementation scheme that works correctly without unduly sacrificing efficiency. The system consists of a compiler to an intermediate instruction sequence, an interpreter of that sequence, and a translator to Java source code, and provides various useful features including a foreign-language interface. It also incorporates mechanisms for correctly reconnecting links that cross multiple membranes and for correctly executing rewrite rules residing in different membranes asynchronously. This paper discusses the design and implementation of the publicly available LMNtal system, focusing on the main technical issues in its development: the representation of hierarchical graph structures, the intermediate instruction set, and a safe asynchronous execution scheme.
[pdf] 乾敦行，工藤晋太郎，原耕司，水野謙，加藤紀夫，上田和紀： LMNtal: a Unifying Programming Language Based on Hierarchical Graph Rewriting. Computer Software, Vol.25, No.1 (2008), pp.124-150.
LMNtal is a simple language model based on hierarchical graph rewriting, characterized by the use of logical variables to represent connectivity and membranes to represent hierarchy. LMNtal aims to unify various computational models featuring concepts such as multisets, concurrency, and mobility, and at the same time has the important goal of demonstrating the usefulness of hierarchical graph rewriting by providing a practical programming language based on it. The purpose of this paper is to introduce the features of LMNtal as a programming language and to demonstrate its expressive power through many examples. We designed and implemented ways to incorporate important features such as arithmetic, control of rule application, modules, and a foreign-language interface into the hierarchical graph rewriting model. To verify its expressive power, we encoded representative related computational models including the lambda calculus, the pi-calculus, the ambient calculus, and CHR, and succeeded in actually running them on the implementation.
[pdf] Kazunori Ueda, Encoding Distributed Process Calculi into LMNtal. Electronic Notes in Theoretical Computer Science, Vol.209 (2008), pp.187-200.
Towards a unifying model of concurrency, we have designed and implemented LMNtal (pronounced "elemental"), a model and language based on hierarchical graph rewriting that uses logical variables to represent connectivity and membranes to represent hierarchy. Diverse computational models including the pi-calculus and the lambda-calculus have been encoded into LMNtal and tested on our LMNtal system. This paper describes the encoding of the ambient calculus with running examples. The technical crux is the distributed management of names in the presence of locality and mobility. We have designed and implemented a self-adjusting management technique of names in which the name management and mobility operations are performed concurrently.
[pdf] Daisuke Ishii, Kazunori Ueda, Hiroshi Hosobe, An Interval-Based Approximation Method for Discrete Changes in Hybrid CC. In Frederic Benhamou, Narendra Jussien, and Barry O'Sullivan (Eds.), Trends in Constraint Programming, pp. 245-255, ISTE, London, 2007 (Re-reviewed post-proceedings of the CP2006 Workshops).
Hybrid cc is a constraint-based programming language aimed for simple modeling and rigorous simulation of hybrid systems. The existing implementation of Hybrid cc uses intervals to handle partially specified models. Although the implementation solves nonlinear equations and ordinary differential equations (ODEs) based on interval arithmetic, discrete changes are not properly computed with intervals. We propose a method for enclosing every solution of a hybrid system with a set of boxes. Our method computes continuous state changes by solving ODEs with initial time and values expressed by intervals. Then the method detects and encloses every state that will cause a discrete change. We devise algorithms for pruning and splitting intervals to obtain tight enclosures. The experimental results show the efficiency of the algorithms.
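The box-enclosure idea above (discard boxes whose interval evaluation excludes a solution, split the rest until they are tight) can be illustrated on a one-dimensional toy problem. This is only an analogue of the pruning/splitting loop under assumed names, not the Hybrid cc algorithm: the example encloses the zeros of f(x) = x² − 2 with a hand-written interval extension, whereas the paper's method additionally handles ODEs and discrete changes.

```python
def sq_range(lo, hi):
    """Exact interval extension of x**2 over [lo, hi]."""
    hi2 = max(lo * lo, hi * hi)
    return (0.0, hi2) if lo <= 0.0 <= hi else (min(lo * lo, hi * hi), hi2)

def enclose(lo, hi, eps=1e-6):
    """Branch-and-prune: return a set of boxes (intervals) that
    together enclose every zero of f(x) = x**2 - 2 in [lo, hi]."""
    work, out = [(lo, hi)], []
    while work:
        a, b = work.pop()
        flo, fhi = sq_range(a, b)
        flo, fhi = flo - 2.0, fhi - 2.0
        if flo > 0.0 or fhi < 0.0:   # 0 not in f([a, b]): discard the box
            continue
        if b - a < eps:              # tight enough: keep as part of the enclosure
            out.append((a, b))
        else:                        # otherwise split the box and recurse
            m = (a + b) / 2.0
            work += [(a, m), (m, b)]
    return out
```

Because interval evaluation over-approximates the true range, discarding a box is always sound; splitting recovers tightness, which is the same soundness argument that underlies enclosing hybrid trajectories with boxes.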
[pdf] Kazunori Ueda, Norio Kato, Koji Hara and Ken Mizuno, LMNtal as a Unifying Declarative Language. In Proc. Third Workshop on Constraint Handling Rules (CHR 2006), pp.1-15 (invited talk).
LMNtal (pronounced "elemental") is a simple language model based on hierarchical graph rewriting that uses logical variables to represent connectivity and membranes to represent hierarchy. LMNtal is an outcome of the attempt to unify constraint-based concurrency and Constraint Handling Rules (CHR), the two notable extensions to concurrent logic programming. LMNtal is intended to be a substrate language of various computational models, especially those addressing concurrency, mobility and multiset rewriting. Another important goal of LMNtal has been to put hierarchical graph rewriting into practice and demonstrate its versatility by designing and implementing a full-fledged, monolithic programming language. In this paper, we demonstrate the practical aspects of LMNtal using a number of examples taken from diverse areas of computer science. Also, we discuss the relationship between LMNtal and CHR, which exhibit both commonalities and differences in various respects.
[pdf] Kazunori Ueda, Logic Programming and Concurrency: a Personal Perspective. The ALP NewsLetter, Vol.19, No.2, May 2006.
[pdf] Kazunori Ueda and Norio Kato, LMNtal: a language model with links and membranes. In Proc. Fifth Int. Workshop on Membrane Computing (WMC 2004), LNCS 3365, Springer, 2005, pp.110-125. (invited lecture)
LMNtal (pronounced "elemental") is a simple language model based on graph rewriting that uses logical variables to represent links and membranes to represent hierarchies. The two major goals of LMNtal are (i) to unify various computational models based on multiset rewriting and (ii) to serve as the basis of a truly general-purpose language covering various platforms ranging from wide-area to embedded computation. Another important contribution of the model is that it greatly facilitates programming with dynamic data structures.
[pdf] 上田 和紀，加藤 紀夫： The Language Model LMNtal. Computer Software, Vol.21, No.2 (2004), pp.44-60.
This article describes LMNtal, a programming language model based on hierarchical graph rewriting, together with its design background and related work. LMNtal (pronounced "elemental") is a language model designed to unify various concepts of computation, including concurrency and multiset rewriting, and aims both (1) to be simple as a computational model and (2) to serve as the basis of a practical programming language usable on diverse platforms. An implementation is also in operation.
The article explains the language features of LMNtal with as many program examples as possible, and relates them to features found in other languages and computational models.
[pdf] Kazunori Ueda and Norio Kato, The Language Model LMNtal. In Proc. 19th Int. Conf. on Logic Programming (ICLP'03), LNCS 2916, Springer-Verlag, pp.517-518, 2003.
[pdf] 金木 佑介，加藤 紀夫，上田 和紀： Implementing Exception Handling Using UNIX Interprocess Communication in the KLIC System. In 6th JSSST Workshop on Systems for Programming and Applications (SPA2003), Japan Society for Software Science and Technology, March 2003.
[pdf] (author version) Kazunori Ueda, A Pure Meta-Interpreter for Flat GHC, A Concurrent Constraint Language. In Computational Logic: Logic Programming and Beyond (Essays in Honour of Robert A. Kowalski, Part I), A.C. Kakas and F. Sadri (Eds.), Lecture Notes in Artificial Intelligence 2407, Springer-Verlag, 2002, pp.138-161.
This paper discusses the construction of a meta-interpreter of Flat GHC, one of the simplest and earliest concurrent constraint languages. Meta-interpretation has a long history in logic programming, and has been applied extensively to building programming systems, adding functionalities, modifying operational semantics and evaluation strategies, and so on. Our objective, in contrast, is to design the pair of (i) a representation of programs suitable for code mobility and (ii) a pure interpreter (or virtual machine) of the represented code, bearing networked applications of concurrent constraint programming in mind. This is more challenging than it might seem; indeed, meta-interpreters of many programming languages achieved their objectives by adding small primitives into the languages and exploiting their functionalities. A meta-interpreter in a pure, simple concurrent language is useful because it is fully amenable to theoretical support including partial evaluation. After a number of trials and errors, we have arrived at treecode, a ground-term representation of Flat GHC programs that can be easily interpreted, transmitted over the network, and converted back to the original syntax. The paper describes how the interpreter works, where the subtleties lie, and what its design implies. It also describes how the interpreter, given the treecode of a program, is partially evaluated to the original program by the unfold/fold transformation system for Flat GHC.
[pdf] (author version) Yasuhiro Ajiro and Kazunori Ueda, Kima: an Automated Error Correction System for Concurrent Logic Programs. Automated Software Engineering, Vol.9, No.1 (2002), pp.67-94.
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties.
Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the "reasons" of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors.
Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program. Search space is kept small because the minimal subset confines possible sources of errors in advance. This paper presents the basic algorithm and various optimization techniques implemented in Kima, and then discusses its effectiveness based on quantitative experiments.
Kazunori Ueda, A Close Look at Constraint-Based Concurrency (a tutorial). In Proc. 17th Int. Conf. on Logic Programming (ICLP'01), Codognet, P. (ed.), Lecture Notes in Computer Science 2237, Springer, 2001, p.9.
PDF file of the talk slides
Constraint-based concurrency, also known as the cc (concurrent constraint) formalism, is a simple framework of concurrency that features (i) asynchronous message passing, (ii) polyadicity and data structuring mechanisms, (iii) channel mobility, and (iv) nonstrictness (or computing with partial information). Needless to say, all these features originate from the use of constraints and logical variables for interprocess communication and data representation.
Another feature of constraint-based concurrency is its remarkable stability; all the above features were available essentially in their present form by the mid-1980s in concurrent logic programming languages.
Another well-studied framework of concurrency is name-based concurrency, represented by the family of pi-calculi, in which names represent both channels and tokens conveyed by channels. Some variants of the original pi-calculus featured asynchronous communication, and some limited the use of names in pursuit of nicer semantical properties. These modifications are more or less related to the constructs of constraint-based concurrency.
Integration of constraint-based and name-based concurrency can be found in proposals of calculi such as the gamma-calculus, the rho-calculus and the Fusion calculus, all of which incorporate constraints or name equations into name-based concurrency.
This tutorial takes a different, analytical approach in relating the two formalisms; we compare the roles of logical variables and names rather than trying to integrate one into the other. Although the comparison under their original, untyped settings is not easy, once appropriate type systems are incorporated into both camps, name-based communication and constraint-based communication exhibit more affinities.
Examples of such type systems are linear types for the pi-calculus and the mode/linearity/capability systems for Guarded Horn Clauses (GHC). Both are concerned with the polarity and multiplicity of communication, and prescribe the ways in which communication channels can be used. They help in-depth understanding of the phenomena occurring in the course of computation.
The view of constraint-based concurrency that the tutorial intends to provide will complement the usual, abstract view based on ask and tell operations on a constraint store. For instance, it reveals the highly local nature of a constraint store (both at linguistic and implementation levels), which is often understood to be a global, shared entity. It also brings resource-consciousness into the logical setting.
This is expected to be a step towards the deployment of cc languages as a common platform of non-sequential computation including parallel, distributed, and embedded computing.
[pdf] (author version) Kazunori Ueda, Resource-Passing Concurrent Programming. In Proc. Fourth Int. Symp. on Theoretical Aspects of Computer Software (TACS 2001), Kobayashi, N. and Pierce, B. (Eds.), Lecture Notes in Computer Science 2215, Springer, 2001, pp.95-126.
PDF file of the talk slides
The use of types to deal with access capabilities of program entities is becoming increasingly popular. In concurrent logic programming, the first attempt was made in Moded Flat GHC in 1990, which gave polarity structures (modes) to every variable occurrence and every predicate argument. Strong moding turned out to play fundamental roles in programming, implementation and the in-depth understanding of constraint-based concurrent computation.
The moding principle guarantees that each variable is written only once and encourages capability-conscious programming. Furthermore, it gives less generic modes to programs that discard or duplicate data, thus providing the view of "data as resources." A simple linearity system built upon the mode system distinguishes variables read only once from those read possibly many times, enabling compile-time garbage collection. Compared to linear types studied in other programming paradigms, the primary issue in constraint-based concurrency has been to deal with logical variables and the highly non-strict data structures they induce.
In this paper, we put our resource-consciousness one step forward and consider a class of "ecological" programs which recycle or return all the resources given to them while allowing concurrent reading of data structures via controlled aliasing. This completely recyclic subset forces us to think more about resources, but the resulting programs enjoy high symmetry which we believe has more than aesthetic implications to our programming practice in general.
The type system supporting recyclic concurrent programming gives a [-1,+1] capability to each occurrence of variable and function symbols (constructors), where positive/negative values mean read/write capabilities, respectively, and fractions mean non-exclusive read/write paths. The capabilities are intended to be statically checked or reconstructed so that one can tell the polarity and exclusiveness of each piece of information handled by concurrent processes. The capability type system refines and integrates the mode system and the linearity system for Moded Flat GHC. Its arithmetic formulation contributes to its simplicity.
The execution of a recyclic program proceeds so that every variable has zero-sum capability and the resources (i.e., constructors weighted by their capabilities) a process absorbs match the resources it emits. Constructors accessed by a process with an exclusive read capability can be reused for other purposes.
The first half of this paper is devoted to a tutorial introduction to constraint-based concurrency in the hope that it will encourage cross-fertilization of different concurrency formalisms.
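The zero-sum bookkeeping described above lends itself to a direct illustration. The sketch below is only the arithmetic side of the idea, with hypothetical helper names; the actual capability system is a static type system that reconstructs these values from program text rather than checking a given list.

```python
from fractions import Fraction

def well_capabilitied(occurrences):
    """occurrences: capabilities (as Fractions, ints, or strings like '1/2')
    given to the occurrences of one variable. Positive values are read
    capabilities, negative values are write capabilities, and proper
    fractions model non-exclusive access paths. The occurrences are
    acceptable iff each lies in [-1, +1] and they sum to zero."""
    caps = [Fraction(c) for c in occurrences]
    in_range = all(-1 <= c <= 1 for c in caps)
    zero_sum = sum(caps) == 0
    return in_range and zero_sum
```

For example, one exclusive writer balanced by two half-strength readers is well-capabilitied, while a writer with two full readers is not: the excess read capability signals duplication of data, which a recyclic program must account for explicitly.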
[pdf]
坂本 幸司，松宮 志麻，上田 和紀： Optimization of Array Operations on a Parallel KLIC System. IPSJ Transactions on Programming, Vol.42, No.SIG 3 (PRO 10) (March 2001), pp.1-13.
The goal of this work is to realize efficient parallel array computation on KLIC, a UNIX-based implementation of the concurrent logic language KL1, while retaining the characteristics of single-assignment variables. KLIC employs various techniques to implement the properties of KL1 variables, and arrays are implemented as multi-version vectors. These arrays can hold arbitrary data as elements, but at the cost of being insufficiently optimized for numerical computation. Since floating-point numbers are generic objects, they are inefficient in both time and space. In particular, for numerical parallel processing on shared-memory multiprocessors (SMPs), KLIC vectors are not shared on the SMP, so communication proportional to the array size arises between nodes and little parallel speedup could be expected.
In this work, we therefore implemented, as generic objects, numerical arrays that can be read and written quickly by means of destructive assignment and the elimination of tag checking. We improved efficiency by safely replacing the vectors in a program with numerical arrays, using mode/type/reference-count information from a constraint-based static analyzer together with properties concerning instantiation states. We also introduced multidimensional arrays, enabling more natural description of numerical programs. By placing the array bodies in the shared memory of an SMP, a high degree of parallelism was achieved while the safety of parallel accesses to arrays remained explicit at the program level. Furthermore, combining these arrays with optimization techniques for tail-call loops yielded a further 5- to 8-fold performance improvement, making it possible to generate object code close to that of procedural programs.
高木 祐介，上田 和紀： dklic: Implementation of a Distributed KL1 Language System in KL1. In 4th JSSST Workshop on Systems for Programming and Applications (SPA2001), Japan Society for Software Science and Technology, March 2001.
KL1 is a simple concurrent logic language. Concurrent logic languages can describe concurrent computation, including parallel and distributed computation, declaratively in terms of processes and interprocess communication, and the consistency of communication protocols can be guaranteed for strongly typed and moded concurrent logic programs. These properties facilitate distributed programming. Distributed KL1 provides pragmas that introduce the notion of location, as well as distributed stream management. The distributed pragmas were carefully designed to preserve the network transparency of concurrent computation. dklic describes the distributed API in KL1 itself, turning an existing KL1 implementation into a distributed one. To obtain a working distributed KL1 implementation quickly, simplicity of implementation was emphasized.
加藤 紀夫，上田 和紀： A Methodology for Extracting Sequentially Executable Parts from Concurrent Logic Programs. In 3rd JSSST Workshop on Programming and Programming Languages (PPL2001), Japan Society for Software Science and Technology, March 2001, pp.2-13.
Research on compiling techniques for concurrent languages has so far mainly focused on equivalence-preserving transformations of source code and intermediate code. However, building a practical optimizing compiler requires something more, namely a solution to the question of how to apply particular optimization techniques. In fact, some effective optimization techniques, such as destructive heap assignment and optimized loop execution, cannot be justified by program transformation alone. The analyses for these techniques have tended to work on source code without taking memory management into account, which often obscures how to justify them, particularly when they are applied to global optimization. Moreover, no definitive methods have existed for analyzing optimization techniques that require modularity of analysis, such as variable unboxing and loop fusion. This paper presents a method that solves the above problems, generating optimized object code directed by a bottom-up analysis that focuses on dataflow. The analysis is performed with interfaces, where an interface represents an expectation about the execution of a process. We show how to formalize interfaces in terms of denotational semantics, and why such a formalization can justify an optimization technique.
[pdf] 網代 育大，上田 和紀： Kima: an Automated Error Correction System for Concurrent Logic Programs. Computer Software, Vol.18, No.0 (2001), pp.122-137.
We have implemented Kima, a system that automatically corrects simple errors in concurrent logic programs under a strong mode/type system. Kima can automatically correct a small number of miswritten variable occurrences in KL1 programs by static analysis. Strong mode and type systems in concurrent logic programming guarantee the consistency of communication protocols and data types and make it possible to detect many errors statically. The mode/type analysis used here is a constraint satisfaction problem with many simple mode/type constraints, and an error in a program often makes the constraint set contradictory. By computing a minimal inconsistent subset of the constraint set, the location of an error can be identified locally. Kima realizes automatic correction by mechanically rewriting symbols around the identified locations and recomputing modes and types. This paper describes the correction algorithm and techniques for making it efficient, and discusses its effectiveness through several quantitative experiments.
[ps.gz] (author version) Yasuhiro Ajiro and Kazunori Ueda, Kima -- an Automated Error Correction System for Concurrent Logic Programs. In Proc. Fourth International Workshop on Automated Debugging (AADEBUG 2000).
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties.
Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the "reasons" of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors.
Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program. Search space is kept small because the minimal subset confines possible sources of errors in advance. This paper presents the basic algorithm and various optimization techniques implemented in Kima, and then discusses its effectiveness based on quantitative experiments.
[pdf] Kazunori Ueda, Linearity Analysis of Concurrent Logic Programs. In Proc. International Workshop on Parallel and Distributed Computing for Symbolic and Irregular Applications, Ito, T. and Yuasa, T. (eds.), World Scientific, 2000, pp.253-270.
Automatic memory management and the hiding of the notion of pointers are the prominent features of symbolic processing languages. They make programming easy and guarantee the safety of memory references. For the memory management of linked data structures, copying garbage collection is most widely used because of its simplicity and desirable properties. However, if certain properties about runtime storage allocation and the behavior of pointers can be obtained by static analysis, a compiler may be able to generate object code closer to that of procedural programs. In the fields of parallel, distributed and real-time computation, it is highly desirable to be able to identify data structures in a program that can be managed without using garbage collection. To this end, this paper proposes a framework of linearity analysis for a concurrent logic language Moded Flat GHC, and proves its basic property. The purpose of linearity analysis is to distinguish between fragments of data structures that may be referenced by two or more pointers and those that cannot be referenced by two or more pointers. Data structures with only one reader are amenable to compile-time garbage collection or local reuse. The proposed framework of linearity analysis is constraint-based and involves both equality and implicational constraints. It has been implemented as part of klint v2, a static analyzer for KL1 programs.
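The distinction the analysis draws (data reachable from at most one reader versus possibly many) can be caricatured by simple occurrence counting. This is a deliberately crude sketch with assumed names and representation: the actual klint v2 analysis solves equality and implicational constraints over modes rather than counting occurrences per clause.

```python
def linearity(clauses):
    """clauses: list of (head_vars, body_vars) occurrence lists for a
    toy program. A variable whose value is consumed at most once per
    clause body is classified 'linear' (single reader, so its structure
    is a candidate for compile-time GC or local reuse); a variable read
    more than once is classified 'shared'."""
    shared = {}
    for head, body in clauses:
        reads = {}
        for v in body:
            reads[v] = reads.get(v, 0) + 1
        for v in set(head) | set(body):
            shared[v] = shared.get(v, False) or reads.get(v, 0) > 1
    return {v: ('shared' if s else 'linear') for v, s in shared.items()}
```

In the real setting the classification must also propagate through data structures and process spawning, which is why the paper needs implicational constraints rather than a local count.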
[pdf] 加藤 紀夫，上田 和紀： On the Safety of Moving Synchronization Points in Concurrent Logic Languages. IPSJ Transactions on Programming, Vol.41, No.SIG 2 (PRO 6) (March 2000), pp.13-28.
In concurrent logic languages, synchronization between processes is performed through input and output on variables. A process that cannot output anything because its input is incomplete may therefore just as well be executed after waiting for the necessary input. This program transformation, called the moving of synchronization points, can be used to compute an optimal execution order of processes and applied to implementation optimization. Implementing this technique requires determining the input a process needs in order to output something, but computing this precisely is in general difficult in a nondeterministic framework such as concurrent logic languages. This paper therefore presents an abstract execution method that identifies variables that must be instantiated to some value, and discusses its correctness. We prove that, for processes that neither diverge nor fail, the denotational semantics is preserved before and after the moving of synchronization points. The denotational semantics used for the correctness argument collects the input/output histories of processes under the operational semantics; the simplicity of its definition makes it easy to prove equivalences of denotations. We also introduce a notation called variable transformation, which makes fresh variables available at the expression level for the renaming of variables that occurs when predicate calls are rewritten, and prove several of its properties. Using it, we also construct and explain an operational semantics that explicitly retains the parent-child relationships among process rewritings.
[pdf] 上田 和紀： Parallel Operations on Self-Adjusting Binary Trees. Computer Software, Vol.16, No.5 (1999), pp.78-83.
We propose algorithms that enable parallel operations on the self-adjusting binary search trees (splay trees) of Sleator and Tarjan. The proposed algorithms allow pipelined parallel execution of multiple update, insertion, and deletion operations on the same tree, and aim to reconcile the throughput of operation sequences (the number of operations processed per unit time) with their response (the amortized complexity of each operation). We give theoretical results on the optimality of throughput and on the logarithmic response of insertion. Deletion, although it preserves good properties of the tree shape, cannot be proven optimal within the framework of Sleator et al.; this point is also discussed.
沼尻 務，竹岡 厚，渡辺 高志，芦川 将之，上田 和紀： Architectural Design of the Full-Text Search System Verno. In Proc. 2nd Internet Technology Workshop, Japan Society for Software Science and Technology, 1999, pp.163-170.
Verno, the WWW full-text search system we are developing, has its architecture designed and implemented under the concept of a programmable search system that can respond to users' complex and diverse requests. By dividing the search system into three layers, various databases can be integrated into the system easily, and users can freely design front ends, not limited to the WWW, using the search language of the network layer. For the index database at the core of search, to satisfy the requirement of making every kind of search possible, we implemented a unigram-based database that is slow but can respond to a wide variety of requests; in addition, we developed a method for compiling a given search request into efficient search processing, and also implemented a database with a larger search unit.
[pdf] (author version) Kazunori Ueda, Concurrent Logic/Constraint Programming: The Next 10 Years. In The Logic Programming Paradigm: A 25-Year Perspective, K. R. Apt, V. W. Marek, M. Truszczynski, and D. S. Warren (eds.), Springer-Verlag, 1999, pp.53-71.
PDF file of the talk slides
Concurrent logic/constraint programming is a simple and elegant formalism of concurrency that can potentially address a lot of important future applications including parallel, distributed, and intelligent systems. Its basic concept has been extremely stable and has allowed efficient implementations. However, its uniqueness makes this paradigm rather difficult to appreciate. Many people consider concurrent logic/constraint programming to have rather little to do with the rest of logic programming. There is certainly a fundamental difference in the view of computation, but careful study of the differences will lead to the understanding and the enhancing of the whole logic programming paradigm by an analytic approach. As a model of concurrency, concurrent logic/constraint programming has its own challenges to share with other formalisms of concurrency as well. They are: (1) a counterpart of lambda-calculus in the field of concurrency, (2) a common platform for various non-sequential forms of computing, and (3) type systems that cover both logical and physical aspects of computation.
前田 直人，上田 和紀： SOR: a Framework Supporting Distributed Programming with Shared Object Spaces. In 2nd JSSST Workshop on Systems for Programming and Applications (SPA'99), Japan Society for Software Science and Technology, March 1999.
We propose SOR, a framework supporting distributed programming in the Java environment that exploits the abstraction of shared object spaces. A shared object space is an object-oriented model in which the unit of stored data is an object, and the internal state of a stored object is modified and notified through method calls. Using an implementation of shared object spaces, objects can be virtually shared between different JVMs and made persistent, which facilitates the description of distributed systems. The implementation also allows the hiding of communication via proxies, the hiding of the physical locations of shared objects, the improvement of communication efficiency through migration and replication, and the addition of fault tolerance.
[pdf] (author version) Yasuhiro Ajiro, Kazunori Ueda and Kenta Cho, Error-Correcting Source Code. In Proc. International Conference on Principles and Practice of Constraint Programming (CP98), Michael Maher and Jean-Francois Puget (eds.), Lecture Notes in Computer Science 1520, Springer-Verlag, 1998, pp.40-54.
We study how constraint-based static analysis can be applied to the automated and systematic debugging of program errors.
Strong moding and constraint-based mode analysis are turning out to play fundamental roles in debugging concurrent logic/constraint programs as well as in establishing the consistency of communication protocols and in optimization. Mode analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode constraints, and can be solved efficiently by unification over feature graphs. We have proposed a simple and efficient technique which, given a non-well-moded program, diagnoses the "reasons" of inconsistency by finding minimal inconsistent subsets of mode constraints. Since each constraint keeps track of the symbol occurrence in the program that imposed the constraint, a minimal subset also tells possible sources of program errors. The technique is quite general and can be used with other constraint-based frameworks such as strong typing.
Based on the above idea, we study the possibility of automated debugging in the absence of mode/type declarations. The mode constraints are usually imposed redundantly, and the constraints that are considered correct can be used for correcting wrong symbol occurrences found by the diagnosis. As long as bugs are near-misses, the automated debugger can propose a rather small number of alternatives that include the intended program. Search space is kept small because constraints effectively prune many irrelevant alternatives. The paper demonstrates the technique by way of examples.
[pdf] Kazunori Ueda and Ryoji Tsuchiyama, Optimizing KLIC Generic Objects by Static Analysis. In Proc. 11th International Conference on Applications of Prolog, Prolog Association of Japan, 1998, pp.27-33.
The KLIC system has achieved both high portability and extensibility by employing C as an intermediate language and featuring generic objects that allow users to define new classes of data. It is also efficient for an untyped and unmoded language with fine-grained concurrency, but its flexibility incurs runtime overhead that could be reduced by static analysis. This paper studies how constraint-based static analysis and abstract interpretation can be used to reduce dynamic data checking and to optimize loops. We applied the proposed technique to the optimization of floating-point numbers and their arrays. The optimized KL1 programs turned out to be only 34%-70% slower than the comparable C programs.
[pdf] 網代 育大，長 健太，上田 和紀： Automated Program Debugging by Static Analysis and Constraint Satisfaction. Computer Software, Vol.15, No.1 (1998), pp.54-58.
We examine, using concurrent logic programs as an example, a technique for automatically correcting simple program errors by static analysis. In concurrent logic programming under a strong mode system, a minimal unsatisfiable subset of the mode constraints obtained from an erroneous program can, in many cases, pinpoint the location of the error exactly. Furthermore, the redundancy of mode constraints makes it possible to correct minor errors, such as miswritten variables, automatically. The same technique applies to type systems as well; this paper examines these ideas experimentally.
[pdf] 池口 祐子，村山 和宏，上田 和紀： Improving the Responsiveness of a Newsreader Using Speculative Computation. Transactions of Information Processing Society of Japan, Vol.38, No.6 (1997), pp.1235-1243.
Most interactive systems in use today are basically written under the sequential processing paradigm, so when a command incurs a heavy load, a long time passes between command issuance and system response. We therefore improved the responsiveness of interactive systems based on techniques such as (1) concurrent and parallel processing between the user's command issuance and the system, (2) incremental computation, and (3) resource-dependent computation that takes the available computational resources into account. In this paper, as an experiment in improving the responsiveness of interactive systems, we take a newsreader as the subject and evaluate the techniques quantitatively.
[pdf]
Kenta Cho and Kazunori Ueda, Diagnosing Non-Well-Moded Concurrent Logic Programs.
In Proc. 1996 Joint International Conference and Symposium on Logic Programming (JICSLP'96), Michael Maher (ed.), The MIT Press, 1996, pp.215-229.
Strong moding and constraint-based mode analysis are expected to play fundamental roles in debugging concurrent logic/constraint programs as well as in establishing the consistency of communication protocols and in optimization. Mode analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode constraints, and can be solved efficiently by unification over feature graphs. In practice, however, it is important to be able to analyze non-well-moded programs (programs whose mode constraints are inconsistent) and present plausible "reasons" of inconsistency to the programmers in the absence of mode declarations.
This paper discusses the application of strong moding to systematic and efficient static program debugging. The basic idea, which turned out to work well at least for small programs, is to find a minimal inconsistent subset from an inconsistent set of mode constraints and indicate the symbol occurrences in the program text that imposed those constraints. A bug can be pinpointed better by finding more than one overlapping minimal subset. These ideas can be readily extended to finding multiple bugs at once. For large programs, stratification of predicates narrows the search space and produces more intuitive explanations. Stratification plays a fundamental role in introducing mode polymorphism as well. Our experiments show that the sizes of minimal subsets are small enough for bug location and do not depend on the program size, which means that diagnosis can be done in almost linear time.
長 健太，上田 和紀： A Static Debugging Technique for Concurrent Logic Programs with Mode Errors. In Proc. 1996 Symposium on Parallel Processing, Information Processing Society of Japan, June 1996, pp.219-226.
This is the first paper to discuss a method for analyzing why a concurrent logic program cannot be moded (i.e., why its mode constraints are unsatisfiable) and presenting the reasons to the programmer. It discusses a basic algorithm for computing a minimal unsatisfiable subset from an unsatisfiable set of mode constraints, together with its cost, and then proposes an analysis technique that follows the stratified structure of programs. With the proposed method, many program errors can be found statically and efficiently.
[pdf] (author version) Kazunori Ueda, Experiences with Strong Moding in Concurrent Logic/Constraint Programming. In Proc. International Workshop on Parallel Symbolic Languages and Systems (PSLS'95), Lecture Notes in Computer Science 1068, Springer-Verlag, Berlin, April 1996, pp.134-153.
Strong moding is turning out to play fundamental roles in concurrent logic programming (or in general, concurrent constraint programming) as strong typing does but in different respects. "Principal modes" can most naturally be represented as feature graphs and can be formed by unification. We built a mode analyzer, implementing mode graphs and their operations by means of concurrent processes and streams (rather than records and pointers). This is a non-trivial programming experience with complicated process structures and has provided us with several insights into the relationship between programming with dynamic data structures and programming with dynamic process structures. The mode analyzer was then applied to the analyzer itself to study the characteristics of the mode constraints it imposed and of the form of large mode graphs. Finally, we show how our framework based on principal moding can be extended to deal with (1) random-access data structures, (2) mode polymorphism, (3) higher-order constructs, and (4) various non-Herbrand constraint systems.
Discusses the implementation of a strong mode system for concurrent logic programming (and, as its generalization, concurrent constraint programming) and the results of mode analysis experiments. The expressive power of concurrent logic programming with strong moding and the practicality of mode analysis are demonstrated with examples of practical scale. Extensions of the existing mode system are also discussed from three aspects: the treatment of higher-order predicates, the treatment of general constraint systems, and polymorphic modes.
旭 哲男，池口 祐子，村山 和宏，and Kazunori Ueda,
Improving the Responsiveness of a Network Newsreader Using Speculative Computation (in Japanese).
In Tanaka, J. (ed.), Interactive Systems and Software III, Kindai Kagaku Sha, pp.113-122, December 1995.
Most interactive systems, written as deterministic sequential programs, respond poorly to time-consuming commands. This paper discusses how we can reduce the response time (latency) of interactive systems using (i) concurrent/parallel processing between system tasks and user tasks, which is a form of speculative computation, (ii) incremental computation, and (iii) resource-dependent computation including real-time processing. The techniques were tailored and applied to our network newsreader composed of DNAS and GNUS. We have so far implemented an active article cache system for DNAS and have re-implemented some time-consuming GNUS commands, and the results of those experiments are reported. This paper (in Japanese) was presented at WISS'95 (JSSST Workshop on Interactive Systems and Software), Hiroshima, Japan, November 1995.
[pdf (slides)]
Kazunori Ueda, Strong Moding in Concurrent Logic/Constraint Programming.
Tutorial given at the
Twelfth International Conference on Logic Programming, Kanagawa,
Japan, June 1995.
An advanced tutorial talk at the conference, covering the technical details of strong mode systems in concurrent logic programming and, as its generalization, concurrent constraint programming. In addition to previously obtained results, it explained the treatment of higher-order predicates, the treatment of general constraint systems, and polymorphic modes.
[pdf]
(author version)
Kazunori Ueda, I/O Mode Analysis in Concurrent Logic Programming.
In Proc. Int. Workshop on Theory and Practice of Parallel Programming (TPPP'94), Ito, T. and Yonezawa, A. (eds.), Lecture Notes in
Computer Science 907, Springer-Verlag, Berlin, pp.356-368, May 1995.
This paper briefly reviews concurrent logic programming and the I/O mode system designed for the concurrent logic language Flat GHC. The mode system plays fundamental roles both in programming and implementation in almost the same way as type systems do but in different respects. It provides us with the information on how data are generated and consumed and thus the view of "data as resources". It statically detects bugs resulting from ill-formed dataflow and advocates the "programming as wiring" paradigm. Well-modedness guarantees the safety of unification, the basic operation in concurrent logic programming. Information on the numbers of access paths to data can be obtained by slightly extending the framework, which can be used for compile-time garbage collection and the destructive update of structures.
After introducing the framework and features of concurrent logic programming in comparison with other frameworks of concurrent programming and concurrent computation, this paper gives an accessible, example-driven account of the purpose, method, properties, and significance of input/output mode analysis in concurrent logic programming. The significance of mode analysis is examined from three aspects: programming, implementation, and language design.
[pdf]
Kazunori Ueda, Moded Flat GHC for Data-Parallel Programming.
In Proc. FGCS'94 Workshop on Parallel Logic Programming,
ICOT, Tokyo, pp.27-35, December 1994.
Concurrent logic languages have been used mainly for the (parallel) processing of rather irregular symbolic applications. However, since concurrent logic languages are essentially general-purpose, they should be applicable to problems with regular structures and their data-parallel processing as well. This paper studies the possibility of massively parallel processing in concurrent logic programming, focusing on arrays and their data-parallel processing.
Describes the basic concepts and directions for applying Moded Flat GHC to parallel and massively parallel processing of array data. After describing the design principles of basic array operations under the mode system, it discusses methods for parallel and massively parallel (data-parallel) processing of arrays and examines how several concrete examples can be processed in a data-parallel manner. It argues that, given a mode system, array operations can be described and analyzed naturally even in a non-procedural language.
[pdf]
(author version)
Kazunori Ueda and Masao Morita, Moded Flat GHC and Its Message-Oriented
Implementation Technique.
New Generation Computing, Vol.13, No.1, pp.3-43, November 1994.
Concurrent processes can be used both for programming computation and
for programming storage. Previous implementations of Flat GHC,
however, have been tuned for computation-intensive programs, and
perform poorly for storage-intensive programs (such as programs
implementing reconfigurable data structures using processes and
streams) and demand-driven programs. This paper proposes an
optimization technique for programs in which processes are almost
always suspended. The technique compiles unification for data
transfer into message passing. Instead of reducing the number of
process switching operations, the technique optimizes the cost of each
process switching operation and reduces the number of cons
operations for data buffering.
The technique is based on a mode system which is powerful enough to
analyze bidirectional communication and streams of streams. The mode
system is based on mode constraints imposed by individual clauses
rather than on global dataflow analysis, enabling separate analysis of
program modules. The introduction of a mode system into Flat GHC
effectively subsets Flat GHC; the resulting language is called
Moded Flat GHC. Moded Flat GHC programs enjoy two important
properties under certain conditions: (1) reduction of a goal clause
retains the well-modedness of the clause, and (2) when execution
terminates, all the variables in an initial goal clause will be bound
to ground terms. Practically, the computational complexity of
all-at-once mode analysis can be made almost linear with respect to
the size n of the program and the complexity of the data
structures used, and the complexity of separate analysis is higher
only by O(log n) times. Mode analysis provides useful
information for debugging as well.
Benchmark results show that the proposed technique significantly improves
the performance of storage-intensive programs and demand-driven programs
compared with a conventional native-code implementation. It also improves
the performance of some computation-intensive programs.
We expect that the proposed technique will expand the
application areas of concurrent logic languages.
Proposes a constraint-based mode system for GHC programs and discusses in detail its theoretical aspects, including its definition, the basic framework of analysis, analysis algorithms and their cost, and fundamental theorems, as well as its practical implications. It also proposes a message-oriented implementation technique as an application of the mode system. A substantially revised version of the paper presented at the Seventh Int. Conf. on Logic Programming (The MIT Press, Cambridge, Mass., U.S.A., pp.3-17).
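The almost-linear bound on all-at-once mode analysis quoted in the abstract is characteristic of solvers that merge mode variables with union-find using path compression, whose amortized cost is O(alpha(n)) per operation. The sketch below is only an illustration of that cost model in Python (the class and the constraint names are hypothetical, not the paper's actual mode-graph algorithm):

```python
class UnionFind:
    """Union-find with path compression and union by rank: a sequence of
    n unions/finds runs in almost linear time, O(n * alpha(n))."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def find(self, x):
        if x not in self.parent:          # lazily create a singleton class
            self.parent[x] = x
            self.rank[x] = 0
        root = x
        while self.parent[root] != root:  # locate the representative
            root = self.parent[root]
        while self.parent[x] != root:     # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx              # attach smaller tree under larger
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

# Equating mode variables imposed by individual clauses
# (the path names below are made up for illustration):
uf = UnionFind()
uf.union("mode(p,1)", "mode(q,2)")
uf.union("mode(q,2)", "mode(r,1)")
assert uf.find("mode(p,1)") == uf.find("mode(r,1)")
```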
Kazunori Ueda, Design and Evolution of the Concurrent Programming
Language GHC
Bulletin of the School of Science and Engineering, Waseda University, No.57, pp.1-16, March 1994.
This paper reviews the design and the evolution of the concurrent programming language Guarded Horn Clauses (GHC). GHC was born from a study of parallelism in logic programming, but turned out to be a simple and flexible concurrent programming language with a number of desirable properties. We give both an operational semantics of GHC based on transitions, and an abstract view of GHC computation based on the notion of transactions. Also, we discuss in detail the properties of GHC such as its atomic operations, which have much to do with the language design. GHC has been very stable but has also evolved since its conception. Flat GHC was the first step of subsetting, and the second subset, Moded Flat GHC, enabled compile-time mode analysis. KL1, the kernel language of Japan's Fifth Generation Computer Systems (FGCS) project, was designed based on Flat GHC, featuring constructs for controlling parallel computation. The outstanding feature of KL1 is its clear distinction between concurrency and parallelism.
A paper summarizing the design of the concurrent programming language GHC and the subsequent development of the research. It consolidates findings on viewing GHC computation as sequences of transactions, the syntax of GHC, its formal operational semantics, the relationship between the operational semantics and the transaction-based semantics, atomic operations, properties of variable binding environments, the treatment of exceptional events such as failure, the design of subsets, and the relationship with KL1. A revised version of an invited paper at the international conference commemorating the 30th anniversary of the IPSJ (InfoJapan'90).
[pdf]
(author version)
Kazunori Ueda and Masao Morita, Message-Oriented Parallel Implementation of
Moded Flat GHC.
New Generation Computing, Vol.11, No.3-4, pp.323-341, July 1993.
We proposed in [Ueda and Morita 1990] a new, message-oriented implementation technique for Moded Flat GHC that compiled unification for data transfer into message passing. The technique was based on constraint-based program analysis, and significantly improved the performance of programs that used goals and streams to implement reconfigurable data structures. In this paper we discuss how the technique can be parallelized. We focus on a method for shared-memory multiprocessors, called the shared-goal method, though a different method could be used for distributed-memory multiprocessors. Unlike other parallel implementations of concurrent logic languages which we call process-oriented, the unit of parallel execution is not an individual goal but a chain of message sends caused successively by an initial message send. Parallelism comes from the existence of different chains of message sends that can be executed independently or in a pipelined manner. Mutual exclusion based on busy waiting and on message buffering controls access to individual, shared goals. Typical goals allow last-send optimization, the message-oriented counterpart of last-call optimization. We have built an experimental implementation on Sequent Symmetry. In spite of the simple scheduling currently adopted, preliminary evaluation shows good parallel speedup and good absolute performance for concurrent operations on binary process trees.
Discusses how to realize the message-oriented implementation scheme, a new implementation scheme for Moded Flat GHC, on shared-memory parallel computers, and evaluates its performance. Various issues related to mutual exclusion are examined. The implementation succeeded in exploiting fine-grained parallelism on a tightly coupled parallel computer.
[pdf]
Kazunori Ueda, The Fifth Generation Project: Personal Perspectives.
Commun. ACM, Vol.36, No.3, pp.65-76, March 1993.
(The paper is available from the ACM Digital Library as a joint article with
Kazuhiro Fuchi, Robert Kowalski, Koichi Furukawa, Ken Kahn, Takashi Chikayama, and Evan Tick under the title "Launching the New Era".)
Discusses in detail the design process of the kernel language in the Fifth Generation Computer Systems project. It describes the requirements imposed on the kernel language, a comparison between logic programming and concurrent logic programming as the underlying paradigm, how GHC was born, and future prospects of the kernel language and concurrent logic programming.
[pdf]
Makoto Yokoo and Kazunori Ueda,
A Comparison of Constraint Satisfaction Methods on Parallel Logic Languages (in Japanese).
Transactions of Information Processing Society of Japan, Vol.32, No.3 (1993), pp.296-303.
Proposes programming paradigms for solving constraint satisfaction problems in the field of artificial intelligence using parallel logic languages, and compares the performance of various versions using example problems.
[pdf]
Kazunori Ueda and Takashi Chikayama, Design of the Kernel Language for the
Parallel Inference Machine.
The Computer Journal, Vol.33, No.6, pp.494-500, December 1990.
We review the design of the concurrent logic language GHC, the basis of the kernel language for the Parallel Inference Machine being developed in the Japanese Fifth Generation Computer Systems project, and the design of the parallel language KL1, the actual kernel language being implemented and used. The key idea in the design of these languages is the separation of concurrency and parallelism. Clarification of concepts of this kind seems to play an important role in bridging the gap between parallel inference systems and knowledge information processing in a coherent manner. In particular, design of a new kernel language has always encouraged us to re-examine and reorganize various existing notions related to programming and to invent new ones.
Discusses the design of the kernel language for the Parallel Inference Machine. It describes the design principles of the kernel language, the concurrent programming language GHC on which it is based and its evolution, the features of the parallel kernel language KL1 designed based on GHC, and the status of implementation. It argues that a major feature of the kernel language design is the clear separation between the concurrency constructs provided by GHC, a theoretical concurrent language, and the parallelism constructs provided by KL1, a practical parallel language.
[pdf]
Kazunori Ueda,
Designing a Concurrent Programming Language.
In Proceedings of an International Conference organized by the IPSJ to
Commemorate the 30th Anniversary (InfoJapan'90), Information Processing
Society of Japan, October 1990, pp.87-94.
This paper reviews the design and the evolution of a concurrent programming language Guarded Horn Clauses (GHC). GHC was born from a study of parallelism in logic programming, but turned out to be a simple and flexible concurrent programming language with a number of nice properties. We give both an abstract view of GHC computation based on the notion of transactions and a concrete view, namely an operational semantics, based on reductions. Also, we discuss in detail the properties of GHC such as its atomic operations, which have much to do with the design of GHC.
[pdf]
Kazunori Ueda and Masao Morita,
A New Implementation Technique for Flat GHC.
In Proceedings of the Seventh International Conference on Logic
Programming (ICLP'90),
The MIT Press, Cambridge, MA, 1990, pp. 3-17.
Concurrent processes can be used both for programming computation and for programming storage. Previous implementations of Flat GHC, however, have been tuned for computation-intensive programs, and perform poorly for storage-intensive programs (such as programs implementing reconfigurable data structures using processes and streams) and demand-driven programs. This paper proposes an optimization technique for programs in which processes are almost always suspended. The technique compiles unification for data transfer into message passing. Instead of reducing the number of process switching operations, the technique optimizes the cost of each process switching operation and reduces the number of cons operations for data buffering. The technique is based on a mode system which is powerful enough to analyze bidirectional communication and streams of streams. The mode system is based on mode constraints imposed by individual program clauses rather than on global dataflow analysis. Benchmark results show that the proposed technique significantly improves the performance of storage-intensive programs and demand-driven programs compared with a conventional native-code implementation. It also improves the performance of some computation-intensive programs. Although many problems remain to be solved particularly with regard to parallelization, we expect that the proposed technique will expand the application areas of concurrent logic languages.
[pdf]
Kazunori Ueda, Parallelism in Logic Programming.
In Information Processing 89, Ritter, G.X. (ed.), North-Holland,
pp.957-964, August 1989.
This paper discusses two aspects of parallelism in logic programming: parallelism as a computational formalism (often referred to as concurrency) and the implications of parallelism with regard to performance. Two alternatives for a parallel logic programming system are compared in detail. One allows programmers to describe processes and communication using concurrent logic languages, and the other attempts to exploit the parallelism of ordinary logic programs.
Examines parallelism in logic programming from two aspects: concurrency as a framework for expressing computation, and parallelism for efficient execution. It discusses in detail the differences and the relationship between ordinary logic languages and parallel logic languages. An invited paper at the IFIP 11th World Computer Congress.
[pdf]
Kazunori Ueda, Guarded Horn Clauses: A Parallel Logic Programming
Language with the Concept of a Guard.
ICOT Technical Report TR-208, Institute for New Generation Computer
Technology (ICOT), Tokyo, 1986.
Revised version in Programming of Future Generation Computers,
Nivat, M. and Fuchi, K. (eds.), North-Holland, Amsterdam, pp.441-456, 1988.
This paper defines a parallel logic programming language called Guarded Horn Clauses (GHC), introducing the ideas that went into the design of the language. It also points out that GHC can be viewed as a process description language in which input and output can be described naturally by treating the outside world as a process. The relationship between GHC and logic programming in the original, strict sense is also discussed. Some familiarity with GHC and/or other parallel logic programming languages will be helpful in reading this paper, though it is not indispensable.
A paper that defines GHC more rigorously, contrasting it with the operational semantics of ordinary logic programs based on resolution. It also discusses the interpretation of GHC as a process description language, the design of input and output in logic programming languages, and GHC from the viewpoint of ordinary logic programming. Presented at the First Franco-Japanese Symposium on Programming of Future Generation Computers, held in October 1986.
[pdf]
Kazunori Ueda, Theory and Practice of Concurrent Systems: The Role of Kernel
Language in the FGCS Project.
In Proc. Int. Conf. on Fifth Generation Computer Systems 1988
(FGCS'88), ICOT, Tokyo, November 1988, pp.165-166.
Discusses the role played by the kernel language (based on GHC) in the Fifth Generation Computer Systems project and future research directions related to the kernel language. A position paper for the panel discussion "Theory and Practice of Concurrent Systems" at the conference.
[pdf]
Kazunori Ueda and Koichi Furukawa, Transformation Rules for GHC Programs.
In Proc. Int. Conf. on Fifth Generation Computer Systems 1988
(FGCS'88), ICOT, Tokyo, November 1988, pp.582-591.
Transformation rules for (Flat) GHC programs are presented, which refine the previous rules proposed by one of the authors. The rules are based on unfolding/folding and are novel in that they are stated in terms of idempotent substitutions with preferred directions of bindings. They are more general than the old rules in that no mode system is assumed and that the rule of folding is included. The presentation of the rules suggests that idempotent substitutions with preferred directions of bindings are an appropriate tool for modeling information in (concurrent) logic programming. A semantic model is given which associates a multiset of goals with the set of possible finite sequences of transactions (via substitutions) with the multiset. A transformation preserves the set of transaction sequences that are made without the risk of the failure of unification. The model also deals with anomalous behavior such as the failure of unification and deadlock, so it can be shown with the same model that the transformation cannot introduce those anomalies. Applications of the transformation technique include the fusion of communicating parallel processes. Some familiarity with GHC is assumed.
Proposes transformation rules for GHC programs based on unfolding and folding. It extends transformation rules for logic programming and discusses the preservation of properties specific to concurrent programs (synchronization conditions, normal behavior, anomalous behavior, etc.). It also touches on a semantic model for justifying the transformation rules.
[pdf]
Kazunori Ueda, Making Exhaustive Search Programs Deterministic, Part II.
In Proc. Fourth Int. Conf. on Logic Programming (ICLP'87),
The MIT Press, Cambridge, Mass., U.S.A., pp.356-375, May 1987.
This paper complements the previous paper "Making Exhaustive Search Programs Deterministic" which showed a systematic method for compiling a Horn-clause program for exhaustive search into a GHC program or a Prolog program with no backtracking. This time we present a systematic method for deriving a deterministic logic program that simulates coroutining execution of a generate-and-test logic program. The class of compilable programs is sufficiently general, and compiled programs proved to be efficient. The method can also be viewed as suggesting a method of compiling a naive logic program into (very) low-level languages.
[pdf]
(author version)
Kazunori Ueda, Making Exhaustive Search Programs Deterministic.
New Generation Computing, Vol.5, No.1, pp.29-44, 1987.
This paper presents a technique for compiling a Horn-clause program intended for exhaustive search into a GHC (Guarded Horn Clauses) program. The technique can be viewed also as a transformation technique for Prolog programs which compiles away the 'bagof' primitive and non-determinate bindings. The class of programs to which our technique is applicable is shown with a static checking algorithm; it is nontrivial and could be extended. An experiment on a compiler-based Prolog system showed that our technique improved the efficiency of exhaustive search by 6 times for a permutation generator program. This compilation technique is important also in that it exploits the AND-parallelism of GHC for parallel search.
Proposes a method for automatically compiling Prolog-style search into efficient programs in GHC and other parallel logic languages. It answers the criticism that parallel logic languages lack automatic search capability, and also includes an examination and critique of Prolog's all-solutions primitives. It became the starting point of subsequent research on programming search problems in parallel logic languages.
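The transformation described in this entry turns a backtracking generate-and-test search into a deterministic program that computes all solutions at once. As a rough analogue in Python (not the paper's GHC compilation scheme), the paper's own example of a permutation generator can be made deterministic by letting each choice point expand a list of partial solutions instead of backtracking:

```python
def permutations(xs):
    """Deterministic exhaustive search: each nondeterministic choice point
    of the naive generate-and-test program becomes explicit iteration over
    a list of partial solutions, so no backtracking is needed."""
    solutions = [([], list(xs))]           # (chosen-so-far, remaining)
    for _ in xs:                           # one level of choice per element
        solutions = [
            (chosen + [r], rest[:i] + rest[i + 1:])
            for chosen, rest in solutions
            for i, r in enumerate(rest)    # every possible next choice
        ]
    return [chosen for chosen, _ in solutions]

assert sorted(permutations([1, 2, 3])) == [
    [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
```

The flat list of partial solutions plays the role of the collected bindings that the paper's compiled GHC programs maintain without the 'bagof' primitive.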
[pdf]
Kazunori Ueda and Takashi Chikayama, Concurrent Prolog Compiler on Top of Prolog.
In Proc. 1985 Symp. on Logic Programming,
IEEE Computer Society, pp.119-126, July 1985.
A Concurrent Prolog compiler, whose target language is (sequential) Prolog, was implemented in Prolog. The object program obtained can further be compiled into machine codes by a Prolog compiler. Due to the similarity among the source, target and implementation languages, the compiler and the runtime support were small and very rapidly developed. Benchmark tests showed that (twice) compiled Concurrent Prolog programs ran 2.7 to 4.4 times faster and 2.7 to 5.3 times slower than comparable Prolog programs running on the interpreter and compiler, respectively, of the same Prolog system. After these experiments, the Concurrent Prolog compiler was modified to obtain a compiler of the new parallel logic programming language, GHC (Guarded Horn Clauses), and almost the same efficiency was achieved. These compilers will serve for practice of parallel logic programming.
Proposes, implements, and evaluates a scheme for compiling Concurrent Prolog programs into efficient Prolog programs. The resulting system was the fastest Concurrent Prolog implementation of its time. The paper also introduces a GHC implementation obtained by adapting the system.
[pdf]
Kazunori Ueda, Guarded Horn Clauses.
ICOT Technical Report TR-103,
Institute for New Generation Computer Technology (ICOT), Tokyo, 1985.
Revised version [pdf] in
Proc. Logic Programming '85,
Wada, E.(ed.), Lecture
Notes in Computer Science 221, Springer-Verlag, Berlin Heidelberg New York
Tokyo, 1986, pp.168-179.
Also in Concurrent Prolog: Collected Papers,
Shapiro, E.Y. (ed.), The MIT Press, Cambridge, 1987, pp.140-156.
A set of Horn clauses augmented with a `guard' mechanism is shown to be a simple and yet powerful parallel logic programming language.
(The first paper on the concurrent logic language GHC (Guarded Horn Clauses).)
[pdf]
Nakashima, H., Tomura, S. and Ueda, K.,
What is a Variable in Prolog?
In Proc. International Conference on Fifth Generation Computer
Systems 1984 (FGCS'84), ICOT, Tokyo, pp. 327-332, November 1984.
We review the treatment of variables in Prolog.
Prolog does not have sufficient features to manipulate variables as
data objects. This paper introduces two new language concepts:
freezing and melting of
variables in terms. Accordingly, system predicates having the
freezing and melting capabilities, i.e., those for input-output and
database handling, are revised and made more basic.
This revision of primitives increases Prolog's ability to handle
variables, hence clauses, as data objects. As a result, a more
sophisticated debugger, global logical variables, efficient
meta-inference, and so on become realizable.
We present an efficient implementation for the freeze and melt
operations. In particular, melting of a frozen term is achieved
in constant time irrespective of its size.
[pdf]
Kazunori Ueda and Takashi Chikayama,
Efficient Stream/Array Processing in Logic Programming Languages.
In Proc. Int. Conf. on Fifth Generation Computer Systems 1984 (FGCS'84),
ICOT, Tokyo, pp. 317-326, November 1984.
The Concurrent Prolog predicate for merging n input streams is investigated, and a compilation technique for getting its efficient code is presented. Using the technique, data are transferred with a delay independent of n. Furthermore, it is shown that the addition and the removal of an input stream can be done within an average time of O(1). The predicate for distributing data on an input stream to n output streams can also be realized as efficiently as n-ary merge. The compilation technique for the distribute predicate can further be applied to the implementation of mutable arrays that allow constant-time accessing and updating. Although the efficiency stated above could be achieved by a sophisticated compiler, the codes should be provided directly by the system to get rid of the bulk of source programs and the time required to compile them.
Proposes implementation techniques that efficiently realize many-to-one interprocess communication, one-to-many interprocess communication, and array operations in parallel logic languages, all of which had been considered inefficient. Includes an evaluation of the computational complexity of each basic operation under the proposed scheme. Simplified versions of the method have since been implemented in many parallel logic language systems.
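The cost model established for the merge predicate above — per-item transfer delay independent of the number n of input streams, and O(1) average-time attachment and removal of a stream — corresponds, in a procedural setting, to all producers enqueuing into one shared output queue. The following Python sketch mirrors only that cost model (the class and method names are hypothetical; it is not the Concurrent Prolog compilation technique itself):

```python
from collections import deque

class Merger:
    """Fan-in of n input streams into one output stream. Each send is O(1)
    regardless of n, and attaching or detaching an input stream is O(1)
    on average -- the complexity bounds the paper establishes for the
    compiled n-ary merge predicate."""

    def __init__(self):
        self.out = deque()        # the single merged output stream
        self.inputs = set()       # names of currently attached inputs

    def add_input(self, name):    # O(1)
        self.inputs.add(name)

    def remove_input(self, name): # average O(1)
        self.inputs.discard(name)

    def send(self, name, item):   # O(1), independent of n
        assert name in self.inputs
        self.out.append((name, item))

m = Merger()
for producer in ("a", "b", "c"):
    m.add_input(producer)
m.send("a", 1)
m.send("c", 2)
m.remove_input("b")
assert list(m.out) == [("a", 1), ("c", 2)]
```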
[pdf]
Hideyuki Nakashima, Kazunori Ueda, and Satoru Tomura,
Side-Effect-Free Input/Output and String Manipulation in Predicate Logic Languages (in Japanese).
Transactions of Information Processing Society of Japan, Vol.24, No.6 (1983), pp.745-753.
Predicate logic languages such as Prolog are characterized by the ability to write programs in a form close to their specifications. Input and output, however, have usually been performed through side effects, as in imperative languages. This paper discusses side-effect-free input/output for Prolog performed through variables rather than predicates. Moreover, in many conventional languages, input/output bundles data transfer together with conversion between internal and external representations. Here, input/output is regarded as mere transfer of character strings, and conversion is described using string manipulation facilities provided by the language. These decisions make the notion of input/output simple, easy to understand, and flexible. Input/output with sequential-access media is performed simply through string variables, and with a suitable data structure, output to displays with window structures can also be handled. String manipulation can be described concisely using Prolog's pattern matching. Since patterns and predicates are treated uniformly by adopting the idea of executable patterns, the descriptive power of patterns is strong even though concatenation is the only primitive operation introduced on strings. The proposed facilities are being implemented on Prolog/KR; more efficient implementation techniques are also touched upon. Remaining issues include the control of backtracking in complex pattern matching.
[pdf]
稲村 実，豊田 弘道，上田 和紀，大森 隆司，藤村 貞夫,
Fast Processing of Multispectral Images Using a Binary Decision Tree (in Japanese).
Transactions of the Society of Instrument and Control Engineers, Vol.15, No.4 (1979), pp.486-491.
Remotely sensed multispectral images are analyzed by methods such as maximum likelihood, Euclidean minimum distance, and correlation, but a new, fast analysis method is sought because the processing speed of the conventional methods is low. This paper describes multispectral image processing by means of a binary decision tree. To assess its ability in image analysis, the efficiency of the binary decision tree is compared with that of the maximum likelihood method under the same conditions. As a result, the binary decision tree was shown to be twenty times as fast as the maximum likelihood method for a nine-class, six-feature classification. Moreover, the processing time of the binary decision tree does not increase even if the number of features increases; this property is another strong point of the binary decision tree. In contrast, the processing time of the maximum likelihood method is proportional to the square of the number of selected features. On the other hand, the classification accuracy of the binary decision tree was about equal to that of the maximum likelihood method.
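The speed advantage reported above comes from the fact that classifying a pixel with a decision tree only follows one root-to-leaf path, so the cost grows with tree depth rather than with the number of spectral features. A minimal Python sketch of that structure (the node layout, feature indices, thresholds, and class labels are hypothetical, not taken from the paper):

```python
def classify(tree, pixel):
    """Classify a pixel by walking one root-to-leaf path of a binary
    decision tree; cost is proportional to the depth of the path taken,
    not to the total number of spectral features."""
    node = tree
    while isinstance(node, tuple):             # internal node
        feature, threshold, left, right = node
        node = left if pixel[feature] <= threshold else right
    return node                                # leaf = class label

# Internal nodes are (feature index, threshold, left subtree, right subtree);
# leaves are class labels. Values below are made up for illustration.
tree = (0, 0.5,
        (1, 0.3, "water", "soil"),
        "vegetation")

assert classify(tree, {0: 0.2, 1: 0.1}) == "water"
assert classify(tree, {0: 0.9, 1: 0.9}) == "vegetation"
```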
Kazunori Ueda, Guarded Horn Clauses.
[pdf (1013898 bytes)]
Doctoral thesis, The University of Tokyo, 1986.
This thesis introduces the programming language Guarded Horn Clauses which is abbreviated to GHC. Guarded Horn Clauses was born from the examination of existing logic programming languages and logic programming in general, with special attention paid to parallelism.
The main feature of GHC is its extreme simplicity compared with other parallel programming languages. GHC is a restriction of a resolution-based theorem prover for Horn-clause sentences. The restriction has two aspects: one is the restriction on data flow caused by unification, and the other is the introduction of choice nondeterminism. The former is essential for a general-purpose language, and it also provides GHC with a synchronization primitive. The latter is required by the intended applications, which include systems interacting with the outside world. What is characteristic of GHC is that all the restrictions have been imposed as the semantics given to the sole additional syntactic construct, the guard.
Although Guarded Horn Clauses can be classified into the family of logic programming languages, it has a close relationship to other formalisms including dataflow languages, Communicating Sequential Processes, and functional languages for multiprocessing. Except for the lack of higher-order facilities, GHC can be viewed as a generalization of these frameworks. The simplicity and generality of GHC will make it suitable as a standard not only for parallel logic programming languages but for parallel programming languages in general. Moreover, it is simple enough to be regarded as a computation model as well as a programming language.
Attention was always paid to the possibility of efficient implementation during the design stage of GHC. We showed that stream merging and distribution, which are expected to be heavily used, can be implemented with the same time complexity as many-to-one communication in procedural languages. Furthermore, we made available an efficient compiler-based implementation of a subset of GHC on top of Prolog. GHC deliberately gave up completeness as a theorem prover; this was a design decision, not a compromise. Nevertheless, it can be used for efficient implementation of exhaustive solution search for Horn-clause programs. We showed how to automatically compile a Horn-clause program for exhaustive search into a GHC program.
Yuxi Fu and Kazunori Ueda (eds.), Mathematical Structures in Computer Science, Vol.25, No.4, December 2014.
Selected papers from the Eighth Asian Symposium on Programming Languages and Systems (APLAS 2010).
Kazunori Ueda (ed.), Programming Languages and Systems: 8th Asian Symposium, APLAS 2010. Lecture Notes in Computer Science 6461, Springer-Verlag, 2010.
Proceedings of the 8th Asian Symposium on Programming Languages and Systems, held in Shanghai in November and December 2010.
Herbert Kuchen and Kazunori Ueda (eds.), Functional and Logic Programming---5th International Symposium on Functional and Logic Programming, FLOPS 2001. Lecture Notes in Computer Science 2024, Springer-Verlag, 2001.
Proceedings of the 5th International Symposium on Functional and Logic Programming, held in Tokyo in March 2001.
R. K. Shyamasundar and Kazunori Ueda (eds.), Advances in Computing Science---ASIAN'97. Lecture Notes in Computer Science 1345, Springer-Verlag, 1997.
Proceedings of the Asian Computing Science Conference, held in Kathmandu in December 1997.
Proceedings of the Joint Symposium on Parallel Processing (JSPP'97) (in Japanese)
Information Processing Society of Japan, May 1997.
Proceedings of the parallel processing symposium held in Kobe in May 1997.
「楽しいプログラミング: 記号の世界」 (The Joy of Programming: The World of Symbols)
Hideyuki Nakashima and Kazunori Ueda, Iwanami Shoten, Tokyo, May 1992.
An introductory book (in Japanese) on symbolic programming and knowledge information processing, consisting of nine chapters and an appendix. Using Prolog, it explains the basic concepts and programming techniques of symbolic processing. Topics from artificial intelligence, such as backtracking search, planning, and natural language dialogue, are used extensively so that readers can grasp the basics of knowledge programming.
Logic Programming: Proceedings of the 1991 International Symposium
Vijay Saraswat and Kazunori Ueda (eds.),
The MIT Press, Cambridge, Mass., U.S.A., October 1991.
Proceedings of the International Logic Programming Symposium (ILPS'91), held in San Diego in October 1991.
Special Issue: Selected Papers from the Seventh International
Conference on Logic Programming, 1990
Kazunori Ueda and Takashi Chikayama (eds.),
New Generation Computing, Vol.9, Nos.3,4 (1991)
Selected papers from the above conference (ICLP'90).
「続・新しいプログラミング・パラダイム」 (New Programming Paradigms, Continued)
Tetsuo Ida and Jiro Tanaka (eds.), co-authored by Kazunori Ueda and ten others,
Kyoritsu Shuppan, Tokyo, November 1990.
An omnibus-style survey (in Japanese) of the various programming paradigms that developed in the late 1980s, consisting of nine chapters. Ueda wrote Chapter 1, "Concurrent Programming and GHC" (pp.1-34).
「並列論理型言語GHCとその応用」 (The Parallel Logic Language GHC and Its Applications)
Supervised by Kazuhiro Fuchi; edited by Koichi Furukawa and Fumio Mizoguchi; co-authored by Koichi Furukawa, Akikazu Takeuchi, Kazunori Ueda, and six others,
Kyoritsu Shuppan, Tokyo, September 1987.
A compilation (in Japanese) of the research results on the concurrent logic programming language GHC up to that time. It consists of an introduction to GHC, a comparison of parallel logic languages, the basics of GHC, GHC programming techniques, exhaustive search in GHC, GHC application programs (1)-(4), GHC programming environments, and a GHC implementation on top of Prolog. Ueda wrote Chapter 3, "The Basics of GHC" (pp.33-66), Chapter 5, "Exhaustive Search in GHC" (pp.93-115), and the appendix, "A GHC Implementation on Top of Prolog" (pp.217-270).
[pdf] Kazunori Ueda, Interlocking 3D-printed bars, trusses and space frames to build arbitrarily large structures. engrXiv.org, May 2022, DOI: https://doi.org/10.31224/2338 .
It is shown that the 'Tsugite' (joint) technology used in the traditional Japanese construction method turns out to be highly useful and practical in making small 3D-printed modules that can then be assembled to build arbitrarily large structures made up of bars and trusses. Of numerous variations of the Tsugite technology, we choose and focus on Okkake Daisen Tsugi, one of the most widely used methods for joining beams and columns, and study how it can be scaled down in such a way that it can be printed with low-cost FDM 3D printers using PETG and PLA filaments. Our tensile test shows that a 4.8 mm square bar with an Okkake Daisen Tsugi joint comes with a breaking load of 160-210 N. We further applied our idea to build arbitrarily large, functional 2D trusses and 3D trusses (space frames), which are expected to open up many new applications.
[pdf]
竹口 輝，和田 亮，松本 翔太，細部 博史，上田 和紀,
An Algorithm for Translating Hybrid Constraint Language Programs into Hybrid Automata (in Japanese).
In Proc. 29th Annual Conference of the Japan Society for Software Science and Technology (JSSST), 2A-3, 2012.
Systems whose states change continuously with the passage of time and whose states and governing equations change discretely are called hybrid systems. HydLa is a language that describes hybrid systems by means of constraints, aiming to support system verification by performing simulation with guaranteed accuracy. Until now, it has not been clear how to verify properties of HydLa-described hybrid systems over infinite time. By translating a HydLa program into a hybrid automaton, one of the standard modeling formalisms for hybrid systems, existing verifiers for hybrid automata become applicable, making it possible to verify the reachability and safety of systems over infinite time. This paper describes an algorithm for translating HydLa programs into hybrid automata.
[pdf]
Kazunori Ueda, Daisuke Ishii, and Hiroshi Hosobe,
HydLa: A Constraint-Based Modeling Language for Hybrid Systems (in Japanese).
In Proc. 5th Symposium on Science and Technology for System Verification,
Research Center for Verification and Semantics, National Institute of Advanced Industrial Science and Technology (AIST),
pp.1-6, 2008.
Hybrid systems are systems involving continuous changes of states
and discrete changes of both states and the equations governing their behavior.
Descriptions of hybrid systems have typically been made by extending
automata and Petri nets, but we aim at establishing a
constraint-based framework with a view to handling uncertain values,
guaranteeing the results of computation, and unifying simulation and
verification.
This paper introduces a constraint-based modeling language HydLa for
hybrid systems. The two important features of HydLa are: (1) it
features constraint hierarchies to facilitate well-defined modelling
and (2) it provides simple syntax and semantics by employing
concepts and notations of mathematics and logic wherever possible.
[pdf]
上田和紀: Programs and Symmetry.
In Proc. Summer Programming Symposium "Astonishing Programming",
Information Processing Society of Japan, pp.69-74, 2005.
Through examples, we consider symmetry and its significance in programs, particularly programs based on rewriting. Programs written with symmetry in mind appear to have value beyond mere beauty, such as program invariants and reversibility, and symmetry also plays a major role in the thinking behind programming.
[pdf]
Kazunori Ueda and Norio Kato,
Programming with Logical Links: Design of the LMNtal Language.
In Proc. Third Asian Workshop on Programming Languages and Systems (APLAS 2002), pp.115-126, 2002.
We propose LMNtal, a simple language model based on the rewriting of hierarchical graphs that use logical variables to represent links. The two major goals of the model are (i) to unify various computational models based on multiset rewriting and (ii) to serve as the basis of a truly general-purpose language covering various platforms ranging from wide-area to embedded computation. Another important contribution of the model is that it greatly facilitates programming with dynamic data structures.
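The multiset-rewriting style that LMNtal generalizes can be illustrated with a minimal toy in Python. This sketch rewrites a flat multiset of atoms until no rule applies; real LMNtal additionally has logical links and hierarchical membranes, which are omitted here.

```python
from collections import Counter

# Toy multiset rewriting: a rule (lhs, rhs) removes the multiset lhs
# from the state and adds rhs, whenever lhs is present. (Illustrative
# sketch only; LMNtal atoms also carry links and live in membranes.)

def apply_rule(state, lhs, rhs):
    """Apply one rule in place; return True if it fired."""
    if all(state[a] >= n for a, n in lhs.items()):
        for a, n in lhs.items():
            state[a] -= n
        state.update(rhs)                # Counter.update adds counts
        return True
    return False

def run(state, rules):
    """Rewrite until no rule applies (one fixed scheduling order)."""
    changed = True
    while changed:
        changed = any(apply_rule(state, l, r) for l, r in rules)
    return +state                        # drop zero counts

# Example rule: a, b -> c (pairwise combination).
rules = [(Counter({"a": 1, "b": 1}), Counter({"c": 1}))]
result = run(Counter({"a": 3, "b": 2}), rules)
```

Starting from {a, a, a, b, b}, the rule fires twice and the computation terminates with {c, c, a}.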
[pdf] 上田和紀，加藤紀夫: From GHC to LMNtal. Information Processing Society of Japan, 2002 Summer Programming Symposium, September 2002.
[pdf] 上田和紀，加藤紀夫: Programming with Logical Links. In Proc. 19th Annual Conference of the Japan Society for Software Science and Technology (JSSST 2002), September 2002.
We propose LMNtal, a simple concurrent language model based on the rewriting of hierarchical graphs whose links are logical variables. The model aims to unify many computational models based on multiset rewriting, and to serve as the basis of a truly general-purpose programming language covering everything from wide-area distributed computation to very small-scale computation. It also aims to make programming with dynamic data structures substantially easier.
[pdf] 加藤紀夫，上田和紀: Occurs-Check Analysis of Concurrent Logic Programs by Asymptotically Uniform Strengthening of Mode Constraints. In Proc. 19th Annual Conference of the Japan Society for Software Science and Technology (JSSST 2002), September 2002.
The occurs check verifies that unification does not create infinite (cyclic) structures, but because it is costly, it has been deliberately omitted in many logic programming systems. This omission, however, causes problems: soundness as a prover for first-order predicate logic is lost, and guaranteeing program termination becomes difficult, so analysis techniques that ensure the safety of the omission are needed. In this work, we developed an algorithm that statically guarantees that a concurrent logic program with input/output modes creates no infinite structures, by checking that its mode constraints can be strengthened so that the modes of certain sets of argument paths become asymptotically uniform. This makes it possible to analyze concurrent logic programs performing cyclically dependent bidirectional communication, which previous methods could not handle.
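The runtime check whose omission the paper justifies can be sketched with textbook first-order unification in Python. Variables are represented as uppercase strings and compound terms as tuples; this shows the occurs check itself, not the paper's static analysis.

```python
# Standard unification with an occurs check (illustrative sketch).
# The paper's contribution is a static analysis proving the check
# can be safely omitted; here we show the check being performed.

def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Does variable v occur inside term t (tuples are compound terms)?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.isupper():       # a is a variable
        if occurs(a, b, subst):
            return None                          # would build X = f(X)
        return {**subst, a: b}
    if isinstance(b, str) and b.isupper():
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and \
       a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

cyclic = unify("X", ("f", "X"), {})   # rejected by the occurs check
ok = unify("X", ("f", "Y"), {})       # succeeds
```

Omitting the `occurs` call makes unification constant-time per variable binding, which is exactly why implementations skip it; the paper's analysis identifies programs for which skipping is safe.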
[pdf] 松村量，高山啓，高木祐介，加藤紀夫，上田和紀: Design and Implementation of the Distributed Language System DKLIC. In Proc. 19th Annual Conference of the Japan Society for Software Science and Technology (JSSST 2002), September 2002.
Concurrent logic languages are well suited to describing distributed applications: they feature implicit parallelism and automatic synchronization, and static analysis can guarantee the consistency of communication protocols. We are pursuing the DKLIC project, which aims to realize a declarative wide-area distributed programming environment by providing APIs for distribution on top of KLIC, an implementation of the concurrent logic language KL1. So far, we have extended KLIC so that the unification of two undefined (unbound) variables can be detected, and implemented a mechanism that automatically eliminates useless relay nodes of distributed logic variables to form optimal communication paths.
[pdf] 網代育大，上田和紀: Efficient Generation of Plausible Programs by Iterative-Deepening A* Search. In Proc. 17th Annual Conference of the Japanese Society for Artificial Intelligence, 1E3-02, June 2002.
We have implemented the Kima system, which automatically corrects wrong occurrences of variable symbols in concurrent logic programs under strong moding and typing, in the absence of explicit declarations. Automated error correction is attempted basically by generate-and-test search, namely the generation of possible rewritings and the computation of their principal modes and types. The search space is kept small because the locations of bugs can be confined to small regions of the program text by computing minimal inconsistent subsets of mode and type constraints. From the semantics of the mode system, we have found that there are syntactic constraints that plausible programs should observe. These syntactic constraints, presented as heuristic rules, turned out to be quite effective not only in reducing the number of alternatives Kima proposes but also in optimizing the search for alternatives to errors. We explain this optimization technique by presenting the algorithm of an iterative-deepening A* search ordered by plausibility.
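The search scheme referred to above can be sketched generically. This is a textbook iterative-deepening A* in Python, with an admissible heuristic bounding the f-cost at each iteration; the toy graph and heuristic are illustrative, not Kima's plausibility measure.

```python
# Generic IDA*: depth-first search bounded by f = g + h, with the
# bound raised to the smallest exceeded f-value on each iteration.
# (Textbook sketch; Kima orders rewritings by plausibility instead.)

def ida_star(start, is_goal, successors, h):
    """Return a path to a goal, or None if none exists."""
    bound = h(start)
    path = [start]

    def search(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                     # cut off; report exceeded cost
        if is_goal(node):
            return "FOUND"
        minimum = float("inf")
        for step_cost, child in successors(node):
            if child in path:            # avoid cycles on the path
                continue
            path.append(child)
            t = search(g + step_cost, bound)
            if t == "FOUND":
                return t
            minimum = min(minimum, t)
            path.pop()
        return minimum

    while True:
        t = search(0, bound)
        if t == "FOUND":
            return path
        if t == float("inf"):
            return None
        bound = t                        # deepen to the next f-bound

# Example: shortest path on the line graph 0 - 1 - 2 - 3.
goal = 3
path = ida_star(0, lambda n: n == goal,
                lambda n: [(1, n - 1), (1, n + 1)] if 0 < n < goal
                          else ([(1, n + 1)] if n == 0 else [(1, n - 1)]),
                lambda n: abs(goal - n))
```

IDA* keeps only the current path in memory, which is what makes it attractive when, as in Kima, the space of candidate rewritings is large.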
上田和紀: Parallel Operations on Self-Adjusting Binary Trees. In Proc. 15th Annual Conference of the Japan Society for Software Science and Technology (JSSST 1998), September 1998.
We propose algorithms that enable parallel operations on the self-adjusting binary trees (splay trees) of Sleator and Tarjan. The proposed algorithms permit pipelined parallel execution of multiple update, insertion, and deletion operations on the same tree, aiming to achieve both high throughput of operation sequences (the number of operations processed per unit time) and good response of individual operations (amortized complexity). We give theoretical results on the optimality of throughput and on the logarithmic response of insertion. Although deletion preserves good properties of the tree shape, its optimality cannot be proved within Sleator and Tarjan's framework; we discuss this point as well.
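The sequential data structure the paper parallelizes can be sketched as follows: the standard recursive splay and splay-based insertion in Python. This shows only the classic sequential operations, not the paper's pipelined parallel algorithms.

```python
# Sequential splay tree (Sleator & Tarjan): each access rotates the
# accessed key to the root via zig, zig-zig, and zig-zag steps.
# (Standard textbook version; the paper studies parallel pipelining.)

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring the node with `key` (or its last-visited neighbor) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                   # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def insert(root, key):
    """Splay `key` to the root, then split and make it the new root."""
    root = splay(root, key)
    if root is None or root.key == key:
        return root or Node(key)
    new = Node(key)
    if key < root.key:
        new.left, new.right, root.left = root.left, root, None
    else:
        new.right, new.left, root.right = root.right, root, None
    return new

root = None
for k in [3, 1, 4, 1, 5, 9, 2, 6]:
    root = insert(root, k)
root = splay(root, 4)
```

Because every operation restructures the path from the root, concurrent operations on the same tree conflict near the root; reconciling that restructuring with pipelined throughput is exactly the difficulty the paper addresses.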
上田和紀: Reference-Count Analysis of Concurrent Logic Programs. IPSJ SIG Programming (SWoPP'98), August 1998.
Symbolic processing languages, including functional and logic programming languages, achieve ease of programming and safety of memory references through automatic memory management and the hiding of pointers. The most widely used technique today is copying garbage collection, which has various desirable properties. However, if static (i.e., compile-time) analysis of the storage allocation and references performed during program execution is possible, code with performance closer to that of imperative programs can be generated. Moreover, for parallel and distributed processing and for real-time processing, it is very important to identify a class of programs whose memory can be managed without garbage collection. As a basic technique for such analysis in the concurrent logic language Moded Flat GHC, we propose a framework of reference-count analysis and prove its safety. Reference-count analysis uses static program analysis to distinguish data structures that may have multiple reference pointers from those that have exactly one; for the latter, compile-time garbage collection and local reuse of storage become possible. Like our previously proposed mode analysis, the technique is characterized by a constraint-based formulation, and it has already been partially implemented as one of the features of klint version 2, a static analyzer for KL1.
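The distinction the analysis draws can be illustrated at run time with a toy reference counter over a term graph. This is only a runtime analogue in Python; the paper infers the same single-reference/shared distinction statically, at compile time, over Moded Flat GHC programs.

```python
# Toy runtime illustration: count incoming pointers to each compound
# term (by object identity). Terms reached by exactly one pointer
# admit local reuse / compile-time reclamation; shared terms do not.
# (The paper's analysis establishes this statically, without running
# the program; this sketch merely illustrates the property itself.)

from collections import Counter

def reference_counts(roots):
    """Count how many pointers reach each tuple-term in the graph."""
    counts = Counter()
    stack = list(roots)
    while stack:
        t = stack.pop()
        if isinstance(t, tuple):
            counts[id(t)] += 1
            if counts[id(t)] == 1:       # traverse each node only once
                stack.extend(t[1:])      # t[0] is the functor name
    return counts

shared = ("cons", 1, ("nil",))
term = ("pair", shared, shared)          # `shared` has two references
counts = reference_counts([term])
```

Here `term` itself is singly referenced and could be reused in place, while `shared` has two incoming pointers, so destructively updating it would be unsafe.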
上田学，菊地賢太郎，土山了士，小林一成，大原拓三，外山孝伸，上田和紀:
Design and Evaluation of the Communication Library of the FOLON Parallel Computer System.
IPSJ SIG Technical Report, Vol.96, No.23 (93-ARC-117), March 1996, pp.31-36.
We built FOLON, a cost-effective parallel computer system composed of personal computers, without any custom hardware work. Using PCI-to-VMEbus adapters, it realizes low-latency distributed shared memory. The communication performance of such a system depends on the design of its communication library layers; this paper reports the design and basic performance of each layer: (1) the device driver, (2) the VMEbus control library, and (3) FCP (FOLON Control Protocol), which provides UDP-level functionality.
[pdf]
Kazunori Ueda, Introduction to Guarded Horn Clauses.
ICOT Technical Report TR-209,
Institute for New Generation Computer Technology (ICOT), Tokyo,
November 1986.
This paper informally introduces a programming language Guarded Horn Clauses (GHC) through program examples. GHC is a parallel programming language devised from investigation of the basic framework and practice of logic programming. It has introduced the guard construct with simple semantics into logic programming to express interacting processes. A GHC program naturally expresses parallelism inherent in the original problem. The simple and uniform framework of GHC should be far easier to understand than the constructs of conventional parallel programming languages. We explain GHC in comparison with Prolog, the best-known logic programming language.