Price: EUR 410.00
Impact Factor 2024: 0.4
Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute:
- solutions by mathematical methods of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Authors: Champarnaud, Jean-Marc | Ouardi, Faissal | Ziadi, Djelloul
Article Type: Research Article
Abstract: The aim of this paper is to describe a quadratic algorithm for computing the equation K-automaton of a regular K-expression as defined by Lombardy and Sakarovitch. Our construction is based on an extension to regular K-expressions of the notion of c-continuation that we introduced to compute the equation automaton of a regular expression as a quotient of its position automaton.
Keywords: regular K-expression, K-automaton, equation K-automaton, c-continuation
DOI: 10.3233/FI-2009-0001
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 1-16, 2009
Authors: Chang, Chin-Chen | Chou, Yung-Chen
Article Type: Research Article
Abstract: Image authentication is an important research topic concerned with maintaining the integrity of digital image contents. Fragile image authentication is a technique for achieving this goal of image content integrity maintenance. This article presents a fragile image authentication scheme based on the concept of wet paper codes. The proposed scheme modifies dry pixels in an image to conceal an image signature, and can exactly detect the tampered area of a tampered image. To save the computation cost of signature embedding, an exclusive-or operation is used in the proposed authentication scheme. The experimental results show that the proposed method not only achieves good visual quality of an authorized image but also successfully detects tampered areas of a tampered image.
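The exclusive-or trick the abstract mentions can be illustrated with a generic block-signature check (a hypothetical sketch, not the paper's actual wet-paper-code scheme): XOR-ing the pixels of each block into one byte gives a cheap signature whose mismatch localizes tampering.

```python
from functools import reduce

def block_signature(block):
    """XOR all pixel values in a block into a single byte."""
    return reduce(lambda a, b: a ^ b, block, 0)

def detect_tampered_blocks(blocks, signatures):
    """Return indices of blocks whose stored signature no longer matches."""
    return [i for i, (blk, sig) in enumerate(zip(blocks, signatures))
            if block_signature(blk) != sig]

image = [[12, 34, 56, 78], [90, 11, 22, 33]]   # two 4-pixel blocks
sigs = [block_signature(b) for b in image]
image[1][0] ^= 0xFF                            # tamper with one pixel
print(detect_tampered_blocks(image, sigs))     # -> [1]
```

Because XOR is its own inverse and costs one machine instruction per pixel, such checks are far cheaper than cryptographic hashing, at the price of weaker security.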
Keywords: Data hiding, image authentication, signature, wet paper codes
DOI: 10.3233/FI-2009-0002
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 17-26, 2009
Authors: Cīrulis, Jānis
Article Type: Research Article
Abstract: Study of the so-called knowledge ordering of rough sets was initiated by V.W. Marek and M. Truszczynski at the end of the 1990s. Under this ordering, the rough sets of a fixed approximation space form a domain in which every set ↓ is a Boolean algebra. In the paper, an additional operation, inversion, on rough set domains is introduced, and an abstract axiomatic description of the obtained algebras of rough sets is given. It is shown that the resulting class of algebras is essentially different from those traditional in rough set theory: it is not definable, for instance, in the class of regular double Stone algebras, and conversely.
Keywords: description domain, double Stone algebra, inversion, knowledge ordering, rough set, semi-Boolean domain
DOI: 10.3233/FI-2009-0003
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 27-41, 2009
Authors: Ferilli, S. | Basile, T.M.A. | Biba, M. | Di Mauro, N. | Esposito, F.
Article Type: Research Article
Abstract: First-Order Logic formulæ are a powerful representation formalism characterized by the use of relations, which causes serious computational problems due to the phenomenon of indeterminacy (various portions of one description can be mapped in different ways onto another description). Being able to identify the correctly corresponding parts of two descriptions would help to tackle the problem: hence the need for a framework for comparison and similarity assessment. This could have many applications in Artificial Intelligence: guiding subsumption procedures and theory revision systems, implementing flexible matching, supporting instance-based learning and conceptual clustering. Unfortunately, few works on this subject are available in the literature. This paper focuses on Horn clauses, which are the basis for the Logic Programming paradigm, and proposes a novel similarity formula and evaluation criteria for identifying the description components that are more similar, and hence more likely to correspond to each other, based only on their syntactic structure. Experiments on real-world datasets prove the effectiveness of the proposal, and the efficiency of the corresponding implementation in the above tasks.
Keywords: First-Order Logic, Logic Programming, Similarity/Distance Measures
DOI: 10.3233/FI-2009-0004
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 43-66, 2009
Authors: Giesen, Joachim | Schuberth, Eva | Stojaković, Miloš
Article Type: Research Article
Abstract: We show that any comparison-based, randomized algorithm that approximates any given ranking of n items within expected Spearman's footrule distance n^{2}/ν(n) needs at least n(min{log ν(n), log n} − 6) comparisons in the worst case. This bound is tight up to a constant factor, since there exists a deterministic algorithm showing that 6n log n comparisons are always sufficient.
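Spearman's footrule distance between two rankings is the sum, over all items, of the absolute difference of their positions; a minimal sketch:

```python
def footrule_distance(sigma, tau):
    """Spearman's footrule: sum over items of |rank in sigma - rank in tau|."""
    pos_tau = {item: i for i, item in enumerate(tau)}
    return sum(abs(i - pos_tau[item]) for i, item in enumerate(sigma))

# Identical rankings are at distance 0; swapping two adjacent items costs 2.
print(footrule_distance([1, 2, 3, 4], [1, 2, 3, 4]))  # -> 0
print(footrule_distance([1, 2, 3, 4], [2, 1, 3, 4]))  # -> 2
```

The maximum value over n items is on the order of n², which is why approximation quality in the abstract is expressed as n²/ν(n).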
Keywords: algorithms, sorting, ranking, Spearman's footrule metric, Kendall's tau metric
DOI: 10.3233/FI-2009-0005
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 67-72, 2009
Authors: Gorawski, Marcin | Dyga, Adam
Article Type: Research Article
Abstract: This paper describes the idea of a multi-dimensional bucket index designed for efficient indexing of telemetric readings. This data structure answers spatio-temporal range queries concerning utility usage within a user-selected region and time interval. In addition, it can adjust to incoming data and is therefore suitable for processing data of a highly dynamic nature. The paper also presents a stochastic prediction method to estimate utility usage in the near future.
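The keywords list exponential smoothing; a one-step-ahead forecast of the next reading could be sketched in its generic textbook form (an illustration, not necessarily the paper's exact predictor):

```python
def exponential_smoothing(readings, alpha=0.5):
    """One-step-ahead forecast: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.

    alpha in (0, 1] controls how quickly old readings are forgotten.
    """
    s = readings[0]
    for x in readings[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

# Forecast the next telemetric reading from a short history.
print(exponential_smoothing([10.0, 12.0, 11.0, 13.0], alpha=0.5))  # -> 12.0
```

Larger alpha tracks a highly dynamic stream more aggressively; smaller alpha smooths out noise.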
Keywords: indexing of telemetric readings, bucket index, utility usage prediction, exponential smoothing, spatial data warehouse
DOI: 10.3233/FI-2009-0006
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 73-86, 2009
Authors: Halaš, Radomír
Article Type: Research Article
Abstract: In the paper we resolve an open problem posed by D. Ciucci in [1] on the independence of axioms for SBL⌝-algebras. The main aim is to present a solution which, in contrast to the one presented in [4], is simple, transparent and easily verifiable.
Keywords: Basic logic, residuated lattice, t-norm, axiom system, BL-algebra, SBL-algebra, SBL⌝-algebra
DOI: 10.3233/FI-2009-0007
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 87-92, 2009
Authors: Han, Yo-Sub | Salomaa, Kai | Wood, Derick
Article Type: Research Article
Abstract: We investigate the nondeterministic state complexity of basic operations for prefix-free regular languages. The nondeterministic state complexity of an operation is the number of states that are necessary and sufficient in the worst-case for a minimal nondeterministic finite-state automaton that accepts the language obtained from the operation. We establish the precise state complexity of catenation, union, intersection, Kleene star, reversal and complementation for prefix-free regular languages.
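For finite languages the defining property is easy to test directly; the sketch below (a generic illustration, unrelated to the paper's automata constructions) checks that no word is a proper prefix of another, using the fact that after sorting, every word lies adjacent to any extension of it.

```python
def is_prefix_free(words):
    """A language is prefix-free if no word is a proper prefix of another.

    After lexicographic sorting, if w is a prefix of v then every word
    between w and v also starts with w, so checking neighbours suffices.
    """
    ws = sorted(words)
    return all(not ws[i + 1].startswith(ws[i]) for i in range(len(ws) - 1))

print(is_prefix_free({"ab", "ba", "bb"}))  # -> True
print(is_prefix_free({"a", "ab"}))         # -> False ("a" is a prefix of "ab")
```

Prefix-freeness is exactly the property that makes a code instantaneously decodable, which is one reason these languages have distinctive state-complexity behaviour.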
Keywords: prefix-freeness, regular languages, nondeterministic state complexity, descriptional complexity
DOI: 10.3233/FI-2009-0008
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 93-106, 2009
Authors: Saha, Sriparna | Bandyopadhyay, Sanghamitra
Article Type: Research Article
Abstract: In this paper, a novel point symmetry based pattern classifier (PSC) is proposed. A recently developed point symmetry based distance is utilized to determine the amount of point symmetry of a particular test pattern with respect to a class prototype. Kd-tree based nearest neighbor search is used to reduce the complexity of point symmetry distance computation. The proposed point symmetry based classifier is well suited for classifying data sets having point symmetric classes, irrespective of convexity, overlap or size. In order to classify data sets having the line symmetry property, a line symmetry based classifier (LSC) along the lines of PSC is thereafter proposed. To measure the total amount of line symmetry of a particular point in a class, a new definition of line symmetry based distance is also provided. The proposed LSC preserves the advantages of PSC. The performance of PSC and LSC is demonstrated in classifying fourteen artificial and real-life data sets of varying complexity. For the purpose of comparison, the k-NN classifier and the well-known support vector machine (SVM) based classifiers are executed on the same data sets. Statistical analysis (ANOVA) is also performed to compare the performance of these classification techniques.
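The point-symmetry idea can be sketched minimally (with brute-force nearest-neighbour search standing in for the paper's Kd-tree, and hypothetical names): reflect the test point through the class prototype and measure how close the reflection lands to the class.

```python
import math

def point_symmetry_distance(x, centre, points):
    """Distance from x's mirror image about `centre` to its nearest neighbour
    among `points`. Small values mean x has a near-symmetric counterpart."""
    mirror = tuple(2 * c - xi for xi, c in zip(x, centre))
    return min(math.dist(mirror, p) for p in points)

# A cluster that is point-symmetric about the origin.
cluster = [(0.0, 1.0), (0.0, -1.0), (1.0, 0.0), (-1.0, 0.0)]
print(point_symmetry_distance((0.0, 1.0), (0.0, 0.0), cluster))  # -> 0.0
```

Replacing the linear scan with a Kd-tree query turns the per-point cost from O(n) into roughly O(log n) for low-dimensional data, which is the speed-up the abstract refers to.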
Keywords: Pattern Classification, Point Symmetry, Kd-tree, Symmetry based distance, Nearest Neighbor Rule, Line Symmetry
DOI: 10.3233/FI-2009-0009
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 107-123, 2009
Authors: Wang, Chung-Chuan | Chang, Chin-Chen | Jan, Jinn-Ke
Article Type: Research Article
Abstract: In this paper, watermarking authentication schemes for binary images are investigated. Dual-Pair Blocks (DPBs), 2 × 2 blocks with two black and two white pixels, are employed as the basic units of information embedding. With six distinctive pixel patterns, a DPB can be represented by a senary value as the embedding information. Variable-length coding is adopted to enhance the information embedding capacity of the DPB. The secret key of the watermarking is a pseudo-random function with a seed value, which selects DPBs for information embedding. Based on this framework, two novel watermarking schemes are presented. The first, named dual-pair block authentication (DPBA), is an irreversible scheme featuring high information embedding capacity and good visual quality. The second, named the senary Huffman compression authentication (SHCA) scheme, supports reversible watermarking, unique in the literature. A matching-pair technique is further employed in both schemes to reduce the number of altered bits after watermarking for better visual quality. Experimental results show that the DPBA scheme has the highest information embedding capacity among all compared schemes. The average compression ratio of the reversible SHCA scheme is up to 38.96%. The average distortion rates per accessed DPB for the proposed DPBA and SHCA schemes are 0.485 and 0.46, respectively.
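The count of six patterns can be checked directly: a 2 × 2 block with exactly two black pixels is determined by choosing which 2 of its 4 positions are black, giving C(4, 2) = 6 possibilities, i.e. one base-6 digit per block (the digit assignment below is illustrative, not the paper's coding).

```python
from itertools import combinations

# Enumerate every 2x2 block (flattened to 4 positions) with exactly two
# black (1) and two white (0) pixels.
patterns = []
for black in combinations(range(4), 2):
    block = tuple(1 if i in black else 0 for i in range(4))
    patterns.append(block)

print(len(patterns))  # -> 6

# Illustrative senary digit map: each pattern carries one base-6 digit.
senary_value = {p: digit for digit, p in enumerate(patterns)}
```

Since log2(6) ≈ 2.58 bits per block, senary coding extracts noticeably more capacity than a naive 2-bit assignment would.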
Keywords: visual perception, watermarking authentication, DPB, DPBA, SHCA
DOI: 10.3233/FI-2009-0010
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 125-155, 2009
Authors: Wang, Xia | Zhang, Wenxiu
Article Type: Research Article
Abstract: As an effective tool for data analysis and knowledge processing, the theory of concept lattices has been studied extensively and applied in various fields. In order to discover useful knowledge, one often ignores some attributes according to a particular purpose and merely considers subcontexts of a rather complex context. In this paper, we investigate the theory of concept lattices of subcontexts in depth. An approach to constructing the concept lattice of a context from the concept lattices of its subcontexts is first presented. Then the concept lattices induced by all subcontexts of the context are considered as a set, and an order relation is introduced on this set. It is proved that the set together with the order relation is a complete lattice. Finally, the top and bottom elements of this complete lattice are obtained.
Keywords: concept lattice, subcontext, congruence, closure operator, order relation
DOI: 10.3233/FI-2009-0011
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 157-169, 2009
Authors: Yao, Chih-Chia | Yu, Pao-Ta | Hung, Ruo-Wei
Article Type: Research Article
Abstract: The major problem of SVMs is the dependence of the nonlinear separating surface on the entire dataset, which creates unwieldy storage problems. This paper proposes a novel design algorithm, called the extractive support vector algorithm, which provides improved learning speed and vastly improved performance. Instead of learning and training with all input patterns, the proposed algorithm selects support vectors from the input patterns and uses these support vectors as the training patterns. Experimental results reveal that the proposed algorithm provides near-optimal solutions and outperforms existing design algorithms. In addition, a framework based on the extractive support vector algorithm is proposed for image restoration. In this framework, input patterns are classified by three filters: a weighted order statistics filter, an alpha-trimmed mean filter and an identity filter. The proposed filter achieves three objectives: noise attenuation, chromaticity retention, and preservation of edges and details. Extensive simulation results illustrate that the proposed filter not only achieves these three objectives but also possesses robust and adaptive capabilities, outperforming other filtering techniques.
Keywords: support vector machines, unwieldy storage, image restoration, weighted order statistics filter, alpha-trimmed mean filter
DOI: 10.3233/FI-2009-0012
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 171-190, 2009
Authors: Zhang, Leyou | Hu, Yupu | Wu, Qing
Article Type: Research Article
Abstract: In this paper, a new construction of a hierarchical ID-based signature (HIBS) scheme is proposed. The new scheme has some advantages over the available schemes: the private key size shrinks as the identity depth increases, and the signature size is constant, consisting of three group elements. Furthermore, under the h-CDH assumption, our scheme is provably secure in the standard model. This assumption is more natural than many of the hardness assumptions recently introduced for HIBS in the standard model.
Keywords: HIBS, standard model, h-CDH problem, provably secure
DOI: 10.3233/FI-2009-0013
Citation: Fundamenta Informaticae, vol. 90, no. 1-2, pp. 191-201, 2009
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
[email protected]
For editorial issues, like the status of your submitted paper or proposals, write to [email protected]
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
[email protected]
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office [email protected]
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
[email protected]
If you need help with publishing or have any suggestions, please write to: [email protected]