Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute:
- solutions by mathematical methods of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Article Type: Other
DOI: 10.3233/FI-2009-120
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. I-I, 2009
Authors: Blajdo, Piotr | Hippe, Zdzislaw S. | Mroczek, Teresa | Grzymala-Busse, Jerzy W. | Knap, Maksymilian | Piatek, Lukasz
Article Type: Research Article
Abstract: We present results of extensive experiments performed on nine data sets with numerical attributes using six promising discretization methods. For every method and every data set, 30 experiments of ten-fold cross validation were conducted, and then means and sample standard deviations were computed. Our results show that for a specific data set it is essential to choose an appropriate discretization method, since the performance of discretization methods differs significantly. However, in general, among all of these discretization methods there is no statistically significant worst or best method. Thus, in practice, for a given data set the best discretization method should be selected individually.
Keywords: Rough sets, discretization, cluster analysis, merging intervals, ten-fold cross validation, test on the difference between means, F-test, sign test
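The evaluation protocol described in the abstract (30 repetitions of ten-fold cross validation, summarized by means and sample standard deviations) can be sketched as follows. Here `evaluate_fold` is a hypothetical stand-in for training and testing on one fold with a given discretization method; it is not part of the paper:

```python
from statistics import mean, stdev

def repeated_cv_summary(evaluate_fold, n_repeats=30, n_folds=10):
    """Repeat n_folds-fold cross validation n_repeats times and report
    the mean and sample standard deviation of the per-run accuracies."""
    run_accuracies = []
    for run in range(n_repeats):
        fold_accuracies = [evaluate_fold(run, fold) for fold in range(n_folds)]
        run_accuracies.append(mean(fold_accuracies))
    # stdev() divides by n - 1, i.e. it is the *sample* standard deviation
    return mean(run_accuracies), stdev(run_accuracies)
```

With these summary statistics in hand, two methods can then be compared using a test on the difference between means, as the keywords suggest.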
DOI: 10.3233/FI-2009-121
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 121-131, 2009
Authors: Chan, Chien-Chung | Tzeng, Gwo-Hshiung
Article Type: Research Article
Abstract: Dominance-based rough set theory, introduced by Greco et al., is an extension of Pawlak's classical rough set theory that uses dominance relations in place of equivalence relations for approximating sets of preference-ordered decision classes satisfying upward and downward union properties. This paper introduces the concept of indexed blocks for representing dominance-based approximation spaces. Indexed blocks are sets of objects indexed by pairs of decision values. In our study, inconsistent information is represented by exclusive neighborhoods of indexed blocks, which are used to define approximations of decision classes. It turns out that a set of indexed blocks with exclusive neighborhoods forms a partition on the universe of objects. Sequential rules for updating indexed blocks incrementally are considered and illustrated with examples.
Keywords: Rough sets, Dominance-based rough sets, Multiple criteria decision analysis (MCDA), Classification, Sorting, Indexed blocks, Granular computing
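For context, the dominance-based approximations the paper builds on can be illustrated with a minimal sketch (my own illustrative code, not the indexed-block construction itself): an object belongs to the lower approximation of an upward union of classes when every object dominating it also lies in that union.

```python
def dominates(x, y):
    """x dominates y: x is at least as good as y on every criterion."""
    return all(xi >= yi for xi, yi in zip(x, y))

def lower_upward_union(objects, labels, t):
    """Lower approximation of the upward union Cl_t^>= (classes >= t):
    objects whose set of dominating objects lies entirely in the union."""
    union = {i for i, label in enumerate(labels) if label >= t}
    lower = set()
    for i, x in enumerate(objects):
        dominating = {j for j, y in enumerate(objects) if dominates(y, x)}
        if dominating <= union:
            lower.add(i)
    return lower
```

An object evaluated identically to one from a worse class drops out of the lower approximation; this is the kind of inconsistency the paper encodes via exclusive neighborhoods of indexed blocks.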
DOI: 10.3233/FI-2009-122
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 133-146, 2009
Authors: Ciucci, Davide
Article Type: Research Article
Abstract: Generalized approximation algebras and approximation algebras are defined as a theoretical counterpart of all the situations where a "lower" and an "upper" mapping are used. Some models of these structures are discussed, among them rough sets, fuzzy rough sets and possibility theory. The generalized approximation framework and the approximation framework are also introduced as an abstraction of all those cases where several approximations are possible on the same element. Also in this case some examples are given.
DOI: 10.3233/FI-2009-123
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 147-161, 2009
Authors: Dembczyński, Krzysztof | Kotłowski, Wojciech | Słowiński, Roman
Article Type: Research Article
Abstract: Ordinal classification problems with monotonicity constraints (also referred to as multicriteria classification problems) often appear in real-life applications; however, they are considered relatively less frequently in theoretical studies than regular classification problems. We introduce a rule induction algorithm based on the statistical learning approach that is tailored for this type of problem. The algorithm first monotonizes the dataset (excludes strongly inconsistent objects) using the Stochastic Dominance-based Rough Set Approach, and then uses the forward stagewise additive modeling framework to generate a monotone rule ensemble. Experimental results indicate that taking into account knowledge about order and monotonicity constraints in the classifier can improve the prediction accuracy.
Keywords: ordinal classification, monotonicity constraints, rule ensembles, forward stagewise additive modeling, boosting, dominance-based rough set approach
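A minimal sketch of forward stagewise additive modeling with single-condition rules of the form `x[j] >= t`. Squared loss and a shrinkage factor are my own simplifying assumptions; the paper's algorithm additionally enforces monotonicity through the rough set preprocessing step:

```python
def fit_rule_ensemble(X, y, n_rules=200, shrinkage=0.1):
    """Greedily add rules 'if x[j] >= t then add v', each fit to residuals."""
    n = len(y)
    pred = [0.0] * n
    ensemble = []
    for _ in range(n_rules):
        residual = [y[i] - pred[i] for i in range(n)]
        best = None  # (sse, j, t, v, covered)
        for j in range(len(X[0])):
            for t in sorted({x[j] for x in X}):
                covered = {i for i in range(n) if X[i][j] >= t}
                v = sum(residual[i] for i in covered) / len(covered)
                sse = sum((residual[i] - (v if i in covered else 0.0)) ** 2
                          for i in range(n))
                if best is None or sse < best[0]:
                    best = (sse, j, t, v, covered)
        _, j, t, v, covered = best
        v *= shrinkage  # shrink each step, as in boosting
        ensemble.append((j, t, v))
        for i in covered:
            pred[i] += v
    return ensemble

def predict(ensemble, x):
    """Ensemble prediction: sum of the values of all rules covering x."""
    return sum(v for j, t, v in ensemble if x[j] >= t)
```

Each iteration fits one new rule to the current residuals and leaves all previously added rules untouched, which is what distinguishes the stagewise scheme from refitting the whole additive model.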
DOI: 10.3233/FI-2009-124
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 163-178, 2009
Authors: Gong, Xun | Wang, Guoyin | Xiong, Lili
Article Type: Research Article
Abstract: Human beings are born with a natural capacity for recovering shape from merely one image. However, it remains a challenging task for current techniques to give a computer such an ability. To simulate the modeling procedure of the human visual system, a Ternary Deformation Framework (TDF) is proposed to reconstruct a realistic 3D face from one 2D frontal facial image, with prior knowledge regarding facial shape learnt from a 3D face data set. Based upon the reconstructed 3D face, a novel method via linear regression is then proposed to estimate that person's pose in another image with pose variations. Simulation results show that TDF outperforms conventional methods with respect to modeling precision and that reconstructions on real photographs achieve favorable visual effects. Moreover, the comparison results validate the effectiveness of using the 3D face in the proposed pose estimation method.
Keywords: face reconstruction, pose estimation, deformation, linear regression
DOI: 10.3233/FI-2009-125
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 179-195, 2009
Authors: Janicki, Ryszard
Article Type: Research Article
Abstract: A systematic procedure for deriving weakly ordered non-numerical rankings from given sets of data is proposed and analysed. The data are assumed to be collected using the Pairwise Comparisons paradigm. The concept of a partially ordered approximation of an arbitrary binary relation is formally defined and some solutions are proposed. The problem of testing, the importance of indifference, and the power of weak order extensions are also discussed.
DOI: 10.3233/FI-2009-126
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 197-217, 2009
Authors: Li, Huaxiong | Yao, Yiyu | Zhou, Xianzhong | Huang, Bing
Article Type: Research Article
Abstract: A two-phase learning strategy for rule induction from incomplete data is proposed, and a new form of rules is introduced so that a user can easily identify attributes with or without missing values in a rule. Two levels of measurement are assigned to a rule. An algorithm for two-phase rule induction is presented. Instead of filling in missing attribute values before or during the process of rule induction, we divide rule induction into two phases. In the first phase, rules and partial rules are induced based on non-missing values. In the second phase, partial rules are modified and refined by the imputation of some missing values. Such rules truthfully reflect the knowledge embedded in the incomplete data. The study not only presents a new view of rule induction from incomplete data, but also provides a practical solution. Experiments validate the effectiveness of the proposed method.
Keywords: missing attribute values, filled-in values, two-phase rule induction
DOI: 10.3233/FI-2009-127
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 219-232, 2009
Authors: Lingras, Pawan | Chen, Min | Miao, Duoqian
Article Type: Research Article
Abstract: Most business decisions are based on cost and benefit considerations. Data mining techniques that make it possible for businesses to incorporate financial considerations will be more meaningful to decision makers. The decision-theoretic framework has been helpful in providing a better understanding of classification models. This study describes a semi-supervised decision-theoretic rough set model. The model is based on an extension of the decision-theoretic model proposed by Yao. The proposal is used to model financial cost/benefit scenarios for a promotional campaign in a real-world retail store.
Keywords: Rough sets, Rough approximation, Probability, Decision theory, Cost/benefit analysis, k-means clustering algorithm
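Yao's decision-theoretic rough set model, which the abstract extends, derives its acceptance and rejection thresholds from six losses λ (one per action in {accept, defer, reject} and state in {positive, negative}). A sketch under the usual ordering assumptions (λ_PP ≤ λ_BP < λ_NP and λ_NN ≤ λ_BN < λ_PN):

```python
def dtrs_thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
    """Thresholds (alpha, beta) of the decision-theoretic rough set model.
    l_xy: loss of taking action x (P=accept, B=defer, N=reject)
    when the object's true state is y (P=positive, N=negative)."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def three_way_decide(p, alpha, beta):
    """Region assignment from the conditional probability p = Pr(C | [x])."""
    if p >= alpha:
        return "positive"   # accept
    if p <= beta:
        return "negative"   # reject
    return "boundary"       # defer the decision
```

In a cost/benefit campaign scenario such as the one studied in the paper, the losses would be the misclassification and deferral costs, so the thresholds follow directly from the financial figures rather than being hand-tuned.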
DOI: 10.3233/FI-2009-128
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 233-244, 2009
Authors: Liu, Dun | Li, Tianrui | Ruan, Da | Zou, Weili
Article Type: Research Article
Abstract: Knowledge in an information system evolves with its dynamic environment. A new concept of interesting knowledge, based on both accuracy and coverage, is defined in this paper for dynamic information systems. An incremental model and approach, as well as an algorithm for inducing interesting knowledge, are proposed for the case where the object set varies over time. A case study validates the feasibility of the proposed method.
Keywords: Rough sets, interesting knowledge, accuracy, coverage, dynamic information systems, data mining
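The two measures named in the abstract are standard in rough set rule evaluation: for a rule C → D, accuracy is |C ∩ D| / |C| and coverage is |C ∩ D| / |D|. A minimal sketch (the paper's incremental update machinery is not reproduced here):

```python
def accuracy_and_coverage(rows, condition, decision):
    """accuracy = |C ∩ D| / |C|, coverage = |C ∩ D| / |D| for a rule
    'if condition then decision' over a table of objects."""
    c = {i for i, row in enumerate(rows) if condition(row)}
    d = {i for i, row in enumerate(rows) if decision(row)}
    both = c & d
    return len(both) / len(c), len(both) / len(d)
```

Accuracy measures how reliable the rule is on the objects it matches; coverage measures how much of the decision class the rule explains.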
DOI: 10.3233/FI-2009-129
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 245-260, 2009
Authors: Song, Jing | Li, Tianrui | Ruan, Da
Article Type: Research Article
Abstract: Decision trees are one of the most popular data-mining techniques for knowledge discovery. Many approaches for the induction of decision trees deal with continuous data and missing values in information systems; however, they do not perform well in real situations. This paper presents a new algorithm, decision tree construction based on the cloud transform and rough set theory under the characteristic relation (CR), for mining classification knowledge from a given data set. The continuous data is transformed into discrete qualitative concepts via the cloud transform, and then the attribute with the smallest weighted mean roughness under the characteristic relation is selected as the current splitting node. Experimental evaluation shows that the decision trees constructed by the CR algorithm tend to have a simpler structure, much higher classification accuracy and more understandable rules than those built by C5.0 in most cases.
Keywords: Rough set theory, cloud transform, decision trees, weighted mean roughness, characteristic relation
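The splitting criterion named in the abstract can be illustrated for the complete-data case (a simplification: the paper works under the characteristic relation, which also handles missing values, and uses a weighted mean). The roughness of a decision class is 1 − |lower| / |upper| with respect to the partition induced by a candidate attribute, and the attribute minimizing the mean over classes is chosen as the splitting node:

```python
from collections import defaultdict

def mean_roughness(rows, attr, decision):
    """Mean roughness of the decision classes w.r.t. the equivalence
    relation 'same value of attr' (complete data only)."""
    blocks = defaultdict(set)   # equivalence classes of attr
    classes = defaultdict(set)  # decision classes
    for i, row in enumerate(rows):
        blocks[row[attr]].add(i)
        classes[row[decision]].add(i)
    roughness = []
    for d in classes.values():
        lower = sum(len(b) for b in blocks.values() if b <= d)
        upper = sum(len(b) for b in blocks.values() if b & d)
        roughness.append(1 - lower / upper)
    return sum(roughness) / len(roughness)
```

A value of 0 means every decision class is exactly definable by the attribute; a value of 1 means no block of the attribute is contained in any class.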
DOI: 10.3233/FI-2009-130
Citation: Fundamenta Informaticae, vol. 94, no. 2, pp. 261-273, 2009