Fundamenta Informaticae is an international journal publishing original research results in all areas of theoretical computer science. Papers are encouraged that contribute:
- solutions by mathematical methods of problems emerging in computer science
- solutions of mathematical problems inspired by computer science.
Topics of interest include (but are not restricted to): theory of computing, complexity theory, algorithms and data structures, computational aspects of combinatorics and graph theory, programming language theory, theoretical aspects of programming languages, computer-aided verification, computer science logic, database theory, logic programming, automated deduction, formal languages and automata theory, concurrency and distributed computing, cryptography and security, theoretical issues in artificial intelligence, machine learning, pattern recognition, algorithmic game theory, bioinformatics and computational biology, quantum computing, probabilistic methods, and algebraic and categorical methods.
Article Type: Other
Abstract: This very special volume of Fundamenta Informaticae (vol. 127, no. 1–4) is dedicated to Andrzej Skowron on the occasion of his 70th birthday. The contributions are on an invitational basis, but they have been reviewed according to the usual standards of the journal. The editors want to thank all the contributors and reviewers for their great work. Without it, this volume would be much less special. It is very hard, if even possible, to describe Andrzej Skowron in a finite collection of words. He is such a unique personality and scientist. To get some understanding of what he is like, it may help to read the accounts included in this preface. These accounts are provided by persons who have interacted with Andrzej for years on both professional and personal grounds: Roman Świniarski with family, Janusz Kacprzyk, Damian Niwiński, and Stanisław Matwin. When we started to circulate the idea of this special volume among Andrzej's extended scientific family, we met with an enthusiastic response. So enthusiastic, in fact, that we were initially a little overwhelmed. Everybody wanted to be on board. We managed to convince several groups of researchers to join forces and write one comprehensive, yet compact article instead of several. In this way, it was possible to fit the material into one thirty-six-piece volume. The thirty-six articles that make up this special volume of Fundamenta Informaticae span a very wide range of topics. They reflect Andrzej Skowron's activities as a researcher and a scholar as well as his influence on a broad scientific community. In order to make this volume more approachable, we have ordered the papers with respect to the general areas they represent. To do that, we used a methodology that has quite a bit to do with the results of one of the research projects Andrzej was recently involved in: namely, we manually performed a semantic clustering of our contribution pool.
As a result, the papers have been organized into four disjoint clusters (thematic groups) that we briefly introduce below. The first cluster gathers articles that correspond to some fundamental directions in recent and past research of Andrzej Skowron. The reader will find in this cluster papers representing such areas as: foundations of rough sets, logical aspects of both rough and related models of computation, foundational issues relating to logical aspects of non-classical computational systems, formal and computational aspects of inference systems, and nature-inspired computational systems. In this cluster we have contributions by: Mihir K. Chakraborty and Mohua Banerjee; Anna Gomolińska and Marcin Wolski; Ewa Orłowska and Ivo Düntsch; Yiyu Yao; Lech Polkowski and Maria Semeniuk-Polkowska; Ludwik Czaja; Grzegorz Rozenberg, Gheorghe Paun, and Mario J. Perez-Jimenez; Alberto Pettorossi, Fabio Fioravanti, Maurizio Proietti, and Valerio Senni; Andrzej Szałas and Patrick Doherty. The second cluster contains papers that describe research results in topics associated with discovering, representing, and making use of knowledge learned from data. In particular, several of the approaches described in these papers make use of reducts and decision rules. The contributions made by Andrzej Skowron to methods and algorithms for representation, reduction, and simplification of information retrieved from data are instrumental here. There are also papers that deal with approximations and approximation spaces, an area pioneered by Andrzej. Members of this cluster are papers by: Mikhail Moshkov, Talha Amin, Igor Chikalov, and Beata Zielosko; Roman Słowiński, Salvatore Greco, and Izabela Szczęch; Jerzy Grzymała-Busse and Patrick G.
Clark; Wojciech Ziarko and Xuguang Chen; Shusaku Tsumoto and Shoji Hirano; Zbigniew Raś and Hakim Touati; Hui Wang and Ivo Düntsch; Zbigniew Suraj and Krzysztof Pancerz; Jan Komorowski, Marcin Kruczyk, Nicholas Baltzer, Jakub Mieczkowski, Michał Dramiński, and Jacek Koronacki. The third group of contributions relates to another large area of research on which Andrzej Skowron left his mark. The papers represent studies on fundamentals and applications of the granular approach to knowledge-based systems, as well as investigations into the underlying notions of closeness, similarity, and nearness. They also address challenges associated with the construction and usage of granular systems – in particular multi-layered, hierarchical ones – in knowledge discovery and decision support. Papers by the following authors make up this group: Sankar K. Pal, Jayanta Kumar Pal, and Shubhra Sankar Ray; Marzena Kryszkiewicz; Bożena Kostek and Andrzej Kaczmarek; Alicja Wakulicz-Deja, Agnieszka Nowak-Brzezińska, and Małgorzata Przybyła-Kasperek; James Peters and Sheela Ramanna; Hung Son Nguyen, Sinh Hoa Nguyen, Tuan Trung Nguyen, and Marcin Szczuka; Guoyin Wang, Yuchao Liu, Deyi Li, and Wen He; Witold Pedrycz; Tsau Young Lin, Yong Liu, and Wenliang Huang. The fourth and final group contains nine papers that represent a somewhat wider range of topics. Among them are papers that deal with data processing in general, including research related to database technology as well as search techniques. There are papers in this cluster that deal with data and knowledge representation and navigation. Various aspects of data mining are also described, including those that make use of the multiagent approach as well as methods based on processing of visual information.
In this cluster the reader will find contributions by: Jarosław Stepaniuk, Maciej Kopczyński, and Tomasz Grzes; Dominik Ślęzak, Piotr Synak, Arkadiusz Wojna, and Jakub Wróblewski; Henryk Rybiński and Jacek Lewandowski; Jiming Liu, Hao Lan Zhang, and Yanchun Zhang; Jan G. Bazan, Andrzej Jankowski, and Sylwia Buregwa-Czuma; Wojciech Froelich, Rafał Deja, and Grażyna Deja; Piotr Wasilewski and Adam Krasuski; Ning Zhong, Linchan Qin, Shengfu Lu, and Mi Li; Andrzej Czyżewski and Karol Lisowski. The editors of this special volume would like to wish a Happy Birthday to Andrzej and hope that he will like this little gift.
Dominik Ślęzak, Hung Son Nguyen, Marcin Szczuka
October 2013
DOI: 10.3233/FI-2013-891
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. xix-xxviii, 2013
Authors: Chakraborty, Mihir K. | Banerjee, Mohua
Article Type: Research Article
Abstract: The article analyses prevalent definitions of rough sets from the foundational and mathematical perspectives. In particular, the issue of language dependency in the definitions, and implications of the definitions on the issue of vagueness are discussed in detail.
Keywords: Rough sets, Vagueness
DOI: 10.3233/FI-2013-892
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 1-15, 2013
Authors: Wolski, Marcin | Gomolińska, Anna
Article Type: Research Article
Abstract: The paper addresses the problem of concept formation (knowledge granulation) in the setting of rough set theory. The original version of rough set theory implicitly accommodates a number of well-established philosophical assumptions about concept formation as presented by A. Rand. However, as suggested by S. Hawking and L. Mlodinow, one also has to consider the dynamics of the universe of objects and the different scales at which concepts may be formed. Both of these aspects have already been discussed separately in rough set theory. Different forms of dynamics have been addressed explicitly – especially the case of extending the universe by new objects; in contrast, different scales of description have been addressed implicitly, mainly within the Granular Computing (GrC) paradigm. Following the example of Life, the famous game invented by J. Conway, we describe the corresponding dynamics in Pawlak information systems using a GrC-driven methodology. Having discussed dynamics, we address the problem of concept formation at zoom-out scales of description. To this end, we build Scott systems as information systems describing the universe at a coarser scale than the original scale of Pawlak systems. We regard these systems as a special type of classifications, which have already been studied in the context of rough sets by A. Skowron et al.
Keywords: rough set, granular computing, pre-bilattice, classification, Scott system
DOI: 10.3233/FI-2013-893
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 17-33, 2013
Authors: Düntsch, Ivo | Orłowska, Ewa
Article Type: Research Article
Abstract: Rough relation algebras are a generalization of relation algebras such that the underlying lattice structure is a regular double Stone algebra. Standard models are algebras of rough relations. A discrete duality is a relationship between classes of algebras and classes of relational systems (frames). In this paper we prove a discrete duality for a class of rough relation algebras and a class of frames.
DOI: 10.3233/FI-2013-894
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 35-47, 2013
Authors: Yao, Yiyu
Article Type: Research Article
Abstract: In rough set theory, one typically considers pairs of dual entities such as a pair of lower and upper approximations, a pair of indiscernibility and discernibility relations, a pair of sets of core and non-useful attributes, and several more. By adopting a framework known as hypercubes of duality, of which the square of opposition is a special case, this paper investigates the role of duality for interpreting fundamental concepts in rough set analysis. The objective is not to introduce new concepts, but to revisit the existing concepts by casting them in a common framework so that we can gain deeper insight into these concepts and their relationships. We demonstrate that these concepts can, in fact, be defined and explained in a common framework, although they at first appear to be very different and have been studied in somewhat isolated ways.
Keywords: Core attributes, useful and non-useful attributes, duality, hypercubes of duality, indiscernibility and discernibility relations and matrices, lower and upper approximations, square of opposition
DOI: 10.3233/FI-2013-895
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 49-64, 2013
Authors: Semeniuk-Polkowska, Maria | Polkowski, Lech
Article Type: Research Article
Abstract: The notion of extensionality means, in the plain sense, that properties of complex things can be expressed by means of their simple components; in particular, that two things are identical if and only if certain of their components or features are identical. Examples are the Leibniz Identitas Indiscernibilium Principle: two things are identical if each operator applicable to them yields the same result on either; or extensionality for sets, viz., two sets are identical if and only if they consist of identical elements. In mereology, this property is expressed by the statement that two things are identical if their parts are the same. However, building a thing from parts may proceed in various ways and this, unexpectedly, yields various extensionality principles. Also, building a thing may lead to things identical with respect to parts but distinct with respect, e.g., to usage. We address the question of extensionality for artifacts, i.e., things produced in some assembling or creative process in order to satisfy a chosen purpose of usage, and we formulate the extensionality principle for artifacts, which takes into account the assembling process and requires, for identity of two artifacts, that the assembling graphs for the two be isomorphic in a specified sense. In parallel, we consider the design process and designed things, showing the canonical correspondence between abstracta as design products and concreta as artifacts. In the end, we discuss approximate artifacts as a result of assembling with spare parts, an analysis that involves rough mereology.
Keywords: Mereology, Rough Mereology, Artifacts, Extensionality Property
DOI: 10.3233/FI-2013-896
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 65-80, 2013
Authors: Czaja, Ludwik
Article Type: Research Article
Abstract: An information system of net structures, based on their calculus (a distributive lattice), is introduced and, in this context, the basic notions of rough set theory are re-formulated and exemplified.
DOI: 10.3233/FI-2013-897
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 81-97, 2013
Authors: Păun, Gheorghe | Pérez-Jiménez, Mario J. | Rozenberg, Grzegorz
Article Type: Research Article
Abstract: This paper continues an investigation into bridging two research areas concerned with natural computing: membrane computing and reaction systems. More specifically, the paper considers a transfer of two assumptions/axioms of reaction systems, non-permanency and the threshold assumption, into the framework of membrane computing. It is proved that: (1) spiking neural P systems with non-permanency of spikes assumption characterize the semilinear sets of numbers, and (2) symport/antiport P systems with threshold assumption (translated as ω multiplicity of objects) can solve SAT in polynomial time. Also, several open research problems are stated.
Keywords: Membrane computing, reaction system, semilinear set, hypercomputation, SAT
DOI: 10.3233/FI-2013-898
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 99-114, 2013
Authors: Fioravanti, Fabio | Pettorossi, Alberto | Proietti, Maurizio | Senni, Valerio
Article Type: Research Article
Abstract: In this paper we present an overview of the unfold/fold proof method, a method for proving theorems about programs, based on program transformation. As a metalanguage for specifying programs and program properties we adopt constraint logic programming (CLP), and we present a set of transformation rules (including the familiar unfolding and folding rules) which preserve the semantics of CLP programs. Then, we show how program transformation strategies can be used, similarly to theorem proving tactics, for guiding the application of the transformation rules and inferring the properties to be proved. We work out three examples: (i) the proof of predicate equivalences, applied to the verification of equality between CCS processes, (ii) the proof of first order formulas via an extension of the quantifier elimination method, and (iii) the proof of temporal properties of infinite state concurrent systems, by using a transformation strategy that performs program specialization.
Keywords: Automated theorem proving, program transformation, constraint logic programming, program specialization, bisimilarity, quantifier elimination, temporal logics
DOI: 10.3233/FI-2013-899
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 115-134, 2013
Authors: Doherty, Patrick | Szałas, Andrzej
Article Type: Research Article
Abstract: This paper focuses on approximate reasoning based on the use of approximation spaces. Approximation spaces and the approximated relations induced by them are a generalization of the rough set-based approximations of Pawlak. Approximation spaces are used to define neighborhoods around individuals and rough inclusion functions. These in turn are used to define approximate sets and relations. In any of these approaches, one would like to embed such relations in an appropriate logical theory which can be used as a reasoning engine for specific applications with specific constraints. We propose a framework which permits a formal study of the relationship between properties of approximations and properties of approximation spaces. Using ideas from correspondence theory, we develop an analogous framework for approximation spaces. We also show that this framework can be strongly supported by automated techniques for quantifier elimination.
Keywords: approximate reasoning, rough sets, approximation spaces, quantifier elimination, knowledge representation
DOI: 10.3233/FI-2013-900
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 135-149, 2013
Authors: Amin, Talha | Chikalov, Igor | Moshkov, Mikhail | Zielosko, Beata
Article Type: Research Article
Abstract: Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification – exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
Keywords: Dynamic programming, decision rules, classifiers
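The sequential "length+coverage" optimization mentioned in the abstract can be illustrated as a lexicographic comparison over rules. This is a minimal sketch under an assumed rule representation (a dict with `conditions` and `coverage` fields), not the paper's dynamic-programming algorithm:

```python
# Hedged sketch: selecting the best rule under the "length+coverage" order,
# i.e. minimize rule length first, then maximize coverage among ties.
# The rule representation here is an illustrative assumption.

def rule_length(rule):
    # Length = number of conditions on the rule's left-hand side.
    return len(rule["conditions"])

def best_rule(rules):
    # Lexicographic key: smaller length wins; for equal lengths,
    # larger coverage wins (hence the negation).
    return min(rules, key=lambda r: (rule_length(r), -r["coverage"]))

rules = [
    {"conditions": ["a=1", "b=2"], "coverage": 10},
    {"conditions": ["c=0"], "coverage": 5},
    {"conditions": ["a=1"], "coverage": 3},
]
# Among the two single-condition rules, the one covering 5 objects wins.
print(best_rule(rules)["conditions"])
```

The "coverage+length" order would simply swap the two key components.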
DOI: 10.3233/FI-2013-901
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 151-160, 2013
Authors: Greco, Salvatore | Słowiński, Roman | Szczęch, Izabela
Article Type: Research Article
Abstract: The paper focuses on Bayesian confirmation measures used for evaluation of rules induced from data. To distinguish between the many confirmation measures, their properties are analyzed. The article considers a group of symmetry properties. We demonstrate that the symmetry properties proposed in the literature focus on extreme cases corresponding to entailment or refutation of the rule's conclusion by its premise, forgetting intermediate cases. We conduct a thorough analysis of the symmetries, bearing in mind that confirmation should express how much more probable the rule's hypothesis is when the premise is present rather than when the negation of the premise is present. As a result, we point out which symmetries are desired for Bayesian confirmation measures. Next, we analyze a set of popular confirmation measures with respect to the symmetry properties and other valuable properties, namely monotonicity M, Ex1 and weak Ex1, logicality L and weak L. Our work points out the two measures that are the most meaningful with regard to the considered properties.
Keywords: Bayesian confirmation measures, symmetry properties, rule evaluation
DOI: 10.3233/FI-2013-902
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 161-176, 2013
Authors: Clark, Patrick G. | Grzymala-Busse, Jerzy W.
Article Type: Research Article
Abstract: In this paper we present results of experiments on 166 incomplete data sets using three probabilistic approximations: lower, middle, and upper. Two interpretations of missing attribute values were used: lost and “do not care” conditions. Our main objective was to select the best combination of an approximation and a missing attribute interpretation. We conclude that the best approach depends on the data set. The additional objective of our research was to study the average number of distinct probabilities associated with characteristic sets for all concepts of the data set. This number is much larger for data sets with “do not care” conditions than for data sets with lost values. Therefore, for data sets with “do not care” conditions the number of probabilistic approximations is also larger.
Keywords: characteristic sets, singleton, subset and concept approximations, lower, middle and upper approximations, incomplete data
DOI: 10.3233/FI-2013-903
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 177-191, 2013
Authors: Ziarko, Wojciech | Chen, Xugunag
Article Type: Research Article
Abstract: The article reviews the basics of the variable precision rough set and the Bayesian approaches to data dependency detection and analysis. The variable precision rough set and the Bayesian rough set theories are extensions of the rough set theory. They are focused on the recognition and modelling of set overlap-based, also referred to as probabilistic, relationships between sets. The set-overlap relationships are used to construct approximations of undefinable sets. The primary application of the approach is the analysis of weak data co-occurrence-based dependencies in probabilistic decision tables learned from data. The probabilistic decision tables are derived from data to represent the inter-data item connections, typically for the purposes of their analysis or data value prediction. The theory is illustrated with a comprehensive application example: the utilization of probabilistic decision tables for face image classification.
Keywords: rough sets, approximation space, probabilistic dependencies, variable precision rough sets, Bayesian rough sets, probabilistic decision tables, machine learning
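The set-overlap relationships mentioned above can be illustrated with a small sketch of the standard variable precision rough set (VPRS) approximations: an equivalence class is included in the β-lower approximation of a set X when its overlap ratio with X is at least β. This is an illustrative sketch of the textbook definitions, not code from the paper:

```python
# Hedged sketch of variable precision rough set approximations.
# classes: a partition of the universe into equivalence classes;
# X: the target set; beta: precision threshold in (0.5, 1].

def vprs_approximations(classes, X, beta):
    X = set(X)
    lower, upper = set(), set()
    for E in classes:
        E = set(E)
        ratio = len(E & X) / len(E)   # overlap ratio |E ∩ X| / |E|
        if ratio >= beta:
            lower |= E                # confidently inside X
        if ratio > 1.0 - beta:
            upper |= E                # possibly inside X
    return lower, upper

# A class with 3 of 4 members in X passes the beta = 0.75 test.
lower, upper = vprs_approximations([[1, 2, 3, 4], [5, 6]], [1, 2, 3, 7], 0.75)
print(sorted(lower), sorted(upper))
```

With beta = 1.0 this reduces to the classical Pawlak lower and upper approximations.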
DOI: 10.3233/FI-2013-904
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 193-207, 2013
Authors: Tsumoto, Shusaku | Hirano, Shoji
Article Type: Research Article
Abstract: This paper proposes a new framework for incremental induction of medical diagnostic rules based on an incremental sampling scheme and rule layers. When an example is appended, four possibilities can be considered. Thus, updates of accuracy and coverage are classified into four cases, which give two important inequalities of accuracy and coverage for induction of probabilistic rules. By using these two inequalities, the proposed method classifies a set of formulae into four layers: the rule layer, subrule layers (in and out), and the non-rule layer. Then, the obtained rule and subrule layers play a central role in updating probabilistic rules. If a new example contributes to an increase in the accuracy and coverage of a formula in the subrule layer, the formula is moved into the rule layer. If it contributes to a decrease for a formula in the rule layer, the formula is moved into the subrule layer. The proposed method was evaluated on a dataset regarding headaches, and the results show that it outperforms the conventional methods.
Keywords: incremental rule induction, rough sets, RHINOS, incremental sampling scheme, subrule layer
DOI: 10.3233/FI-2013-905
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 209-223, 2013
Authors: Touati, Hakim | Ras, Zbigniew W.
Article Type: Research Article
Abstract: Meta-action effects and selection are the fundamental core of successful action rule execution. All atomic action terms on the left-hand side of an action rule have to be covered by well-chosen meta-actions in order for it to be executed. The choice of meta-actions depends on the antecedent side of action rules; however, it also depends on their list of atomic actions that are outside of the action rule scope, seen as side effects. In this paper, we strive to minimize the side effects by decomposing the left-hand side of an action rule into executable action rules covered by a minimal number of meta-actions, resulting in a cascading effect. This process was tested and compared to original action rules. Experimental results show that side effects are diminished in comparison with the original meta-actions applied, while keeping a good execution confidence.
Keywords: Meta-actions, Action rules decomposition, Side effects
DOI: 10.3233/FI-2013-906
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 225-240, 2013
Authors: Wang, Hui | Düntsch, Ivo | Trindade, Luis
Article Type: Research Article
Abstract: In this paper we review Lattice Machine, a learning paradigm that “learns” by generalising data in a consistent, conservative and parsimonious way, and has the advantage of being able to provide additional reliability information for any classification. More specifically, we review the related concepts such as hyper tuple and hyper relation, the three generalising criteria (equilabelledness, maximality, and supportedness) as well as the modelling and classifying algorithms. In an attempt to find a better method for classification in Lattice Machine, we consider the contextual probability, which was originally proposed as a measure for approximate reasoning when there is insufficient data. It was later found to be a probability function that has the same classification ability as the data generating probability, called the primary probability. It was also found to be an alternative way of estimating the primary probability without much model assumption. Consequently, a contextual probability based Bayes classifier can be designed. In this paper we present a new classifier that utilises the Lattice Machine model and generalises the contextual probability based Bayes classifier. We interpret the model as a dense set of data points in the data space and then apply the contextual probability based Bayes classifier. A theorem is presented that allows efficient estimation of the contextual probability based on this interpretation. The proposed classifier is illustrated by examples.
Keywords: Lattice machine, contextual probability, generalisation, classification
DOI: 10.3233/FI-2013-907
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 241-256, 2013
Authors: Pancerz, Krzysztof | Suraj, Zbigniew
Article Type: Research Article
Abstract: The aim of this paper is to present methods and algorithms for the decomposition of information systems. In the paper, decomposition with respect to reducts and the so-called global decomposition are considered. Moreover, coverings of information systems by components are discussed. An essential difference between the two kinds of decomposition can be observed. In general, global decomposition can deliver more components of a given information system. This fact can be treated as a kind of additional knowledge about the system. The proposed approach is based on rough set theory. To demonstrate the usefulness of this approach, we present an illustrative example coming from the economy domain. The discussed decomposition methods can be applied, e.g., to design and analysis of concurrent systems specified by information systems, to automatic feature extraction, as well as to control design of systems represented by experimental data tables.
Keywords: decomposition, information analysis, information system, knowledge representation, machine learning, reduct, rough sets
DOI: 10.3233/FI-2013-908
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 257-272, 2013
Authors: Kruczyk, Marcin | Baltzer, Nicholas | Mieczkowski, Jakub | Dramiński, Michał | Koronacki, Jacek | Komorowski, Jan
Article Type: Research Article
Abstract: An important step prior to constructing a classifier for a very large data set is feature selection. With many problems it is possible to find a subset of attributes that have the same discriminative power as the full data set. There are many feature selection methods, but in none of them are Rough Set models tied up with statistical argumentation. Moreover, known methods of feature selection usually discard shadowed features, i.e. those carrying the same or partially the same information as the selected features. In this study we present Random Reducts (RR) - a feature selection method which precedes classification per se. The method is based on the Monte Carlo Feature Selection (MCFS) layout and uses Rough Set Theory in the feature selection process. On synthetic data, we demonstrate that the method is able to select otherwise shadowed features of which the user should be made aware, and to find interactions in the data set.
Keywords: Feature selection, random reducts, rough sets, Monte Carlo
DOI: 10.3233/FI-2013-909
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 273-288, 2013
Authors: Pal, Jayanta Kumar | Ray, Shubhra Sankar | Pal, Sankar K.
Article Type: Research Article
Abstract: MicroRNAs (miRNA) are a kind of non-coding RNA that plays many important roles in eukaryotic cells. Investigations on miRNAs show that miRNAs are involved in cancer development in the animal body. In this article, a threshold-based method to check the condition (normal or cancer) of the miRNAs of a given sample/patient, using the weighted average distance between the normal and cancer miRNA expressions, is proposed. For each miRNA, the city block distance between two representatives, corresponding to scaled normal and cancer expressions, is obtained. The average of all such distances for different miRNAs is weighted by a factor to generate the threshold. The weight factor, which is cancer dependent, is determined through an exhaustive search by maximizing the F score during training. In a part of the investigation, a ranking algorithm for cancer specific miRNAs is also discussed. The performance of the proposed method is evaluated in terms of the Matthews Correlation Coefficient (MCC) and by plotting points (1 − Specificity vs. Sensitivity) in Receiver Operating Characteristic (ROC) space, besides the F score. Its efficiency is demonstrated on breast, colorectal, melanoma, lung, prostate and renal cancer data sets and it is observed to be superior to some of the existing classifiers in terms of the said indices.
Keywords: miRNA expression analysis, cancer detection, pattern recognition, bioinformatics
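The threshold construction described in the abstract can be sketched roughly as follows. The representation of the per-miRNA representatives as small vectors is an assumption made for illustration, and the weight factor would in practice come from the exhaustive F-score search the authors describe; this is not the authors' code:

```python
# Hedged sketch of the weighted-average-distance threshold described above.
# normal_reps / cancer_reps: one representative (vector) per miRNA,
# for scaled normal and cancer expressions respectively.

def city_block(a, b):
    # City block (L1, Manhattan) distance between two vectors.
    return sum(abs(x - y) for x, y in zip(a, b))

def threshold(normal_reps, cancer_reps, weight):
    # Average the per-miRNA distances, then scale by the learned
    # cancer-dependent weight factor to obtain the decision threshold.
    dists = [city_block(n, c) for n, c in zip(normal_reps, cancer_reps)]
    return weight * sum(dists) / len(dists)

# Two miRNAs with representative distances 2 and 4; weight 0.5 gives 1.5.
print(threshold([[0.0], [0.0]], [[2.0], [4.0]], 0.5))
```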
DOI: 10.3233/FI-2013-910
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 289-305, 2013
Authors: Kryszkiewicz, Marzena
Article Type: Research Article
Abstract: The cosine and Tanimoto similarity measures are typically applied in the areas of chemical informatics, bioinformatics, information retrieval, text and web mining, as well as in very large databases for searching sufficiently similar vectors. In the case of large sparse high dimensional data sets such as text or Web data sets, one typically applies inverted indices for the identification of candidates for vectors sufficiently similar to a given vector. In this article, we offer new theoretical results on how the knowledge about non-zero dimensions of real valued vectors can be used to reduce the number of candidates for vectors sufficiently cosine and Tanimoto similar to a given one. We illustrate and discuss the usefulness of our findings on a sample collection of documents represented by a set of a few thousand real valued vectors with more than ten thousand dimensions.
Keywords: sparse data sets, high dimensional data sets, the cosine similarity, the Tanimoto similarity, text mining, data mining, information retrieval, similarity joins, inverted indices
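The two similarity measures discussed above, computed over sparse vectors stored as dimension-to-value maps, can be sketched as below. This is a plain implementation for illustration; it does not include the inverted-index candidate pruning that is the subject of the paper:

```python
# Hedged sketch: cosine and Tanimoto similarity for sparse real-valued
# vectors represented as {dimension: value} dicts.
import math

def dot(u, v):
    # Iterate over the smaller dict; only shared non-zero dimensions
    # contribute to the product.
    if len(u) > len(v):
        u, v = v, u
    return sum(val * v[d] for d, val in u.items() if d in v)

def cosine(u, v):
    denom = math.sqrt(dot(u, u)) * math.sqrt(dot(v, v))
    return dot(u, v) / denom if denom else 0.0

def tanimoto(u, v):
    uv = dot(u, v)
    denom = dot(u, u) + dot(v, v) - uv
    return uv / denom if denom else 0.0

u = {0: 1.0, 2: 2.0}
print(cosine(u, u), tanimoto(u, u))   # a vector is maximally similar to itself
```

Both measures equal 1.0 for identical vectors and 0.0 for vectors with disjoint non-zero dimensions, which is what makes non-zero-dimension knowledge useful for pruning candidates.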
DOI: 10.3233/FI-2013-911
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 307-323, 2013
Authors: Kostek, Bozena | Kaczmarek, Andrzej
Article Type: Research Article
Abstract: This study aims to create an algorithm for assessing the degree to which songs belong to genres defined a priori. The algorithm is not aimed at providing unambiguous classification-labelling of songs, but at producing a multidimensional description encompassing all of the defined genres. It utilizes data derived from the most relevant examples belonging to a particular genre of music; for this condition to be met, the data must be appropriately selected. The algorithm is based on fuzzy logic principles, which are addressed further in the paper. The paper describes all steps of the experiments, along with examples of the analyses and the results obtained.
Keywords: Music Information Retrieval (MIR), Music genre classification, Music parametrization, Query systems, Intelligent decision systems
DOI: 10.3233/FI-2013-912
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 325-340, 2013
Authors: Wakulicz-Deja, Alicja | Nowak-Brzezińska, Agnieszka | Przybyła-Kasperek, Małgorzata
Article Type: Research Article
Abstract: This paper discusses issues related to the conflict analysis method, rough set theory, and the process of global decision-making on the basis of knowledge stored in several local knowledge bases. The value of rough set theory and conflict analysis applied in practical decision support systems with complex domain knowledge is demonstrated. Furthermore, examples of decision support systems with complex domain knowledge are presented in this article. The paper proposes a new approach to the organizational structure of a multi-agent decision-making system that operates on the basis of dispersed knowledge. In the presented system, the local knowledge bases are combined into groups in a dynamic way. We seek to designate groups of local bases on which the test object is classified to the decision classes in a similar manner. Then, a process of eliminating knowledge inconsistencies is applied to the created groups. Global decisions are made using one of the methods for the analysis of conflicts.
Keywords: knowledge bases, rough set theory, conflict analysis, decision support systems, cluster analysis, relation of friendship, relation of conflict
DOI: 10.3233/FI-2013-913
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 341-356, 2013
Authors: Peters, James F. | Ramanna, Sheela
Article Type: Research Article
Abstract: This paper introduces descriptive set patterns, which originated from our visits with Zdzisław Pawlak and Andrzej Skowron at Banacha and environs in Warsaw. This paper also celebrates the generosity and caring manner of Andrzej Skowron, who made our visits to Warsaw memorable events. The inspiration for the recent discovery of descriptive set patterns can be traced back to our meetings at Banacha. Descriptive set patterns are collections of near sets that arise rather naturally in the context of an extension of Solomon Leader's uniform topology, which serves as a base topology for compact Hausdorff spaces that are proximity spaces. The particular form of proximity space (called EF-proximity) reported here is an extension of the proximity space introduced by V. Efremovič during the first half of the 1930s. Proximally continuous functions, introduced by Yu.V. Smirnov in 1952, lead to pattern generation of comparable set patterns. Set patterns themselves were first considered by T. Pavlidis in 1968 and led to U. Grenander's introduction of pattern generators during the 1990s. This article considers descriptive set patterns in EF-proximity spaces and their application in digital image classification. Images belong to the same class, provided each image in the class contains set patterns that resemble each other. Image classification then reduces to determining whether a set pattern in a test image is near a set pattern in a query image.
Keywords: Descriptive set pattern, EF-proximity, Grenander pattern generator, near sets, proximally continuous function, proximity space, uniform topology
DOI: 10.3233/FI-2013-914
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 357-367, 2013
Authors: Nguyen, Sinh Hoa | Nguyen, Tuan Trung | Szczuka, Marcin | Nguyen, Hung Son
Article Type: Research Article
Abstract: This paper summarizes some of the recent developments in the application of rough sets and granular computing to hierarchical learning. We present the general framework of rough set based hierarchical learning. In particular, we investigate several strategies for choosing the appropriate learning algorithms for first-level concepts, as well as the learning methods for the intermediate concepts. We also propose some techniques for embedding domain knowledge into the granular, layered learning process in order to improve the quality of hierarchical classifiers. This idea, which has been envisioned and developed by Professor Andrzej Skowron over the last 10 years, has proven very efficient in many practical applications. Throughout the article, we illustrate the proposed methodology with three case studies in the area of pattern recognition. The studies demonstrate the viability of this approach for such problems as sunspot classification, hand-written digit recognition, and car identification.
Keywords: Concept approximation, granular computing, layered learning, hand-written digits, object identification, sunspot recognition, pattern recognition, classification
DOI: 10.3233/FI-2013-915
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 369-384, 2013
Authors: Liu, Yuchao | Li, Deyi | He, Wen | Wang, Guoyin
Article Type: Research Article
Abstract: Granular computing is one of the important methods for extracting knowledge from data and has produced notable achievements. However, it remains a puzzle for granular computing researchers how to imitate the human cognition process of automatically choosing reasonable granularities for dealing with difficult problems. In this paper, a Gaussian cloud transformation method is proposed to solve this problem, based on the Gaussian Mixture Model and the Gaussian Cloud Model. The Gaussian Mixture Model (GMM) is used to transform an original data set into a sum of Gaussian distributions, and the Gaussian Cloud Model (GCM) is used to represent the extension of a concept and measure its confusion degree. Extensive experiments on data clustering and image segmentation have been carried out to evaluate this method, and the results show its performance and validity.
Keywords: Granular computing, Gaussian Mixture Model, Gaussian Cloud Model, Data clustering, Image segmentation
DOI: 10.3233/FI-2013-916
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 385-398, 2013
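The GMM step mentioned in the abstract above can be illustrated with a plain one-dimensional EM sketch. The Gaussian Cloud Model refinement and the confusion-degree measure are not shown, and all names and the quantile-based initialization are assumptions:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_gmm_1d(data, k=2, iters=50):
    """Generic EM for a 1-D Gaussian mixture: transform a data set into a
    weighted sum of k Gaussian distributions, as the abstract describes."""
    lo, hi = min(data), max(data)
    # Deterministic initialization: means spread evenly over the data range.
    mus = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]
    sigmas = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            probs = [w * gauss_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            total = sum(probs) or 1e-12
            resp.append([p / total for p in probs])
        # M-step: re-estimate means, variances, and mixing weights.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-12
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-6)
            weights[j] = nj / len(data)
    return mus, sigmas, weights
```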
Authors: Pedrycz, Witold
Article Type: Research Article
Abstract: Fuzzy sets (membership functions) are numeric constructs. In spite of the underlying semantics of fuzzy sets (which is inherently linked with a higher level of abstraction), the membership grades and the processing of fuzzy sets themselves emphasize the numeric facets of all pursuits, stressing the numeric nature of membership grades and in this way reducing the interpretability and transparency of results. In this study, we advocate the idea of a granular description of membership functions where, instead of numeric membership grades, more interpretable granular descriptors (say, low membership, high membership, etc.) are introduced. Granular descriptors are formalized with the aid of various formal schemes available in Granular Computing, especially sets (intervals), fuzzy sets, and shadowed sets. We formulate the design of granular descriptors as a certain optimization task, elaborate on the solutions, and highlight some areas of application.
Keywords: Granular Computing, granular description of membership, rough sets, shadowed sets, optimization, granular fuzzy modeling
DOI: 10.3233/FI-2013-917
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 399-412, 2013
Authors: Lin, Tsau Young | Liu, Yong | Huang, Wenliang
Article Type: Research Article
Abstract: This paper explains the mathematics of large-scale granular computing (GrC), augmented with a new knowledge theory, by unifying rough set theories (RS) into one single concept, namely, neighborhood systems (NS). NS were first introduced in 1989 by T. Y. Lin to capture the concepts of “near” (topology) and “conflict” (security). Since 1996, when the term Granular Computing (GrC) was coined by T. Y. Lin to label Zadeh's vision, NS have been pushed into the “heart” of GrC. In 2011, LNS, the largest NS, was axiomatized; this implied that this set of axioms defines a new mathematics that realizes Zadeh's vision. The main messages are that this new mathematics is powerful and practical.
Keywords: granular computing, neighborhood system, central knowledge, rough set, topological space, variable precision rough set
DOI: 10.3233/FI-2013-918
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 413-428, 2013
Authors: Stepaniuk, Jaroslaw | Kopczynski, Maciej | Grzes, Tomasz
Article Type: Research Article
Abstract: In this paper we propose a combination of the capabilities of an FPGA based device and a PC for data processing using rough set methods. The presented architecture has been tested on exemplary data sets. The obtained results confirm a significant acceleration of computation time when hardware support for rough set operations is used, in comparison to a software implementation.
DOI: 10.3233/FI-2013-919
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 429-443, 2013
Authors: Ślęzak, Dominik | Synak, Piotr | Wojna, Arkadiusz | Wróblewski, Jakub
Article Type: Research Article
Abstract: We present analytic data processing technology derived from the principles of rough sets and granular computing. We show how the idea of approximate computations on granulated data has evolved toward a complete product supporting standard analytic database operations and their extensions. We refer to our previous works, where our query execution algorithms were described in terms of iteratively computed rough approximations. We explain how to interpret our data organization methods in terms of classical rough set notions such as reducts and generalized decisions.
Keywords: Analytic Data Processing Systems, Rough-Granular Computational Models
DOI: 10.3233/FI-2013-920
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 445-459, 2013
Authors: Lewandowski, Jacek | Rybiński, Henryk
Article Type: Research Article
Abstract: Polyhierarchical structures play an important role in artificial intelligence, especially in knowledge representation. The main problem with using them efficiently is the lack of efficient methods for accessing related nodes, which limits their practical applications. The proposed hybrid indexing approach generalizes various methods and makes it possible to combine them in a uniform manner within one index, which adapts to the particular topology of the data structure. This gives rise to a balance between the compactness of the index and fast responses to search requests. The correctness of the proposed method is formally shown, and its performance is evaluated. The results prove its high efficiency.
DOI: 10.3233/FI-2013-921
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 461-477, 2013
Authors: Zhang, Hao Lan | Liu, Jiming | Zhang, Yanchun
Article Type: Research Article
Abstract: Online social networks (OSN) are facing challenges since they have been extensively applied to different domains, including online social media, e-commerce, biological complex networks, financial analysis, and so on. One of the crucial challenges for OSN lies in information overload and network congestion. The demand for efficient knowledge discovery and data mining methods in OSN has been rising in recent years, particularly for online social applications such as Flickr, YouTube, Facebook, and LinkedIn. In this paper, a Belief-Desire-Intention (BDI) agent-based method has been developed to enhance the capability of mining online social networks. Current data mining techniques encounter difficulties in dealing with knowledge interpretation based on complex data sources. The proposed agent-based mining method overcomes network analysis difficulties, while enhancing knowledge discovery capability through its autonomy and collective intelligence.
Keywords: Online social networks, agent networks, AOC, BDI agents
DOI: 10.3233/FI-2013-922
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 479-494, 2013
Authors: Bazan, Jan G. | Buregwa-Czuma, Sylwia | Jankowski, Andrzej W.
Article Type: Research Article
Abstract: This paper investigates approaches to improving classifier quality through the application of domain knowledge. The expertise may be utilized at several levels of decision algorithms, such as: feature extraction, feature selection, the definition of temporal patterns used in the approximation of concepts (especially of complex spatio-temporal ones), the assignment of an object to a concept, and the measurement of object similarity. The incorporation of domain knowledge then results in a reduction of the size of the searched spaces. The work constitutes an overview of classifier building methods efficiently utilizing expertise, developed recently by Professor Andrzej Skowron's research group. The methods using domain knowledge intended to enhance the quality of classic classifiers, to identify behavioral patterns, and for automatic planning are discussed. Finally, it answers the question of whether the methods satisfy the hopes vested in them and indicates directions for future development.
Keywords: rough set, concept approximation, ontology of concepts, discretization, behavioral pattern identification, automated planning, wisdom technology
DOI: 10.3233/FI-2013-923
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 495-511, 2013
Authors: Froelich, Wojciech | Deja, Rafał | Deja, Grażyna
Article Type: Research Article
Abstract: The disease of diabetes mellitus has spread in recent years across the world, and has thus become an even more important medical problem. Despite numerous solutions already proposed, the problem of managing the glucose concentration in the blood of a diabetic patient still remains a challenge and raises interest among researchers. Data-driven models of glucose-insulin interaction are one of the recent directions of research. In particular, a data-driven model can be constructed using the idea of sequential patterns as the knowledge representation method. In this paper a new hierarchical, template-based approach for mining sequential patterns is proposed. The paper also proposes the use of functional abstractions for the representation and mining of clinical data. Due to the experts' knowledge involved in the construction of functional abstractions and sequential templates, the discovered underlying template-based patterns can be easily interpreted by physicians and are able to provide recommendations for medical therapy. The proposed methodology was validated by experiments using real clinical data on juvenile diabetes.
Keywords: data mining, sequential patterns, diabetes mellitus
DOI: 10.3233/FI-2013-924
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 513-528, 2013
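The core idea of matching a sequential template against clinical event sequences might be sketched as follows. The template representation and the support measure shown are generic sequential-pattern-mining conventions assumed for illustration, not the paper's hierarchical, functional-abstraction-based method:

```python
def matches_template(sequence, template):
    """True if `template` occurs as an ordered (not necessarily contiguous)
    subsequence of `sequence` of clinical events."""
    it = iter(sequence)
    # `event in it` advances the iterator, so order is enforced.
    return all(event in it for event in template)

def template_support(sequences, template):
    """Fraction of event sequences supporting the template -- the usual
    support measure in sequential pattern mining."""
    hits = sum(matches_template(s, template) for s in sequences)
    return hits / len(sequences)
```

For example, the template `["meal", "insulin"]` (hypothetical event names) is supported by any patient sequence in which a meal event is eventually followed by an insulin event.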
Authors: Krasuski, Adam | Wasilewski, Piotr
Article Type: Research Article
Abstract: We present a method for improving the detection of outlying Fire Service reports based on domain knowledge and dialogue with Fire & Rescue domain experts. An outlying report is considered to be an element which is significantly different from the remaining data. We follow the position of Professor Andrzej Skowron that effective algorithms in data mining and knowledge discovery in big data should incorporate interaction with domain experts and/or be domain oriented. Outliers are defined and searched for on the basis of domain knowledge and dialogue with experts. We face the problem of reducing high data dimensionality without losing the specificity and real complexity of the reported incidents. We solve this problem by introducing a knowledge-based generalization level intermediating between the analyzed data and the experts' domain knowledge. In our approach we use Formal Concept Analysis methods both for the generation of appropriate categories from data and as tools supporting communication with domain experts. We conducted two experiments in finding two types of outliers, in which outlier detection was supported by domain experts.
Keywords: outlier detection, formal concept analysis, fire service, granular computing
DOI: 10.3233/FI-2013-925
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 529-544, 2013
Authors: Qin, Linchan | Zhong, Ning | Lu, Shengfu | Li, Mi
Article Type: Research Article
Abstract: A lack of understanding of users' underlying decision making processes is the bottleneck of EB-HCI (eye movement-based human-computer interaction) systems. Meanwhile, considerable findings on the visual features of decision making have been derived from cognitive research over the past few years. A promising method of decision prediction in EB-HCI systems is presented in this article, inspired by looking behavior when a user makes a decision. Two features of visual decision making, gaze bias and pupil dilation, are considered in judging intentions. This method combines the history of eye movements on a given interface with the visual traits of users. Hence, it improves prediction performance in a more natural and objective way. We apply the method to an either-or choice making task on commercial Web pages to test its effectiveness. Although the result shows good predictive performance only for gaze bias and not for pupil dilation, it proves that using the visual traits of users is an effective approach to improving the performance of automatic triggering in EB-HCI systems.
DOI: 10.3233/FI-2013-926
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 545-560, 2013
Authors: Czyżewski, Andrzej | Lisowski, Karol
Article Type: Research Article
Abstract: Pawlak's flowgraph has been applied as a suitable data structure for the description and analysis of human behaviour in an area supervised by a multi-camera video surveillance system. Information contained in the flowgraph can easily be used to predict the consecutive movements of a particular object. Moreover, the flowgraph can support reconstructing an object's route from past video images. However, such a flowgraph, with its accumulative nature, needs a certain period of time to adapt to changes in the behaviour of objects, which can be caused, e.g., by closing a door or placing another obstacle forcing people to pass it by. In this paper a method for reducing the time needed for flowgraph adaptation is presented. Additionally, a distance measure between flowgraphs is introduced in order to determine whether carrying out the adaptation process is needed.
DOI: 10.3233/FI-2013-927
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 561-576, 2013
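A flowgraph accumulated from observed routes, with normalized branch probabilities and a simple distance between two flowgraphs, might look like the sketch below. The L1 distance over branch probabilities is one plausible choice for illustration, not necessarily the measure introduced in the paper, and all names are assumptions:

```python
from collections import defaultdict

def build_flowgraph(routes):
    """Accumulate a Pawlak-style flowgraph as transition counts from
    observed object routes (lists of camera/zone ids)."""
    counts = defaultdict(lambda: defaultdict(int))
    for route in routes:
        for a, b in zip(route, route[1:]):
            counts[a][b] += 1
    return counts

def branch_probs(counts):
    """Normalize the outgoing counts of each node into branch probabilities
    (the certainty factors of Pawlak's flowgraphs)."""
    probs = {}
    for a, outs in counts.items():
        total = sum(outs.values())
        probs[a] = {b: c / total for b, c in outs.items()}
    return probs

def flowgraph_distance(p1, p2):
    """L1 distance between the branch probabilities of two flowgraphs; a
    large value would signal that adaptation is worth triggering."""
    d = 0.0
    for a in set(p1) | set(p2):
        outs = set(p1.get(a, {})) | set(p2.get(a, {}))
        for b in outs:
            d += abs(p1.get(a, {}).get(b, 0.0) - p2.get(a, {}).get(b, 0.0))
    return d
```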
Article Type: Other
Citation: Fundamenta Informaticae, vol. 127, no. 1-4, pp. 577-579, 2013
IOS Press, Inc.
6751 Tepper Drive
Clifton, VA 20124
USA
Tel: +1 703 830 6300
Fax: +1 703 830 2300
[email protected]
For editorial issues, like the status of your submitted paper or proposals, write to [email protected]
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
The Netherlands
Tel: +31 20 688 3355
Fax: +31 20 687 0091
[email protected]
For editorial issues, permissions, book requests, submissions and proceedings, contact the Amsterdam office [email protected]
Inspirees International (China Office)
Ciyunsi Beili 207(CapitaLand), Bld 1, 7-901
100025, Beijing
China
Free service line: 400 661 8717
Fax: +86 10 8446 7947
[email protected]
For editorial issues, like the status of your submitted paper or proposals, write to [email protected]
If you need help with publishing or have any suggestions, please email: [email protected]