Scalable, efficient and correct learning of Markov boundaries under the faithfulness assumption

Jose M. Peña*, Johan Björkegren, Jesper Tegner

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Scopus citations

Abstract

We propose an algorithm for learning the Markov boundary of a random variable from data without having to learn a complete Bayesian network. The algorithm is correct under the faithfulness assumption, scalable, and data efficient. The last two properties are important because we aim to apply the algorithm to identify the minimal set of random variables that is relevant for probabilistic classification in databases with many random variables but few instances. We report experiments with synthetic and real databases with 37, 441, and 139,352 random variables showing that the algorithm performs satisfactorily.
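The abstract describes the algorithm only at a high level. As a rough illustration of the general grow-shrink family of Markov boundary searches to which such algorithms belong (a sketch, not the paper's specific method), the following assumes a conditional-independence oracle `indep(target, v, cond)` supplied by the caller, e.g. a statistical test on data:

```python
def markov_boundary(target, variables, indep):
    """Grow-shrink style Markov boundary search (a generic sketch).

    `indep(x, y, cond)` is an assumed conditional-independence oracle:
    True iff x is independent of y given the set of variables `cond`.
    Correctness of this family of searches relies on faithfulness.
    """
    mb = set()
    # Growing phase: admit any variable that is still dependent on the
    # target given the current boundary estimate. Repeat until stable,
    # since earlier additions can expose new dependencies.
    changed = True
    while changed:
        changed = False
        for v in variables:
            if v != target and v not in mb and not indep(target, v, mb):
                mb.add(v)
                changed = True
    # Shrinking phase: remove false positives that are independent of
    # the target given the rest of the candidate boundary.
    for v in list(mb):
        if indep(target, v, mb - {v}):
            mb.remove(v)
    return mb
```

With a hypothetical oracle encoding the chain A → T → B → D (plus an isolated C), the search returns {A, B, D's screen-off leaves it out once B is in the boundary}; in practice the oracle would be a statistical test whose data efficiency is exactly what the abstract emphasizes for few-instance databases.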

Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 136-147
Number of pages: 12
State: Published - Dec 1 2005
Event: 8th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, ECSQARU 2005 - Barcelona, Spain
Duration: Jul 6 2005 - Jul 8 2005

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3571 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 8th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, ECSQARU 2005
Country: Spain
City: Barcelona
Period: 07/6/05 - 07/8/05

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
