SCALING UP THE ACCURACY OF K-NEAREST-NEIGHBOUR CLASSIFIERS: A NAIVE-BAYES HYBRID

L. Jiang, D. Wang, Z. Cai, S. Jiang, and X. Yan

References

[1] C. Merz, P. Murphy, & D. Aha, UCI repository of machine learning databases, Department of ICS, University of California, Irvine.
[2] I.H. Witten & E. Frank, Data mining: Practical machine learning tools and techniques, Second Edition (San Francisco, CA: Morgan Kaufmann, 2005).
[3] T.M. Mitchell, Instance-based learning, Chapter 8, Machine learning (New York: McGraw-Hill, 1997).
[4] R. Kohavi & G. John, Wrappers for feature subset selection, Artificial Intelligence, 97(1–2), 1997, 273–324, special issue on relevance.
[5] P. Langley & S. Sage, Induction of selective Bayesian classifiers, Proc. 10th Conf. on Uncertainty in Artificial Intelligence, Seattle, WA, USA, 1994, 399–406.
[6] L. Jiang, H. Zhang, Z. Cai, & J. Su, Evolutional Naive Bayes, Proc. 1st Int. Symp. on Intelligent Computation and its Applications, ISICA 2005, 344–350, China University of Geosciences Press.
[7] D. Aha, Tolerating noisy, irrelevant, and novel attributes in instance-based learning algorithms, International Journal of Man-Machine Studies, 36(2), 1992, 267–287.
[8] E.-H. Han, G. Karypis, & V. Kumar, Text categorization using weight adjusted k-nearest neighbour classification, Technical report, Department of CS, University of Minnesota, 1999.
[9] N. Friedman, D. Geiger, & M. Goldszmidt, Bayesian network classifiers, Machine Learning, 29, 1997, 131–163.
[10] Z. Huang, A fast clustering algorithm to cluster very large categorical data sets in data mining, Proc. SIGMOD Workshop on Research Issues on Data Mining and Knowledge Discovery, Tucson, AZ, 1997.
[11] M.J. Greenacre, Theory and applications of correspondence analysis (London: Academic Press, 1984).
[12] C. Stanfill & D. Waltz, Toward memory-based reasoning, Communications of the ACM, 29(12), 1986, 1213–1228.
[13] D.R. Wilson & T.R. Martinez, Improved heterogeneous distance functions, Journal of Artificial Intelligence Research, 6, 1997, 1–34.
[14] Z. Xie, W. Hsu, Z. Liu, & M. Lee, SNNB: A selective neighborhood based Naive Bayes for lazy learning, Proc. 6th Pacific-Asia Conf. on KDD, Taipei, Taiwan, 2002, 104–114, Springer.
[15] P. Langley, W. Iba, & K. Thompson, An analysis of Bayesian classifiers, Proc. 10th National Conf. on Artificial Intelligence, San Jose, CA, 1992, 223–228, AAAI Press.
[16] J.L. Bentley, Multidimensional binary search trees used for associative searching, Communications of the ACM, 18(9), 1975, 509–517.
[17] R. Kohavi, Scaling up the accuracy of Naive-Bayes classifiers: A decision-tree hybrid, Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining (KDD-96), Portland, OR, USA, 1996, 202–207, AAAI Press.
[18] Z. Zheng & G.I. Webb, Lazy learning of Bayesian rules, Machine Learning, 41(1), 2000, 53–84.
[19] E. Frank, M. Hall, & B. Pfahringer, Locally weighted Naive Bayes, Proc. Conf. on Uncertainty in Artificial Intelligence, Acapulco, Mexico, 2003, 249–256, Morgan Kaufmann.
[20] L. Jiang, H. Zhang, & J. Su, Instance Cloning Local Naive Bayes, Proc. 18th Canadian Conf. on Artificial Intelligence, CAI 2005, LNAI 3501, Victoria, Canada, 280–291, Springer.
[21] L. Jiang, H. Zhang, & Z. Cai, Discriminatively improving Naive Bayes by evolutionary feature selection, Romanian Journal of Information Science and Technology, ROMJIST, 9(3), 2006, 163–174.
